Generative AI isn't so mysterious anymore. Lots of people have tried ChatGPT, and Google is putting AI answers right at the top of billions of searches. Still, for a lot of people, the story of AI feels divorced from the actual, current state of computing. While insiders post benchmarks, share rumors about new models, and wonder aloud if they're in a bubble and if that bubble might be popping, most tech companies are still trying to figure out if and how this new technology fits into their products.
In some ways, the big tech companies are all over the place on AI. Alphabet's CEO is saying that it could be "more profound" than "fire or electricity or anything that we've done in the past." Apple's Tim Cook describes AI features merely as "helpful." Sam Altman is "feeling the AGI," while Elon Musk thinks AI is going to be great for scientific discovery as long as it doesn't get so "woke" that it ends up being "deadly." On one matter, however, the entire industry seems to be in agreement: AI is great for summarizing things.
Summarization is less glamorous than talking, thinking machines, but it's still a pretty good trick. Google is rolling out AI features in Gemini and across Android, Chrome, and its websites to summarize users' emails, documents, and meetings, along with tools for summarizing websites and YouTube videos. Microsoft is building similar features into its products with summarization for Word, PowerPoint, and Excel documents, as well as meeting transcripts and email threads. LinkedIn will summarize users' feeds to "take on the hard work" of ... browsing LinkedIn. Summarization is AI's big hammer, and every digital interface looks like a nail. Amazon is summarizing reviews; Facebook is summarizing comments; Slack will summarize conversations; Reddit will soon start summarizing itself. In less visible but highly consequential contexts, companies like Epic are using AI to summarize health records and patient visits for medical professionals. If you're a tech company looking for a way to build AI into your app, summarization is a no-brainer. Let's say, for example, that you have a popular app for finding and rating hiking trails:
Ah, hmmm. How about a popular social platform full of people with an interest in sports and news, attached to an AI company?
To be fair, as summarization tasks go, these are sort of hard. The AllTrails listing for the trail to the summit of Mount Everest is full of jokes, so much so that a couple of months after this screenshot was taken the trail was deleted entirely; Grok is synthesizing its news summary from posts on X, a place where people tend to post everything about the news except the news itself. So-called generative AI is deeply linked to summarization as a concept, and modern AI tools can do impressive work on large documents and data sets. It makes sense, in other words, that companies would give this a shot -- summarization is something these models can handle impressively in the right contexts.
So far, though, it's not clear that tech companies know what those contexts are. On Amazon, review summaries are serviceable but vague to the point of meaninglessness -- much as popular products on Amazon almost invariably have ratings between 4 and 5 stars, their AI-generated review summaries tend to emphasize that while most users like the product, some say it's not perfect. When they're more specific, they also tend to be weird, demanding user fluency in AI-ese. An Amazon AI review summary for a refurbished iPhone 15, for example, says that customers like the phone's "battery health," but that "some users have reported that the charger is dirty" and that "opinions are mixed on screen scratches."
In Gmail, Google is offering users the option to summarize emails. In cases where the email is long but uncomplicated -- marketing messages and newsletters, routine updates from workplaces, automatic communications from a bank or an insurance company -- it does the job pretty well, although it's more impressive than useful. In cases where the email is shorter and the sender has made efforts to be concise, the feature feels pointless or worse. A few examples from my own inbox: A recent summary of an invitation to a birthday party included a date but messed up the location; a brief exchange about a nice Father's Day note was summarized as a relative being "interested in seeing John Herrman's special puppies," possibly because it included photos of my children in animal face paint; an email to class parents with the subject line "Half Day Friday, June 14th 11:20am dismissal" was summarized by Google's state-of-the-art AI as "[Teacher's Name] informs staff that Friday, June 14th is a half day and dismissal is at 11:20," which was both longer than the email's completely adequate subject line and introduced a small error.
One problem with using AI to summarize everything is that it gets stuff wrong -- not most of the time, but far more than you'd want in the roles it's playing (paralegal, nurse aide, personal assistant, intern). Another problem is that if the products I've been testing recently are any indication, the plan is to summarize lots of things, many of which are already in some way summarized, having been produced with brevity and a specific audience in mind.
Take Apple Intelligence, the first version of which is in testing with a small group of iPhone users. At its last event, the company showed off how new iPhones would soon summarize emails, phone calls, and websites. In a few days of using it, though, the feature that stood out to me was its attempts to summarize text messages and email notifications, which frequently turned out either wrong or weird. I'd text with a friend, joking about how an Olympic gold medalist used to ride in Central Park just like we did; Apple would summarize the thread as about a "Central Park runner's life journey." Casual group texts surfaced summaries with comically off-base interpretations -- a discussion about a soccer player changing teams turned into a notification about "expressing love for someone and discussing potential purchases" -- while more actionable messages got decontextualized into a meaningless new form of summary slop. Other testers have noticed as well:
Apple Intelligence summarized a text message in a group chat from "We're leaving next wknd for Illinois / Michigan" to "Leaving for Illinois/Michigan next weekend," which was five characters shorter but no longer attached to the sender's name, turning what would have been a legible notification into a useless one. A series of joyous, emoji-filled texts from my wife in response to videos of my daughter swimming got summarized like this:
Which is honestly sort of funny! But not something I'll leave on (or pay for) if given the choice.
The worry and/or joke when tech companies started teasing AI text generation and summarization at the same time was that, before long, everyone would be using AI to send overlong messages for their recipients' AIs to digest back to a manageable length. Maybe it would be an arms race. Maybe it would result in the spammification of personal communication. Maybe it would just work itself out!
More than a year later, as the tech industry prepares to present the entire world to its users in summary, the reality is slightly more complicated. Automated verbosity really is a problem in some contexts, and automated summary is a possible solution. In contexts where users already know they need it -- parsing giant spreadsheets, catching up on low-stakes meetings or long message threads at work -- these features are already popular and appreciated. On parts of the internet where users are constantly confronted with overabundant useless content, or in workplaces where communication is wasteful and inefficient, some of these features will be a relief, a second layer of defense against content that's a degree or two removed from spam.
In contexts where people are communicating more deliberately, though -- where they're actually trying to talk to one another and put care into what they're saying and how they're saying it -- summarization, in its current state at least, feels inadequate, out of place, and counterproductive, and that's when it doesn't mess up. Deployed on everything, Apple and Google's AIs end up assuming strange roles, acting like personal assistants for administrative work but also for ... talking to your friends and family? It's mediation for mediation's sake. Like a lot of first-wave AI tools, these are features that exist because they're newly possible, not necessarily because they make sense.
One optimistic theory of consumer AI is that it could automate drudgery and make time for more worthwhile tasks, making quick work of work and leaving more space for the things that really matter in life. What's happening now -- if not instead of that, at least in addition to it -- is that tech companies are rolling out features that automate that stuff, too: the texts you actually want to read; the threads you might enjoy; the jokes that look, to a machine literally instructed to perform as "an expert at summarizing posts," like inefficiency.
It's plausible that these companies figure out a balance here, as regular users actually encounter this stuff, relegating text summarization and generation to the real contexts where it's effective, accurate, and unintrusive. For now, though, in assuming that everything should be summarized, they risk the opposite effect: Making everything feel like work.