7 Sources
[1]
A writer is suing Grammarly for turning her and other authors into 'AI editors' without consent | TechCrunch
Grammarly released a controversial feature last week that uses AI to simulate editorial feedback, making it seem like you're getting a critique from novelist Stephen King, the late scientist Carl Sagan, or tech journalist Kara Swisher. But Grammarly did not get permission from the hundreds of experts it included in this feature, called "Expert Review," to use their names. One of the affected writers, journalist Julia Angwin, has filed a class action lawsuit against Superhuman, the parent company that owns Grammarly, arguing that the company violated the privacy and publicity rights of her and the other writers it impersonated. A class action lawsuit allows writers to join Angwin in her case. "I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise," Angwin said in a statement. The situation is more than a little ironic -- Angwin has spent her career leading investigations into tech companies' impacts on privacy. Other critics of this kind of technology, like renowned AI ethicist Timnit Gebru, were also included in Grammarly's "expert review." The "expert review" feature, available only to subscribers paying $144 a year, predictably fails to deliver on the promise of thoughtful feedback. Casey Newton, the founder and editor of the tech newsletter Platformer and another person impersonated by Grammarly, fed one of his articles into the tool and got feedback from Grammarly's approximation of tech journalist Kara Swisher. Grammarly's imitation of Swisher produced "feedback" so generic that it raises the question of why the company would go through the rigmarole of using these writers' likenesses in the first place. Here is what Grammarly's approximation of Kara Swisher told him: "Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?" 
Newton relayed the message from the AI approximation of Kara Swisher to the actual, real human being, Kara Swisher. "You rapacious information and identity thieves better get ready for me to go full McConaughey on you," Swisher texted Newton (referring to Grammarly). "Also, you suck." Grammarly has since disabled the "expert review" feature, according to a LinkedIn post by Superhuman CEO Shishir Mehrotra. While Mehrotra offered an apology, he continued to defend the idea of the feature. "Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal," he wrote. "For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has."
[2]
Grammarly pulls AI tool mimicking Stephen King and other writers
Writing tool Grammarly has disabled an AI feature which mimicked personas of prominent writers, including Stephen King and scientist Carl Sagan, following a backlash from the people impersonated, including a multi-million dollar lawsuit. The Expert Review function, which offered writing feedback "inspired by" the styles of famous authors and academics, was taken down this week by Superhuman, the tech firm which runs Grammarly. The feature was met with resistance from writers who found their names and reputations used as "AI personas" without their consent. Shishir Mehrotra, the firm's chief executive, apologised on LinkedIn, acknowledging the tool had "misrepresented" the voices of experts. Julia Angwin, an investigative journalist whose persona was one of those used in the feature, has filed a class-action federal lawsuit in the US against Superhuman and Grammarly. Writing on social media, Angwin said: "I'm suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent." According to legal filings cited by Wired, the action was launched on Wednesday in the Southern District of New York. It states Angwin, on behalf of herself and others in a similar position, "challenges Grammarly's misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman". The lawsuit argues it is "unlawful to appropriate peoples' names and identities for commercial purposes," and seeks to stop the platform from attributing advice to experts that they "never gave". The damages sought exceed $5m (£3.7m), Wired reports. Grammarly was founded in 2009 as a writing-review tool and began integrating a suite of generative-AI tools in August 2025. Part of this was the Expert Review function, which appears to have launched without the famous named personas that were introduced later.
Although the company began rebranding to Superhuman in October, Grammarly was kept as the name of its main service. As criticism mounted in recent days, Superhuman initially said it would maintain the feature but allow those named to "opt-out", according to The Verge. Wes Fenlon, a gaming journalist whose persona was used in the tool, wrote on Bluesky: "Opt-out via email is a laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility." Mehrotra said in response to the backlash: "Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. "This kind of scrutiny improves our products, and we take it seriously." He said the AI agent had drawn on "publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices". The firm's chief executive apologised, adding: "We hear the feedback and recognize we fell short on this."
[3]
Grammarly removes AI Expert Review feature mimicking writers after backlash
Feature generated editing suggestions inspired by well-known authors and academics, prompting a class-action lawsuit over the use of real names without consent. Grammarly has disabled a controversial AI feature that imitated the style of prominent writers and academics, and is facing a multimillion dollar lawsuit from those whose identities were used without consent. The feature, called Expert Review, used generative AI to produce feedback supposedly inspired by writers including the novelist Stephen King, the astrophysicist and author Neil deGrasse Tyson, and the late scientist Carl Sagan. A class-action lawsuit has been filed in the southern district of New York against Superhuman, Grammarly's parent company. The lawsuit argues that using a person's name for commercial gain without permission is unlawful, and claims that damages due across the plaintiff class are in excess of $5m (£3.7m). Since Grammarly's feature came to public attention, a number of writers have spoken out about being included. "[Grammarly] curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription," wrote tech journalist Casey Newton, who was among those featured in the software. "That's a deliberate choice to monetise the identities of real people without involving them, and it sucks." Vanessa Heggie, an associate professor at the University of Birmingham, posted on LinkedIn about how fellow academic David Abulafia, who died in January, was included too, describing it as "obscene". Investigative journalist Julia Angwin, who appeared in the software, is the lead plaintiff in the lawsuit. "I had thought of deepfakes as something that happens to celebrities, mostly around images," Angwin told the BBC. "Editing is a skill ... it's my livelihood, but it's not something I've ever thought about anyone trying to steal from me before. I didn't even think it was steal-able."
Angwin's lawyer, Peter Romer-Friedman, told the BBC the case had already generated interest from writers. "We've heard from over 40 people in the last 24 hours since we filed the suit," he said. Grammarly was launched in 2009 as a spelling and grammar check tool, but began adding a range of generative AI features last year, including Expert Review. "Expert Review agent offers subject-matter expertise and personalised, topic-specific feedback to elevate writing that meets rigorous academic or professional standards tailored to the user's field," Grammarly wrote in a blog post announcing the feature. Superhuman's chief executive, Shishir Mehrotra, apologised in a LinkedIn post. "Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices," he wrote. "We hear the feedback and recognise we fell short on this. I want to apologise and acknowledge that we'll rethink our approach going forward." In response to the lawsuit, Mehrotra told the BBC: "We announced that Expert Review was being taken down for a redesign before the claim was filed, and in its short lifespan it had very little usage. We are sorry, and we will rethink our approach going forward." Despite this, he said that the legal claims were "without merit" and Superhuman will "strongly defend against them".
[4]
Grammarly removes AI feature which used real authors' identities, faces class action lawsuit
Grammarly has pulled its AI-powered Expert Review feature after being called out for using journalists' and authors' identities without permission. The writing assistant software is now facing a class action lawsuit accusing it of exploiting writers' names for its own profit. Launched alongside seven other AI agents last August, Expert Review was available on Grammarly's Free and $12 Pro plans at launch, and was promoted as providing users with feedback on the content of their writing. A page on Grammarly's website which has since been taken down stated that Expert Review "[drew] on insights from subject-matter experts and trusted publications," and provided AI-generated feedback "based on publicly available expert content" (via Wayback Machine). Users could even personalise which "expert" sources Grammarly drew from by selecting the names of specific authors. "Expert Review agent offers subject-matter expertise and personalized, topic-specific feedback to elevate writing that meets rigorous academic or professional standards tailored to the user's field," Grammarly wrote in its blog post announcing the feature. Grammarly's Expert Review came to attention last week after Wired reported that the feature was offering AI-generated edits in the name of real writers and academics, both living and dead. The tool's user guide does provide the disclaimer that its references to experts "are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities." However, the same page also claims that Expert Review offers "insights from leading professionals, authors, and subject-matter experts." Many of the subject-matter experts in question have not taken kindly to Grammarly using their identities without their knowledge or consent.
"[Grammarly] curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription," wrote Platformer founder Casey Newton, who was among those invoked by Grammarly. "That's a deliberate choice to monetize the identities of real people without involving them, and it sucks." "This has got to be some kind of defamation or something," historian Mar Hicks posted to Bluesky, having shared a screenshot of their identity being included in Expert Review. "You can't just steal people's IP and then pretend they're saying something they never said." Responding to the backlash, Grammarly told Platformer on Monday that it would allow writers to email them to opt out of inclusion in its Expert Review feature. This prompted further criticism, as experts were not told that Grammarly was using their identity, nor had they granted it permission in the first place. Impacted authors wouldn't know that they needed to opt out unless a Grammarly user saw their name while using Expert Review and informed them. Further, providing the option to opt out did not address Grammarly's use of dead authors' identities. Deceased writers used by Expert Review reportedly included astronomer Carl Sagan and intersectional academic bell hooks. "So Grammerly [sic] is violating the memory of bell hooks AND making AI versions of the rest of us before we're even dead," wrote researcher Sarah J. Jackson. "Someone tell me who to sue, not even joking." Shishir Mehrotra, CEO of Grammarly developer Superhuman, subsequently announced on Wednesday that it was pulling Expert Review offline. However, he also indicated that the company intends to eventually bring it back in some form. "Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices," Mehrotra posted to LinkedIn.
"As context, the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward. "After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented -- or not represented at all." "That this even existed in the first place suggests a total disconnect from normal human society," climate writer Ketan Joshi replied to Mehrotra's post. "It should've been immediately obvious that this was exploitative and creepy and cruel." "With all the talk about how AI 'builds from" (read: 'steals') existent content, creating a tool that actually makes up 'advice' from real people who spend their lives caring about writing and expertise... it's hard to fathom," wrote the New York Times' Dan Saltzstein. "There should be consequences to this beyond 'we're going to reevaluate.' A promise to never do anything like this again, at minimum." Though Grammarly has made no such pledge at present, it is already facing repercussions for its actions that go beyond reputational damage. New York Times writer Julia Angwin filed a class action lawsuit against Superhuman on Wednesday, having discovered that Grammarly's Expert Review had used her identity without her consent. The law firm representing her, Peter Romer-Friedman Law PLLC, has put out a call for any writers who were impacted to join the class action. Though it isn't clear exactly how many writers' identities Grammarly allegedly misappropriated, it may be a sizable cohort. 
Looking at tech journalists alone, The Verge reports that Expert Review named several members of its editorial staff, as well as writers from Wired, Bloomberg, The New York Times, The Atlantic, PC Gamer, Gizmodo, Digital Foundry, Tom's Guide, and Mashable's sister sites IGN and Rock Paper Shotgun. Angwin has claimed that "lots of folks" have already made inquiries about joining the lawsuit. "I'm taking this action on behalf of not just myself, but everyone who spent years and decades refining their skills as a writer and editor, only to find an AI impersonating them," Angwin wrote in a LinkedIn post. "For over 100 years, New York law has prohibited companies from using a person's name for commercial purposes without their consent," said Peter Romer-Friedman of Peter Romer-Friedman Law PLLC. "The law does not provide an exception for technology companies or AI." Filed in a New York District Court, the class action is seeking damages as well as an injunction to prevent Grammarly from using writers' identities without their consent. Mashable has reached out to Superhuman for comment.
[5]
Grammarly's AI tool mimicked experts without their consent. Now it's being sued
Grammarly, the tool meant to assist with spelling, grammar, and plagiarism detection, is being sued over a new AI tool called "Expert Review." The tool offers editing suggestions from established authors and writers -- ostensibly not a bad idea -- except that none of those people consented to being involved in the first place. The tool offers real-time writing tips from celebrities like Stephen King and Neil deGrasse Tyson, as well as journalists, like The Markup founder Julia Angwin, who filed the class action lawsuit against Grammarly's parent company Superhuman after she alleged the tool used her likeness without her permission: "I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise," Angwin said in a statement. From photorealistic deepfakes on Sora to scammers using chatbots to swindle users out of money, AI has already been bending reality and using people's likenesses at worrying speeds. The Grammarly lawsuit shows how professional writers' likenesses are also up for grabs -- in addition to having that same technology threaten their very careers and livelihoods. This is the latest battle in the war over what legal and ethical boundaries AI should not cross.
[6]
Shishir Mehrotra's Push to Remake Grammarly Shows the Risks of A.I. Leadership
Facing a class action and fierce backlash, Mehrotra now says the feature will be redesigned with a "better approach" to involving real experts. Shishir Mehrotra's push to turn Grammarly, now parent-branded as Superhuman, into an A.I. powerhouse has delivered a major backlash. In less than a year as CEO, he has rebranded the company, led acquisitions and pushed an aggressive pivot into A.I. agents. But one of those tools, an A.I. "Expert Review" feature launched last summer, has turned from showcase to liability. Expert Review offered users writing suggestions in the style of well-known authors and journalists, presenting feedback "from" figures who had never agreed to be involved. The tool, since disabled, has become a cautionary example of companies racing into A.I. without fully weighing reputational, legal and ethical risks. Introduced in August, Expert Review promised tailored feedback from marquee names such as author Stephen King and astrophysicist Neil deGrasse Tyson. The catch was that none of the featured "experts" had consented to the project, which generated advice using A.I. models trained on their work and likenesses. Writers quickly noticed and pushed back. "No one asked me for permission to use my name in this way, much less compensate me for whatever expert-reviewing labor my A.I. clone was apparently now doing on my behalf," wrote journalist Casey Newton. In other cases, the feature invoked people who could not possibly have agreed. Vanessa Heggie, an associate professor at the University of Birmingham, called the appearance of historian David Abulafia, who died in January, "obscene."
Earlier this month, the company was hit with a class action lawsuit alleging it used writers' and journalists' names for commercial purposes without consent, seeking more than $5 million in damages. "We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them," Mehrotra said in a statement. On March 11, the same day the suit was filed, Mehrotra posted a public apology on LinkedIn and announced that Expert Review would be paused. "We hear the feedback and recognize we fell short on this," he wrote, describing the tool as part of Superhuman's effort to bring A.I. directly to users. "I want to apologize and acknowledge that we'll rethink our approach going forward." The controversy came about a year into Mehrotra's tenure. He became CEO after Superhuman acquired Coda, the productivity startup he co-founded and led, at the end of 2024. Before that, he held senior roles at YouTube, GoogleTV and Microsoft. Since taking over, he has made clear that his ambition is to expand the company far beyond its roots as a grammar checker. That strategy crystallized last October, when he announced that Grammarly's parent company would be renamed Superhuman, while the writing tool would keep the Grammarly name. The rebrand was meant to signal a broader push into workplace productivity and email products as competition in A.I.-powered writing tools heats up. Mehrotra has backed the vision with a rapid cadence of releases. Expert Review arrived as part of a bundle of A.I. agents promising to predict essay grades, surface relevant citations, simulate reader reactions and check for plagiarism. It is still unclear whether this ambitious pivot will pay off for Grammarly, which started 17 years ago as a grammar and spelling assistant and was last valued at $13 billion in 2021. In response to the backlash, Mehrotra has suggested the feature will eventually return in a different form.
He has said "there is a better approach to bringing experts onto our platform" and that Superhuman is working on a new version of Expert Review designed to "provide significantly more benefit to both users and experts." Whether those experts will be willing to partner with the company after this episode remains an open question.
[7]
Grammarly Shuts Down 'Expert Review' AI After Backlash Over Fake Author Feedback
Grammarly Pulls AI Tool That Mimicked Feedback From Real Writers After Public Criticism Grammarly has disabled its 'Expert Review' AI feature after receiving massive backlash from writers and journalists who said the tool generated feedback that appeared to come from real authors without their consent. The feature provided users with writing advice as if it were coming from well-known experts, quickly raising concerns about misrepresentation and identity misuse. Critics argued that the feature could mislead users by making AI-generated feedback appear to have been written by real professionals. The company confirmed the feature has been taken down and said it is reviewing its design and considering changes to ensure greater transparency in the future.
Grammarly pulled its controversial AI Expert Review feature after journalist Julia Angwin filed a class-action lawsuit alleging the company used her identity and those of hundreds of other writers without permission. The tool, which simulated editorial feedback from figures like Stephen King and Carl Sagan, sparked backlash over unauthorized use of individuals' likenesses for commercial gain.
Grammarly has disabled its AI Expert Review feature and is now defending against a class-action lawsuit that alleges the company exploited hundreds of writers' identities without their consent. Journalist Julia Angwin filed the lawsuit in the Southern District of New York against Superhuman, Grammarly's parent company, seeking damages exceeding $5 million [2][3]. The case challenges what Angwin describes as the monetization of personal identities through an AI tool mimicking experts she and others never authorized.
"I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise," Angwin said in a statement [1]. The lawsuit argues that Grammarly violated privacy and publicity rights by using generative AI to mimic the writing styles of prominent figures including Stephen King, Carl Sagan, Neil deGrasse Tyson, and AI ethicist Timnit Gebru without obtaining their permission.

Launched in August 2025 as part of Grammarly's suite of generative AI tools, the Expert Review feature was available to subscribers paying $144 annually [1]. The tool promised to deliver AI-powered writing feedback "inspired by" famous authors and academics, allowing users to select specific experts whose style they wanted the system to emulate [4].
According to Grammarly's now-removed promotional materials, Expert Review drew "on insights from subject-matter experts and trusted publications" and provided feedback "based on publicly available expert content" [4]. Shishir Mehrotra, Superhuman's CEO, explained that the AI agent used "publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices" [2].

However, the feature's output fell dramatically short of its promises. Casey Newton, founder of the tech newsletter Platformer and another person impersonated by the tool, tested it and received feedback so generic it raised questions about why Grammarly bothered using real names at all [1]. When tech journalist Kara Swisher, whose identity was also used without consent, learned what the AI approximation of her had suggested, she responded: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck" [1].
The unauthorized use of individuals' likenesses sparked immediate outrage from the writing and academic communities. Gaming journalist Wes Fenlon, whose persona was used in the tool, wrote on Bluesky: "Opt-out via email is a laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility" [2]. This criticism came after Grammarly initially responded to complaints by offering an opt-out option rather than shutting down the feature entirely.

The situation became even more troubling when it emerged that the feature included deceased writers such as Carl Sagan, bell hooks, and historian David Abulafia, who died in January [3][4]. Vanessa Heggie, an associate professor at the University of Birmingham, described Abulafia's inclusion as "obscene".
Newton articulated the core issue: "[Grammarly] curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription. That's a deliberate choice to monetize the identities of real people without involving them, and it sucks" [3][4].
The lawsuit filed by Angwin's lawyer, Peter Romer-Friedman, argues that it is "unlawful to appropriate peoples' names and identities for commercial purposes" and seeks to stop Grammarly from attributing advice to experts that they "never gave" [2]. Within 24 hours of filing, Romer-Friedman reported hearing from over 40 people interested in joining the case [3].

Mehrotra apologized in a LinkedIn post, stating: "Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. We hear the feedback and recognize we fell short on this" [1]. He announced that Expert Review would be disabled while the company reimagines the feature "to make it more useful for users, while giving experts real control over how they want to be represented—or not represented at all" [4].

Despite the apology, Mehrotra told the BBC that the legal claims are "without merit" and that Superhuman will "strongly defend against them". He also noted that "in its short lifespan it had very little usage".
This case highlights growing ethical and legal challenges around generative AI and the misappropriation of professional identities. Angwin told the BBC: "I had thought of deepfakes as something that happens to celebrities, mostly around images. Editing is a skill... it's my livelihood, but it's not something I've ever thought about anyone trying to steal from me before. I didn't even think it was steal-able" [3].

The situation underscores concerns about how AI companies handle privacy and publicity rights while racing to deploy new features. As one writer noted, the case represents "the latest battle in the war over what legal and ethical boundaries AI should not cross" [5]. The outcome of this class-action lawsuit could establish important precedents for how companies can, or cannot, use real people's identities to train and market AI systems, particularly when those identities represent years of professional expertise that writers depend on for their livelihoods.