Grammarly faces class-action lawsuit over AI Expert Review feature that mimicked writers without consent

Reviewed by Nidhi Govil


Grammarly pulled its controversial AI Expert Review feature after journalist Julia Angwin filed a class-action lawsuit alleging the company used her identity, and those of hundreds of other writers, without permission. The tool, which simulated editorial feedback from figures like Stephen King and Carl Sagan, sparked backlash over the unauthorized use of individuals' likenesses for commercial gain.

Grammarly Faces Legal Action Over Unauthorized AI Feature

Grammarly has disabled its AI Expert Review feature and is now defending against a class-action lawsuit that alleges the company exploited hundreds of writers' identities without their consent. Journalist Julia Angwin filed the lawsuit in the Southern District of New York against Superhuman, Grammarly's parent company, seeking damages exceeding $5 million [2][3]. The case challenges what Angwin describes as the monetization of personal identities through an AI tool mimicking experts she and others never authorized.

Source: TechCrunch

"I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise," Angwin said in a statement [1]. The lawsuit argues that Grammarly violated privacy and publicity rights by using generative AI to mimic the writing styles of prominent figures including Stephen King, Carl Sagan, Neil deGrasse Tyson, and AI ethicist Timnit Gebru without obtaining their permission.

How the AI-Powered Writing Feedback Tool Operated

Launched in August 2025 as part of Grammarly's suite of generative AI tools, the Expert Review feature was available to subscribers paying $144 annually [1]. The tool promised to deliver AI-powered writing feedback "inspired by" famous authors and academics, allowing users to select specific experts whose style they wanted the system to emulate [4].

Source: Analytics Insight

According to Grammarly's now-removed promotional materials, Expert Review drew "on insights from subject-matter experts and trusted publications" and provided feedback "based on publicly available expert content" [4]. Shishir Mehrotra, Superhuman's CEO, explained that the AI agent used "publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices" [2].

However, the feature's output fell dramatically short of its promises. Casey Newton, founder of the tech newsletter Platformer and another person impersonated by the tool, tested it and received feedback so generic it raised questions about why Grammarly bothered using real names at all [1]. When tech journalist Kara Swisher, whose identity was also used without consent, learned what the AI approximation of her had suggested, she responded: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck" [1].

Backlash Intensifies Over Mimicked Experts Without Consent

The unauthorized use of individuals' likenesses sparked immediate outrage from the writing and academic communities. Gaming journalist Wes Fenlon, whose persona was used in the tool, wrote on BlueSky: "Opt-out via email is a laughably inadequate recourse for selling a product that verges on impersonation and profits on unearned credibility" [2]. This criticism came after Grammarly initially responded to complaints by offering an opt-out option rather than shutting down the feature entirely.

The situation became even more troubling when it emerged that the disabled AI feature included deceased writers such as Carl Sagan, bell hooks, and historian David Abulafia, who died in January [3][4]. Vanessa Heggie, an associate professor at the University of Birmingham, described Abulafia's inclusion as "obscene".

Source: Fast Company

Newton articulated the core issue: "[Grammarly] curated a list of real people, gave its models free rein to hallucinate plausible-sounding advice on their behalf, and put it all behind a subscription. That's a deliberate choice to monetize the identities of real people without involving them, and it sucks" [3][4].

Legal Claims and Company Response

The lawsuit, filed by Angwin's lawyer Peter Romer-Friedman, argues that it is "unlawful to appropriate peoples' names and identities for commercial purposes" and seeks to stop Grammarly from attributing advice to experts that they "never gave" [2]. Within 24 hours of filing, Romer-Friedman reported hearing from over 40 people interested in joining the case [3].

Mehrotra apologized in a LinkedIn post, stating: "Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. We hear the feedback and recognize we fell short on this" [1]. He announced that Expert Review would be disabled while the company reimagines the feature "to make it more useful for users, while giving experts real control over how they want to be represented—or not represented at all" [4].

Despite the apology, Mehrotra told the BBC that the legal claims are "without merit" and Superhuman will "strongly defend against them." He also noted that "in its short lifespan it had very little usage".

Broader Implications for AI and Identity Rights

This case highlights growing ethical and legal challenges around generative AI and the misappropriation of professional identities. Angwin told the BBC: "I had thought of deepfakes as something that happens to celebrities, mostly around images. Editing is a skill... it's my livelihood, but it's not something I've ever thought about anyone trying to steal from me before. I didn't even think it was steal-able" [3].

The situation underscores concerns about AI companies violating privacy and publicity rights while racing to deploy new features. As one writer noted, the case represents "the latest battle in the war over what legal and ethical boundaries AI should not cross" [5]. The outcome of this class-action lawsuit could establish important precedents for how companies can, or cannot, use real people's identities to train and market AI systems, particularly when those identities represent years of professional expertise that writers depend on for their livelihoods.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited