OpenAI Researcher Quits Over ChatGPT Ads, Warns of Facebook-Style Privacy Erosion

Reviewed by Nidhi Govil


Zoë Hitzig, a former OpenAI researcher, resigned this week citing deep concerns about ChatGPT ads and their potential to manipulate users. She warned that OpenAI risks repeating Facebook's mistakes by building economic incentives that could override privacy promises, turning what she calls an unprecedented archive of human candor into a tool for manipulation.

OpenAI Researcher Quits Over Advertising Strategy

Zoë Hitzig, an economist and researcher who spent two years at OpenAI, announced her resignation in a guest essay published in The New York Times on Wednesday, just two days after the company began testing advertisements inside ChatGPT [1]. She quit over what she describes as a fundamental shift in the company's priorities, expressing concern that OpenAI "seems to have stopped asking the questions I'd joined to help answer" [1]. During her tenure, Hitzig, who holds a junior fellowship at the Harvard Society of Fellows, helped shape how AI models were built and priced while contributing to early safety policies before formal standards existed [4].

Source: Ars Technica

The Archive of Human Candor and User Privacy Concerns

Hitzig's objection centers not on advertising in AI chatbots itself, but on the nature of the data at stake. She describes ChatGPT as having "generated an archive of human candor that has no precedent" [2], noting that users share medical fears, relationship problems, and religious beliefs with the chatbot. This level of disclosure occurs "because people believed they were talking to something that had no ulterior agenda" [1]. Hitzig pointed to evidence that approximately one million people per week discuss mental distress with ChatGPT [3], underscoring how sensitive these conversations are. Data privacy advocates worry this accumulated record creates "a potential for manipulating users in ways we don't have the tools to understand, let alone prevent" [2].

Source: Gizmodo

Facebook Comparison and Economic Incentives

The Facebook comparison forms a central pillar of Hitzig's warning about prioritizing profit over user privacy. She drew direct parallels to Facebook's early history, when the social media company promised users control over their data and the ability to vote on policy changes [1]. Those pledges eroded over time, and the Federal Trade Commission later found that privacy changes Facebook marketed as giving users more control actually did the opposite [3]. Hitzig warned that OpenAI is "building an economic engine that creates strong incentives to override its own rules" [1], suggesting a slippery slope toward manipulative advertising practices despite current promises.

OpenAI's Current Ad Model and Subscription Tiers

OpenAI announced in January that it would test ChatGPT ads in the US for users on its free and $8-per-month "Go" subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would remain ad-free [1]. The company stated that ads would appear at the bottom of responses, be clearly labeled, and would not influence the chatbot's answers. In a blog post, OpenAI promised to "keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers" [2]. Sam Altman defended the revenue model as a way to offer AI to people who cannot afford subscription costs [3].

Trust Erosion and Independent Oversight

While Hitzig believes OpenAI's current privacy promises are genuine, she has lost trust in the company's ability to maintain that position long-term, especially since nothing legally binds it to follow through [2]. She pointed to concerns about sycophancy in ChatGPT, where the model became overly flattering to users and potentially contributed to "chatbot psychosis" and self-harm, as evidence that OpenAI may already be optimizing for engagement despite claiming otherwise [2]. To address ethical AI development concerns, Hitzig recommended establishing real independent oversight ("not a blog post of principles") or placing user data under independent control through a trust with "a legal duty to act in users' interests" [3]. She argued against what she called a "false choice" between restricting people to ads or giving them nothing, suggesting profits from one service could offset costs for another [3].

Source: Inc.

Industry Response and Privacy Nihilism

Anthropic attempted to capitalize on the controversy with a Super Bowl ad featuring the tagline "Ads are coming to AI. But not to Claude," depicting AI conversations being interrupted by intrusive advertisements [3]. However, AdWeek found the ad ranked in the bottom 3% of likability across all Super Bowl spots, with public response marked more by confusion than support [2]. This tepid reaction may reflect what experts describe as privacy nihilism: two decades of social media have created a sense of resignation about data collection [2]. Forrester research found that 83% of surveyed users would continue using the free tier of ChatGPT despite advertisements [2], suggesting Hitzig faces an uphill battle in mobilizing public concern about AI safety and user data protection.
