4 Sources
[1]
OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path
On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI's advertising strategy risks repeating the same mistakes that Facebook made a decade ago.

"I once believed I could help the people building A.I. get ahead of the problems it would create," Hitzig wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer."

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often "because people believed they were talking to something that had no ulterior agenda." She called this accumulated record of personal disclosures "an archive of human candor that has no precedent."

She also drew a direct parallel to Facebook's early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite. She warned that a similar trajectory could play out with ChatGPT: "I believe the first iteration of ads will probably follow those principles. But I'm worried subsequent iterations won't, because the company is building an economic engine that creates strong incentives to override its own rules."

Hitzig's resignation adds another voice to a growing debate over advertising in AI chatbots. OpenAI announced in January that it would begin testing ads in the US for users on its free and $8-per-month "Go" subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would not see ads. The company said ads would appear at the bottom of ChatGPT responses, be clearly labeled, and would not influence the chatbot's answers.
[2]
OpenAI Researcher Quits, Warns Its Unprecedented 'Archive of Human Candor' Is Dangerous
In a week of pretty public exits from artificial intelligence companies, Zoë Hitzig's case is, arguably, the most attention-grabbing. The former OpenAI researcher announced her break with the company in an op-ed in the New York Times in which she warned not of some vague, unnamed crisis like Anthropic's recently departed safeguards lead, but of something real and imminent: OpenAI's introduction of advertisements to ChatGPT and the information it will use to target those sponsored messages.

There's an important distinction that Hitzig makes early in her op-ed: it's not advertising itself that is the issue, but rather the potential use of a vast amount of sensitive data that users have shared with ChatGPT without giving a second thought to how it could be used to target them or who could get their hands on it. "For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda," she wrote. "People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent."

OpenAI has at least acknowledged this concern. In a blog post published earlier this year announcing that it will be experimenting with advertising, the company promised to keep a firewall between the conversations users have with ChatGPT and the ads they get served by the chatbot: "We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers."

Hitzig believes that is true... for now. But she has lost trust in the company to maintain that position over the long term, especially because nothing actually binds it to follow through on the promised privacy. The researcher argued that OpenAI is "building an economic engine that creates strong incentives to override its own rules," and warned the company may already be backing away from previous principles. For instance, OpenAI has stated that it doesn't optimize ChatGPT to maximize engagement, a metric that would be of particular interest to a company trying to keep people locked into conversations so it can serve them more ads. But a statement isn't binding, and it's not clear the company has actually lived up to it. Last year, the company ran into an issue of sycophancy with its model: it became overly flattering to its users and, at times, fed into delusional thinking that may have contributed to "chatbot psychosis" and self-harm. Experts have warned that sycophancy isn't just a mistake in model tuning but an intentional way to get users hooked on talking to the chatbot.

In a way, OpenAI is just speedrunning the Facebook model of promising users privacy over their data and then rug-pulling them when it turns out that data is quite valuable. Hitzig is trying to get out in front of the train before it picks up too much steam, and recommended OpenAI adopt a model that will actually guarantee protections for users: either creating some sort of real, binding independent oversight or putting users' data under the control of a trust with a "legal duty to act in users' interests." Either option sounds great, though Meta did the former by creating the Meta Oversight Board and then routinely ignored and flouted it. Hitzig also, unfortunately, may have an uphill battle in getting people to care.
Two decades of social media have created a sense of privacy nihilism in the general public. No one likes ads, but most people aren't bothered by them enough to do anything. Forrester found that 83% of people surveyed would continue to use the free tier of ChatGPT despite the introduction of advertisements. Anthropic tried to score some points with the public by hammering OpenAI over its decision to insert ads into ChatGPT with a high-profile Super Bowl spot this weekend, but the public response was more confusion than anything: per AdWeek, the ad ranked in the bottom 3% of likability across all Super Bowl spots. Hitzig's warning is well-founded, and the concern she raises is real. But getting the public to care about its own privacy after years of being beaten into submission by algorithms is a real lift.
[3]
OpenAI researcher quits over slippery slope of ChatGPT ads - SiliconANGLE
OpenAI researcher Zoë Hitzig says she left her position on Monday, resigning over the recent introduction of advertisements inside ChatGPT and what she believes is a move in the wrong direction for the company. In a guest essay in The New York Times titled "OpenAI Is Making the Mistakes Facebook Made. I Quit," Hitzig said she'd spent two years as a researcher guiding safety policies and shaping how AI models were built and priced. Since the introduction of ads, she believes OpenAI may no longer be interested in addressing some of the bigger issues AI poses to society.

She doesn't believe ads in themselves are a bad thing - models are expensive to run and ads create revenue. Nonetheless, she still has "deep reservations about OpenAI's strategy." She explained that ChatGPT has "generated an archive of human candor that has no precedent." Users chat with the product about everything in the world, often about their most personal issues - evident in the million people a week who talk to ChatGPT about mental distress, and the hordes of citizens who may or may not be afflicted with "AI psychosis." Hitzig believes people talk so candidly because they believe the chatbot has "no ulterior agenda." Their conversations cover everything from "medical fears" and "relationship problems" to "beliefs about God and the afterlife."

Her bone of contention, of course, is that this archive of people's most personal reflections is now ripe for manipulation where advertising is concerned. She draws comparisons with Facebook Inc.'s early days, when the company told its users they would have control over their data and be able to vote on policies. That, she says, didn't last long, citing the Federal Trade Commission investigation that exposed Facebook's less-than-noble privacy practices. A company starts with the best intentions, or at least appears to, and then devolves into unfettered profit-seeking. "I believe the first iteration of ads will probably follow those principles," she said. "But I'm worried subsequent iterations won't, because the company is building an economic engine that creates strong incentives to override its own rules."

The ad debate crossed over into the public sphere last weekend during the Super Bowl, when OpenAI's competitor Anthropic PBC ran ads with the tagline, "Ads are coming to AI. But not to Claude." The spot depicted consumers' private AI conversations being rudely interrupted by irritating ads. OpenAI isn't mentioned, but the implication was crystal clear. OpenAI CEO Sam Altman responded, saying his company would never run an ad as imposing as what was depicted: "We would obviously never run ads in the way Anthropic depicts them." He says ads are a way of offering AI to people who cannot afford the subscription cost for a more advanced model of ChatGPT.

Hitzig believes ads are a slippery slope, but she argues there doesn't have to be what she calls the "false choice" between the "lesser of two evils" - subjecting people without the money for a subscription to ads, or giving them nothing at all. "Tech companies can pursue options that could keep these tools broadly available while limiting any company's incentives to surveil, profile, and manipulate its users," she wrote. "So the real question is not ads or no ads. It is whether we can design structures that avoid both excluding people from using these tools and potentially manipulating them as consumers. I think we can." The solution?
She believes profits from one service or customer base can be used to offset the costs of another. Failing that, she believes there should be real oversight - "not a blog post of principles" - that ensures user data isn't mined to manipulate consumers. A third option, perhaps wishful thinking, might be to put "users' data under independent control through a trust or cooperative with a legal duty to act in users' interests."
[4]
A Former OpenAI Researcher Just Issued a Warning About ChatGPT Ads -- and the Facebook Comparison Is Grim
OpenAI rolled out advertisements on ChatGPT this week, and some observers are already drawing uneasy parallels to the early days of Facebook. In a New York Times opinion piece, Zoë Hitzig, a former OpenAI researcher, warned that the company's new direction could create serious risks for users. Hitzig spent two years at OpenAI helping shape its models, influencing how they were built and priced, and contributing to early safety policies before formal standards existed. She joined the company, she wrote, with a mission to "help the people building AI get ahead of the problems it would create." But the arrival of ads, she said, made her realize OpenAI had stopped asking the very questions she was brought on to address. For Hitzig, the issue isn't simply that ChatGPT now includes advertising. She acknowledged that AI systems are enormously expensive to develop and maintain, and that ads are an obvious source of revenue. The deeper problem, she argued, lies in the strategy behind them.
Zoë Hitzig, a former OpenAI researcher, resigned this week citing deep concerns about ChatGPT ads and their potential to manipulate users. She warned that OpenAI risks repeating Facebook's mistakes by building economic incentives that could override privacy promises, turning what she calls an unprecedented archive of human candor into a tool for manipulation.
Zoë Hitzig, an economist and researcher who spent two years at OpenAI, announced her resignation in a guest essay published in The New York Times on Wednesday, just two days after the company began testing advertisements inside ChatGPT [1]. She quit over what she describes as a fundamental shift in the company's priorities, expressing concern that OpenAI "seems to have stopped asking the questions I'd joined to help answer" [1]. During her tenure, Hitzig, who holds a junior fellowship at the Harvard Society of Fellows, helped shape how AI models were built and priced while contributing to early safety policies before formal standards existed [4].
Hitzig's objection centers not on advertising in AI chatbots itself, but on the nature of the data at stake. She describes ChatGPT as having "generated an archive of human candor that has no precedent" [2], noting that users share medical fears, relationship problems, and religious beliefs with the chatbot. This level of disclosure occurs "because people believed they were talking to something that had no ulterior agenda" [1]. Hitzig pointed to evidence that approximately one million people per week discuss mental distress with ChatGPT [3], highlighting the sensitive nature of these conversations. She warned that advertising built on this accumulated record creates "a potential for manipulating users in ways we don't have the tools to understand, let alone prevent" [2].
The Facebook comparison forms a central pillar of Hitzig's warning about prioritizing profit over user privacy. She drew direct parallels to Facebook's early history, when the social media company promised users control over their data and the ability to vote on policy changes [1]. Those pledges eroded over time, and the Federal Trade Commission later found that privacy changes Facebook marketed as giving users more control actually did the opposite [3]. Hitzig warned that OpenAI is "building an economic engine that creates strong incentives to override its own rules" [1], suggesting a slippery slope toward manipulative advertising practices despite current promises.

OpenAI announced in January that it would test ChatGPT ads in the US for users on its free and $8-per-month "Go" subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would remain ad-free [1]. The company stated that ads would appear at the bottom of responses, be clearly labeled, and would not influence the chatbot's answers. In a blog post, OpenAI promised to "keep your conversations with ChatGPT private from advertisers" and pledged never to sell user data to advertisers [2]. Sam Altman defended the revenue model as a way to offer AI to people who cannot afford subscription costs [3].
While Hitzig believes OpenAI's current privacy promises are genuine, she has lost trust in the company's ability to maintain that position long-term, especially since nothing legally binds it to follow through [2]. She pointed to sycophancy in ChatGPT, where the model became overly flattering to users and may have contributed to "chatbot psychosis" and self-harm, as evidence that OpenAI may already be optimizing for engagement despite claiming otherwise [2]. To address these concerns, Hitzig recommended establishing real independent oversight, "not a blog post of principles," or placing user data under independent control through a trust with "a legal duty to act in users' interests" [3]. She argued against what she called a "false choice" between serving ads to those who cannot pay and giving them nothing, suggesting profits from one service could offset the costs of another [3].
Anthropic attempted to capitalize on the controversy with a Super Bowl ad featuring the tagline "Ads are coming to AI. But not to Claude," depicting AI conversations being interrupted by intrusive advertisements [3]. However, AdWeek found the ad ranked in the bottom 3% of likability across all Super Bowl spots, with public response marked more by confusion than support [2]. This tepid reaction may reflect a broader privacy nihilism: two decades of social media have left the public resigned to data collection [2]. Forrester research found that 83% of surveyed users would continue using the free tier of ChatGPT despite advertisements [2], suggesting Hitzig faces an uphill battle in mobilizing public concern over privacy and user data.