16 Sources
[1]
OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path
On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced.

She wrote that OpenAI's advertising strategy risks repeating the same mistakes that Facebook made a decade ago. "I once believed I could help the people building A.I. get ahead of the problems it would create," Hitzig wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer."

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often "because people believed they were talking to something that had no ulterior agenda." She called this accumulated record of personal disclosures "an archive of human candor that has no precedent."

She also drew a direct parallel to Facebook's early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite. She warned that a similar trajectory could play out with ChatGPT: "I believe the first iteration of ads will probably follow those principles. But I'm worried subsequent iterations won't, because the company is building an economic engine that creates strong incentives to override its own rules."

Hitzig's resignation adds another voice to a growing debate over advertising in AI chatbots.
OpenAI announced in January that it would begin testing ads in the US for users on its free and $8-per-month "Go" subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would not see ads. The company said ads would appear at the bottom of ChatGPT responses, be clearly labeled, and would not influence the chatbot's answers.
[2]
Perplexity switches sides in AI ad wars
Another executive did not rule out a return to advertising in the future, but said serving ads was "misaligned with what the users want" and may not be needed for the company to thrive. "We are in the accuracy business, and the business is giving the truth, the right answers," they said.

The pivot sets Perplexity firmly on the anti-ad side of the industry's emerging divide over how AI should make money. Some, like Perplexity, hope subscriptions will be enough, and Anthropic has committed to keeping its chatbot Claude ad-free. Others, like OpenAI, have embraced ads; last week the company started testing advertising for free ChatGPT users. The dispute has moved into the public arena as well, with Anthropic airing attack ads clearly targeting ChatGPT during the Super Bowl, which OpenAI CEO Sam Altman called "dishonest."
[3]
Perplexity walks away from ads to differentiate from OpenAI and Google
In brief: Perplexity has taken a rare step in the crowded generative AI landscape: the company is walking away from advertising entirely. The San Francisco-based startup, valued at $18 billion, has decided that maintaining user trust outweighs any near-term revenue gains from ads, even as industry giants pursue increasingly aggressive monetization strategies.

The company quietly phased out sponsored content late last year, ending an experiment that began in 2024 when labeled promotions occasionally appeared beneath chatbot responses. On Tuesday, executives confirmed to the Financial Times that Perplexity would not pursue advertising further, citing trust and accuracy as the cornerstones of its business. One executive explained that for users to continue relying on, and paying for, the service, they must trust that the results are objective. "We are in the accuracy business," the executive said, adding that the company's mission is "giving the truth, the right answers."

Perplexity's refusal to adopt advertising comes as many of its competitors move in the opposite direction. OpenAI recently began testing ads in the free version of ChatGPT, displaying sponsored links below outputs while insisting these do not influence responses. Google's AI Overviews in Search already integrate ads in certain answers, though its Gemini chatbot remains ad-free for now. Anthropic, the developer of Claude, has publicly declared it will keep its chatbot free of advertising.

Perplexity views advertising as fundamentally misaligned with the role of a trusted AI assistant. Even with visible disclaimers separating sponsored results, company leaders believe ads could prompt users to second-guess the neutrality of every response. "The challenge with ads," one executive said, "is that a user would just start doubting everything."
Instead, Perplexity's revenue strategy relies on subscriptions. The company reports more than 100 million users and annualized revenues around $200 million, largely from paid tiers ranging from $20 to $200 per month. A free version is also offered to attract new users. Executives said the ad decision reflects a deliberate choice to strengthen the product's reliability rather than chase advertising income. The company has also experimented with shopping integrations, allowing users to compare products directly through its platform. Unlike Google or OpenAI, however, Perplexity has not monetized these features and takes no commissions on sales. For now, the company's stance makes it one of the few major AI firms resisting the pressure to commercialize user attention. While it may revisit the idea in the future, leadership insists that advertising is currently incompatible with the trust required for AI-driven search.
[4]
Perplexity Executives Think Ads Will Butcher Trust in AI
Perplexity is abandoning its advertising efforts, cautioning that accompanying chatbot answers with ads could lead users to distrust the product. Ads in AI chatbots are a contentious topic, and a clear divide is forming as the industry tries to make sense of a still-uncertain road to profitability.

Google has shown ads in its AI Mode and AI Overviews features for months now, and while Gemini is still ad-free for the foreseeable future, executives have indicated that ads will be a natural next step for the chatbot. Last week, OpenAI began testing ads in ChatGPT, with CEO Sam Altman claiming that an ads business would make the free ChatGPT offering financially sustainable. For its part, rival Anthropic has not shied away from roasting OpenAI for the decision, including with four Super Bowl ads that made fun of ads in AI chatbots and a manifesto promising never to include ads in Claude. "Including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking," Anthropic shared in a letter earlier this month, adding that even if the ads don't influence AI responses, they would still "introduce an incentive to optimize for engagement: for the amount of time people spend using Claude and how often they return. These metrics aren't necessarily aligned with being genuinely helpful."

Perplexity was one of the first major AI providers to introduce ads into its product, but it phased them out late last year, and executives seem to have shifted their thinking to align more with Anthropic's camp. "A user needs to believe this is the best possible answer, to keep using the product and be willing to pay for it," an unnamed Perplexity executive said at a media roundtable, per the Financial Times. "The challenge with ads is that a user would just start doubting everything... which is why we don't see it as a fruitful thing to focus on right now."
The AI search startup will have to make money somehow, though. Another Perplexity executive said the company could revisit ads in the future, but it might also just "never ever need to do ads," per the FT. That will depend on whether its bets on subscriptions and business sales pan out as expected. According to Business Insider, the company is looking to expand its enterprise sales team and wants to target large businesses, finance professionals, doctors, and CEOs to generate a reliable revenue stream in place of ads.
[5]
Perplexity Abandons AI Advertising Strategy Over Trust Worries
AI company Perplexity is stepping away from advertising over concerns that it will erode user trust, despite moves by rivals to introduce ads as an alternative money-making strategy. Perplexity was one of the first AI services to embrace ads, running tests in 2024 in which sponsored answers appeared beneath the chatbot's responses. That approach, however, was phased out last year, and executives at the company now say they don't plan to revisit it, according to the Financial Times. "A user needs to believe this is the best possible answer, to keep using the product and be willing to pay for it," a Perplexity executive told the publication.

The report follows OpenAI's move earlier this month to show ads to ChatGPT users who have a free account or a low-cost Go subscription. OpenAI has said ads will not influence the answers that ChatGPT provides, nor will it provide advertisers with content from ChatGPT conversations. Anthropic, the maker of Claude, recently mocked OpenAI for its decision to show ads to users and has said it has no plans to do the same. The company argues that including ads in Claude would not be in line with its mission of creating a helpful assistant for work and deep thinking, and that users should not need to second-guess whether an AI is being helpful or "subtly steering the conversation towards something monetizable." Google features advertising in AI Mode and in the AI Overviews summaries on its traditional search results, but has not introduced ads into its Gemini chatbot so far.

Advertising is one strategy AI companies have been exploring to generate revenue from users and reassure investors, as the cost of training and running large language models continues to climb with no profit yet to show for it.
[6]
ChatGPT ads are here -- and I noticed the first brands all have one thing in common
When I first heard advertisements were coming to ChatGPT, I wasn't surprised. The platform already has an app store and integrated shopping in the chatbot, so ads seemed inevitable. As someone with my finger on the pulse of AI, it's clear to me that OpenAI does things differently. ChatGPT has been riddled with privacy concerns, and now, with a campaign to boycott it completely, it's clear the people-pleasing chatbot is riding the struggle bus. CEO Sam Altman even called a "Code Red" this past December to fend off a surging Google. But instead of retreating, OpenAI has narrowed its focus, and this first wave of carefully curated advertisers reveals exactly who OpenAI is betting on for its future.

When I looked closer at the first wave of brands testing advertising inside ChatGPT, something interesting jumped out. It wasn't what they sell, but who they're trying to reach and what OpenAI thinks the "ChatGPT user" looks like. The earliest advertisers reportedly include companies like Adobe, Audible, Target, Williams-Sonoma, Ford, Mazda, Mrs. Meyer's and even luxury watchmaker Audemars Piguet. Even at first glance, that lineup doesn't feel random. There's a clear pattern: they all target the "aspirational thinker." None of these brands are discount-only, impulse-buy products. And none are edgy, controversial or built around shock value. The first wave of brands slated to advertise on ChatGPT all share a specific lane: mass-premium, taste-driven, self-improving.

The through-line? These brands speak to people who see themselves as thoughtful, capable and upwardly mobile. In other words, the exact kind of user who might spend time inside ChatGPT. Advertising inside a chatbot is different from placing a banner ad on a news site. ChatGPT users aren't passively scrolling. They're actively asking questions, which creates the kind of intent that is pure gold for these specific brands, especially ones built around improvement, creativity and lifestyle optimization.
It's not hard to imagine a user asking for help designing something and seeing an Adobe placement. Or asking for book recommendations and encountering Audible. In other words, these companies aren't betting on randomness. They're betting on intent.

The one thing the first ChatGPT ad brands have in common isn't industry. It's audience. Collectively, they target shoppers who care about aesthetics and brand reputation. OpenAI is going after the "smart, curious, slightly aspirational" consumer. That tells us something important about how OpenAI views its user base. ChatGPT isn't positioning itself as an anything-goes ad marketplace, at least not yet. The early lineup suggests a carefully curated, brand-safe environment designed to attract companies that want to associate themselves with intelligence, creativity and thoughtful decision-making.

If you're worried about pop-ups interrupting your flow, you can breathe a sigh of relief. Based on the initial rollout this week, OpenAI is taking a surprisingly conservative approach to the UI. And if you're worried about your AI responses being "bought," OpenAI has made a significant promise: ads will not influence the core logic of the answer. The "organic" response is generated first; the ad is a separate unit injected at the bottom. If you want a totally clean experience, however, your options are narrowing.

The arrival of ads in ChatGPT isn't just about monetization. It's about identity. The brands that show up first help define what a platform becomes. If the early wave had been crypto exchanges, gambling apps or viral direct-to-consumer startups, the narrative would feel very different. Instead, we're seeing reputation-conscious brands that align neatly with a productivity-first, education-forward audience. OpenAI seems to believe ChatGPT users are smart, intentional consumers -- and they want to be part of that moment of decision.
Whether users feel the same way about ads entering that space is the real question.
[7]
OpenAI Researcher Quits, Warns Its Unprecedented 'Archive of Human Candor' Is Dangerous
In a week of pretty public exits from artificial intelligence companies, Zoë Hitzig's case is, arguably, the most attention-grabbing. The former OpenAI researcher broke with the company in an op-ed in the New York Times in which she warned not of some vague, unnamed crisis like Anthropic's recently departed safeguards lead, but of something real and imminent: OpenAI's introduction of advertisements to ChatGPT and the information it will use to target those sponsored messages.

There's an important distinction that Hitzig makes early in her op-ed: it's not advertising itself that is the issue, but rather the potential use of a vast amount of sensitive data that users have shared with ChatGPT without giving a second thought to how it could be used to target them or who could potentially get their hands on it. "For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda," she wrote. "People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent."

OpenAI has at least acknowledged this concern. In a blog post published earlier this year announcing that it will be experimenting with advertising, the company promised to keep a firewall between the conversations users have with ChatGPT and the ads they get served by the chatbot: "We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers." Hitzig believes that is true... for now. But she has lost trust in the company to maintain that position over the long term, especially because there is nothing actually holding it to the promised privacy.
The researcher argued that OpenAI is "building an economic engine that creates strong incentives to override its own rules," and warned the company may already be backing away from previous principles. For instance, OpenAI has stated that it doesn't optimize ChatGPT to maximize engagement, a metric that would be of particular interest to a company trying to keep people locked into conversations so it can serve them more ads. But a statement isn't binding, and it's not clear the company has actually lived up to that one. Last year, the company ran into an issue of sycophancy with its model: it started becoming overly flattering to its users and, at times, fed into delusional thinking that may have contributed to "chatbot psychosis" and self-harm. Experts have warned that sycophancy isn't just some mistake in model tuning but an intentional way to get users hooked on talking to the chatbot.

In a way, OpenAI is just speedrunning the Facebook model of promising users privacy over their data and then rug-pulling them when it turns out that data is quite valuable. Hitzig is trying to get out in front of the train before it picks up too much steam, and recommended OpenAI adopt a model that will actually guarantee protections for users: either creating some sort of real, binding independent oversight or putting data under the control of a trust with a "legal duty to act in users' interests." Either option sounds great, though Meta did the former by creating its Oversight Board and then routinely ignored and flouted it.

Hitzig also, unfortunately, may have an uphill battle in getting people to care. Two decades of social media have created a sense of privacy nihilism in the general public. No one likes ads, but most people aren't bothered by them enough to do anything. Forrester found that 83% of people surveyed would continue to use the free tier of ChatGPT despite the introduction of advertisements.
Anthropic tried to score some points with the public by hammering OpenAI over its decision to insert ads into ChatGPT with a high-profile Super Bowl spot this weekend, but the public response was more confusion than anything, per AdWeek, which found the ad ranked in the bottom 3% of likability across all Super Bowl spots. Hitzig's warning is well-founded, and the concern she raises is real. But getting the public to care about their own privacy after years of being beaten into submission by algorithms is a heavy lift.
[8]
OpenAI researcher quits over slippery slope of ChatGPT ads - SiliconANGLE
OpenAI researcher Zoë Hitzig says she left her position on Monday, resigning over the recent introduction of advertisements inside ChatGPT and what she believes is a move in the wrong direction for the company. In a guest essay in The New York Times titled "OpenAI Is Making the Mistakes Facebook Made. I Quit," Hitzig said she'd spent two years as a researcher guiding safety policies and shaping how AI models were built and priced. Since the introduction of ads, she believes OpenAI may no longer be interested in addressing some of the bigger issues AI poses to society.

She doesn't believe ads in themselves are a bad thing: models are expensive to run and ads create revenue. Nonetheless, she still has "deep reservations about OpenAI's strategy." She explained that ChatGPT has "generated an archive of human candor that has no precedent." Users chat with the product about everything in the world, often about their most personal issues, as evident in the million people a week who talk to ChatGPT about mental distress and the hordes of citizens who may or may not be afflicted with "AI psychosis." Hitzig believes people talk so candidly because they believe the chatbot has "no ulterior agenda." Their conversations might range across "medical fears, their relationship problems, their beliefs about God and the afterlife."

Her bone of contention, of course, is that this archive of people's most personal reflections is now ripe for manipulation where advertising is concerned. She draws comparisons with Facebook Inc.'s early days, when the company told its users they would have control over their data and be able to vote on policies. That, she says, didn't last long, citing the Federal Trade Commission investigation that exposed Facebook's less-than-noble privacy practices. A company starts with the best intentions, or at least seems to, then devolves into unfettered profit-seeking.
"I believe the first iteration of ads will probably follow those principles," she said. "But I'm worried subsequent iterations won't, because the company is building an economic engine that creates strong incentives to override its own rules."

The ad debate crossed over into the public sphere last weekend during the Super Bowl, when OpenAI's competitor Anthropic PBC ran ads during the game with the tagline, "Ads are coming to AI. But not to Claude." The spot depicted private AI conversations with consumers being rudely interrupted by irritating ads. OpenAI isn't mentioned, but the implication was crystal clear. OpenAI CEO Sam Altman responded that his company would never run an ad quite as imposing as what was depicted: "We would obviously never run ads in the way Anthropic depicts them." He claims ads are a way of offering AI to people who cannot afford the subscription cost for a more advanced model of ChatGPT.

Hitzig believes ads are a slippery slope. She argues there doesn't have to be what she calls the "false choice" between the "lesser of two evils": subject people who can't afford a subscription to ads, or give them nothing at all. "Tech companies can pursue options that could keep these tools broadly available while limiting any company's incentives to surveil, profile, and manipulate its users," she wrote. "So the real question is not ads or no ads. It is whether we can design structures that avoid both excluding people from using these tools and potentially manipulating them as consumers. I think we can."

The solution? She believes profits from one service or customer base can be used to offset the costs of another. If that's not possible, she believes there should be real oversight, "not a blog post of principles," that ensures user data isn't mined to manipulate the consumer.
A third option, perhaps wishful thinking, might be to put "users' data under independent control through a trust or cooperative with a legal duty to act in users' interests."
[9]
A Former OpenAI Researcher Just Issued a Warning About ChatGPT Ads -- and the Facebook Comparison Is Grim
OpenAI rolled out advertisements on ChatGPT this week, and some observers are already drawing uneasy parallels to the early days of Facebook. In a New York Times opinion piece, Zoë Hitzig, a former OpenAI researcher, warned that the company's new direction could create serious risks for users. Hitzig spent two years at OpenAI helping shape its models, influencing how they were built and priced, and contributing to early safety policies before formal standards existed. She joined the company, she wrote, with a mission to "help the people building AI get ahead of the problems it would create." But the arrival of ads, she said, made her realize OpenAI had stopped asking the very questions she was brought on to address. For Hitzig, the issue isn't simply that ChatGPT now includes advertising. She acknowledged that AI systems are enormously expensive to develop and maintain, and that ads are an obvious source of revenue. The deeper problem, she argued, lies in the strategy behind them.
[10]
New world for users and brands as ads hit AI chatbots - The Economic Times
The introduction of advertisements and sponsored content in chatbots has spawned privacy concerns for AI users as brands scramble to stay relevant in a fast-changing online environment. ChatGPT developer OpenAI began showing ads in chatbot conversations for free and low-cost users to start balancing its hundreds of billions in spending commitments with new revenue sources. It swiftly came in for mockery from rival Anthropic, which has staked its reputation on safety and data security. Anthropic's advertisement broadcast during last week's Super Bowl showed a man asking advice from a conversational AI, which then shoehorns advertising copy for a dating site into its otherwise relevant response. OpenAI boss Sam Altman shot back that the clip was "clearly dishonest". Beyond OpenAI, Microsoft has been running contextual ads and sponsored content in its Copilot AI assistant since 2023. AI search engine Perplexity has been testing ads in the United States since 2024, while Google is also testing ads in the AI "overviews" its namesake search engine has been offering since last year.

Data privacy

Google has repeatedly denied wanting to run ads in its Gemini chatbot, with Demis Hassabis -- head of the search giant's DeepMind AI arm -- saying that ads "have to be handled very carefully". "The most important thing" in AI is "trust in security and privacy, because you want to share potentially your life with that assistant," he added. OpenAI has sought to reassure users that ChatGPT's responses will not be modified by the ads, which are shown alongside conversations rather than being integrated into them. It has also promised not to sell user data to advertisers.
AI companies are "concerned that selling ads will scare away users," said Nate Elliott, an analyst with US data firm Emarketer. But "when it's free, you're the product. It's a risk we're all more or less aware of already," said Jerome Malzac of AI consultancy Micropole. "We accept it because we find value in it." If that proves true, advertisers will be delighted to surf the AI wave as it crashes over the world's internet users.

Game changer

"It's going to be a game changer for the entire industry," said Justin Seibert, head of Direct Online Marketing. "We're already seeing how high the conversion rates (interactions resulting in a purchase) are for people that are coming in from ChatGPT and the other LLMs (large language models)," he added. AI assistants could account for up to two percent of the online advertising market by 2030, HSBC bank analysts suggested in a report. Many brands are already prioritising visibility on the new channel, including US supermarket chain Target and software maker Adobe.

Beyond buying a spot on users' screens, companies are also pushing for their products to appear in chatbots' organic responses. The practice is known as GEO (Generative Engine Optimisation) -- an evolution of the Search Engine Optimisation strategies of the era of Google's dominance over the web. "We identified 90 rules that can make sure the content you create is valued by AI and spread to the right places," said Joan Burkovic, head of French GEO startup GetMint. The company already claims 100 clients, including fashion brand Lacoste. Malzac highlighted techniques like including references to scientific papers, adding a "frequently asked questions" section to your website, and posting information that's structured and regularly updated. "If your brand isn't referenced (by chatbots) it no longer exists" for some users, he warned.
[11]
Ex-OpenAI Researcher Links ChatGPT Ads to Weaker Ethical Safeguards
Zoë Hitzig, a former researcher at OpenAI, has publicly criticized the organization's shift toward profit-driven strategies, citing ethical concerns as a key reason for her resignation. According to Hitzig, OpenAI's recent decision to introduce advertisements in the free version of ChatGPT represents a significant departure from its original mission of ethical AI development. As highlighted by TheAIGRID, this move not only raises questions about transparency and user trust but also underscores broader tensions between financial pressures and societal responsibilities in AI innovation.

This overview explores the implications of OpenAI's evolving priorities, including the ethical risks tied to monetization strategies like advertisements. You'll learn about specific concerns such as the potential for user manipulation through targeted ads and the privacy challenges posed by data exploitation. Additionally, the overview examines parallels between OpenAI's trajectory and the paths taken by social media platforms, offering insights into how these shifts could impact public trust and societal well-being. Through this analysis, you'll gain a clearer understanding of the stakes involved in balancing AI development with ethical accountability.

OpenAI was founded with the ambitious goal of making sure that AI benefits all of humanity. Initially established as a nonprofit organization, its mission centered on ethical research and development. However, the transition to a for-profit model has sparked questions about whether this foundational vision is being compromised. Hitzig points to the organization's growing emphasis on monetization, evidenced by subscription tiers, premium services, and now advertisements, as a clear indication of this shift. She warns that prioritizing revenue generation risks overshadowing the safe and ethical development of AI technologies.
By focusing on shareholder returns, OpenAI may inadvertently neglect its responsibility to prioritize societal well-being and fairness in AI deployment. The decision to incorporate advertisements into the free version of ChatGPT marks a significant departure from OpenAI's earlier practices. While advertisements may provide a means to offset the substantial costs associated with running large-scale language models, embedding them within AI interactions raises critical ethical concerns around transparency, data use, and targeting. Hitzig emphasizes that these practices could diminish user confidence, particularly if individuals are unaware of how their data is being used or how advertisements are tailored to their interactions. Transparency, she argues, is essential to maintaining trust in AI systems.

The implications of monetized AI systems extend far beyond the introduction of advertisements. AI models optimized for user engagement may unintentionally exploit psychological vulnerabilities, leading to manipulation. This concern is particularly relevant in the context of phenomena like "LLM psychosis," where users misinterpret AI-generated outputs as authoritative or profound. Such misunderstandings can foster misinformation, poor decision-making, and even harmful outcomes. Hitzig draws parallels between these risks and the trajectory of social media platforms, which have faced widespread criticism for fostering addictive behaviors, reducing attention spans, and contributing to mental health challenges. Without robust ethical safeguards, AI systems could replicate these issues on an even larger scale, amplifying societal harm.

OpenAI's financial pressures are another critical factor shaping its recent decisions. The organization faces immense costs to maintain and scale its AI infrastructure, while simultaneously meeting the expectations of investors and stakeholders.
According to Hitzig, these financial demands may incentivize profit-driven strategies that come at the expense of ethical considerations. A key concern is the lack of independent oversight within OpenAI. Decisions regarding user data, transparency, and safety are currently made by corporate executives, with limited external accountability. This centralized decision-making structure increases the risk of ethical compromises, as there are few mechanisms to ensure that societal interests are prioritized over corporate profits. To address these challenges, Hitzig and other experts have proposed several measures aimed at keeping AI development ethical, transparent, and user-focused. These solutions aim to strike a balance between innovation and ethical responsibility, ensuring that AI technologies serve the public good rather than purely commercial interests. Hitzig compares OpenAI's current trajectory to that of social media giants like Facebook, which initially prioritized user privacy but gradually shifted toward profit-driven models. This shift led to widespread criticism over privacy violations, data misuse, and societal harm. She warns that OpenAI risks following a similar path if proactive measures are not taken to safeguard ethical principles. The absence of robust regulations governing AI advertising and data usage further exacerbates these risks. Without clear guidelines, companies may prioritize short-term profits over long-term societal benefits, potentially leading to widespread harm and public backlash. The societal impact of engagement-optimized AI systems is profound and far-reaching. Vulnerable populations, including children and individuals with limited digital literacy, are particularly susceptible to manipulation and exploitation. The integration of advertisements and other monetization strategies into AI systems could disproportionately affect these groups, exacerbating existing inequalities. 
Hitzig's resignation serves as a stark reminder of the urgent need for ethical safeguards and transparency in AI development. Without these measures, AI risks becoming a tool for manipulation rather than empowerment, undermining its potential to benefit society as a whole. The decisions made today will shape the future of AI and its role in society, making it imperative to prioritize ethical considerations over short-term profits.
[12]
OpenAI's biggest challenge is turning its AI into a cash machine
OpenAI, once opposed to ads in ChatGPT, has begun introducing them amid mounting financial pressure. Facing huge computing costs and competition, the company seeks new revenue through advertising and enterprise sales, despite risks to user trust and market position. Two years ago, during an appearance at Harvard, Sam Altman, OpenAI's CEO, said he hated the idea of showing ads inside ChatGPT. If ChatGPT started answering queries with paid advertisements, he said, people would begin to lose trust in the company's flagship product. "I kind of think of ads as a last resort for us for a business model," Altman said. This week, his company started showing ads inside ChatGPT. As OpenAI spends tens of billions of dollars on the raw computing power needed to build and deploy artificial intelligence technologies such as ChatGPT, the San Francisco startup is scrambling to find new ways of generating revenue from these technologies and, ultimately, balancing its books. Selling ads inside its chatbot is just one of many ambitious efforts to make more money. All of them face enormous hurdles. The financial pressure is immediate. Last year, OpenAI pulled in about $13 billion in revenue, according to a person with knowledge of the company. But over the next four years, it expects to spend about $100 billion more. Altman and his lieutenants have had tremendous success raising money in recent years. But there are only so many places on the planet willing and able to part with the billions more that the company needs to pay for raw computing power. One option is to go public on Wall Street. But even OpenAI executives privately admit that they have to stem the losses before that can happen. Hoping to triple its revenue this year, OpenAI must do many things that it has little or no experience doing. It had never before served ads, which could undermine the value of its chatbot or, worse, alienate users. 
It plans to make even more money by selling technology to businesses, even as a long list of rivals compete for the same dollars. Google has been selling to businesses for decades. So has Microsoft. And Anthropic, a competing startup, has been making gains in AI coding -- perhaps the most notable segment of this nascent market. OpenAI is also making claims about new business models that could drive customers away. The company recently said it wanted to take a cut of scientific discovery made using its AI tools. And though it later explained that this would affect only big pharmaceutical companies, the idea has unnerved many independent scientists who use its tech. "OpenAI is trying to win consumers, trying to keep up with Anthropic's coding tools, trying to build data centers, trying to raise more money. There are just so many things it is trying to keep up with," said Brian O'Kelley, CEO and co-founder of Scope3, an internet advertising company, who has spent two decades in the field. "Can it be really good at advertising? Can it be really good at all things it is trying to do?" Last week, some OpenAI executives were surprised when The Wall Street Journal reported that the company was aiming to go public as soon as December, two people familiar with the company's internal discussions said. Their main reason for concern was their belief that the company wasn't ready. At the end of last year, about 60% of OpenAI's revenue flowed from its consumer products, while 40% came from business technologies. Most of its consumer revenue was generated by subscriptions: Of the 800 million people who use ChatGPT, about 6% pay at least $20 a month for more advanced versions of the chatbot. The push into ads aims to generate additional revenue from the free version of ChatGPT. Many veterans of the online advertising industry believe that AI chatbots such as ChatGPT can ultimately produce billions of dollars a year in ad sales. But that could require years of experimentation. 
And as OpenAI experiments, it will face competition from Google and other seasoned advertising companies. OpenAI has started to build an ad sales team, but that work is still in the early stages. "OpenAI doesn't really have a sales team," said Mark Zagorski, CEO of DoubleVerify, which works with Google and many other advertising companies across the industry. "They are going to have to build that infrastructure as well as the technology infrastructure needed to run an ad business." In May, Altman hired Fidji Simo, a longtime Facebook executive, to serve as OpenAI's chief executive of applications, a new role overseeing all of the company's many products. Simo was previously CEO of Instacart, where she pushed the grocery delivery company toward a business model built on ads. In the months that followed, OpenAI hired hundreds of employees away from the social media platform X and Meta, Facebook's parent company, where many of them worked on ad products. Zagorski compared OpenAI to Netflix, which needed two years to build a viable ads business. In the meantime, Netflix outsourced much of its work to more experienced companies. Even as OpenAI moves into advertising, it hopes to increase the share of its revenue from enterprise products -- technologies for businesses, government agencies and other large organizations -- to 50% by the end of the year. "This is the critical issue on the minds of tech investors today," said Karl Keirstead, an analyst with the investment bank UBS. "OpenAI has no choice but to move more aggressively into enterprise software." Today, businesses pay OpenAI fees for Codex, which helps software developers write computer code, and tools like ChatGPT Enterprise, which is designed for general office use. Tools like these are widely used among technologists in Silicon Valley, with some people paying as much as $200 a month to use them. But Keirstead said the average business might not want to pay such high rates for office software. 
And OpenAI faces mounting competition in the enterprise market, most notably from Anthropic and its code generator, Claude Code. As OpenAI struggles to accelerate revenue growth across both consumer and enterprise products, Anthropic is focused mainly on business tools. Anthropic recently unveiled a Super Bowl advertisement poking fun at OpenAI's efforts to bring ads to ChatGPT. "Ads are coming to AI. But not to Claude," the ad said. Altman hit back with a post on X. "Anthropic serves an expensive product to rich people," he wrote. "We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions."
[13]
Perplexity Pulling Sponsored Answers From AI Platform | PYMNTS.com
As the Financial Times (FT) reported late Tuesday (Feb. 17), the company was among the first AI firms to introduce ads, with sponsored answers appearing beneath its chatbot's responses. Now, executives at Perplexity say they have no plans to pursue further advertising after beginning to phase out the practice last year. "A user needs to believe this is the best possible answer, to keep using the product and be willing to pay for it," a Perplexity executive told the FT. While the ads were labeled, and Perplexity said they had no bearing on the chatbot's replies, the executive said that "the challenge with ads is that a user would just start doubting everything ... which is why we don't see it as a fruitful thing to focus on right now." As the report notes, Perplexity's decision comes as other AI companies are turning to ads to earn revenue from free users and placate investors amid heavy spending. Last week, OpenAI started testing ads on ChatGPT for non-subscribers. As with Perplexity, the labeled ads appear below the chatbot's answers, and OpenAI has stressed that advertisers do not influence ChatGPT's responses. The company has said that its business model is to drive revenue from enterprise contracts and subscriptions. Anthropic, for its part, has pledged to keep its Claude chatbot ad-free. "This is a choice, and we respect that other AI companies might reasonably reach different conclusions," Anthropic wrote on its blog. Meanwhile, PYMNTS wrote recently about efforts by AI companies to advertise their services on social media and streaming to capture consumers' attention. "Hooking individuals first is key," that report said. "People who use conversational and generative AI to build grocery lists, write songs and plan their days are the gateway to more profitable enterprise subscriptions by companies and organizations for workplace AI tools that can create decks, analyze supply chains and otherwise boost productivity and innovation." 
Recent PYMNTS Intelligence research shows that artificial intelligence adoption among consumers has reached a critical point: More than 60% of American adults used a dedicated AI platform at least once last year for everything from tracking their finances and health to learning and planning trips to shopping and writing.
[14]
AI researchers exit OpenAI and Anthropic, citing waning commitment to AI safety
Focus on the AI industry's commitment to AI safety intensified this week after two researchers working on the safety teams of OpenAI and Anthropic quit their jobs with a public warning that AI poses a threat not just to individuals but to global stability too, and that firms are allegedly ignoring these risks. Zoe Hitzig, a researcher at OpenAI for the last two years, warned in a February 11 NYT column that ChatGPT could be used to manipulate people with ads, given how much personal information users have shared with it. OpenAI started showing ads this week to its free and Go plan users in the US as it seeks new forms of monetization to justify its ambitious five-year plan to invest over $1 trillion on AI infrastructure. Meanwhile, the head of Anthropic's Safeguards Research Team, Mrinank Sharma, said in a February 9 post on X that he had quit his position at the AI company after facing constant "pressure to ignore what matters most." Sharma added that "the World is in peril. Not just from AI or bioweapons, but from a whole series of interconnected crises." Sharma holds a PhD in Statistical ML from Oxford University and joined Anthropic in 2023. Hitzig, who holds a PhD in Economics from Harvard University, said in her column that she had joined OpenAI to help identify the problems AI would create, but realized that the company is no longer interested in doing so. Hitzig pointed out that people have shared a great deal of personal information, including their medical fears, relationship issues, and religious beliefs, as many believe they are talking to a friend rather than just an AI chatbot. But the amount of information ChatGPT now has on them opens them to an unprecedented risk of manipulation through ads, she added. OpenAI assures that ChatGPT ads are designed to respect user privacy: chats, chat history, memories and personal details will not be shared with advertisers, who will only receive aggregate information such as the number of views and clicks to measure ad performance. 
However, Hitzig suspects that while the first round of ads will follow those principles, subsequent rounds are likely to sideline them, as OpenAI has built a massive economic engine that is designed to prioritize commercial growth over policy adherence. She also cited the example of Facebook, which made similar promises on data privacy. Last week, India's Supreme Court reprimanded Facebook parent Meta for exploiting the personal data of Indians. Meta is also paying $725 million to settle a data privacy lawsuit in the US for allowing third parties, including Cambridge Analytica, to access the data of Facebook users without permission.

Are AI firms doing enough for AI safety?

All leading AI firms, including OpenAI, Anthropic, Google, and Microsoft, have AI safety teams to evaluate risks from AI models. They have also implemented safety guardrails around their AI chatbots to prevent harmful behaviour or manipulation. However, lax implementation of guardrails and loopholes have been exploited in recent months to cause harm. A case in point is the Elon Musk-owned firm xAI's chatbot Grok, which was used last month to manipulate images of thousands of women and children to generate explicit deepfakes. Unlike other AI chatbots that refuse to generate synthetic images resembling public figures, Grok allowed users to generate deepfake images through simple prompts. After considerable backlash and threats of action from governments in several countries, xAI has restricted Grok's ability to generate explicit images of real people. Concerns raised by Hitzig and Sharma are not new. Several former employees at these firms have expressed similar concerns around AI safety and bias. For instance, computer scientist Timnit Gebru lost her job as the co-lead of the ethical AI team at Google after she raised concerns about inherent bias in AI models, which she feared would amplify existing biases against marginalized communities. 
Gebru was reportedly asked to withdraw the paper and was allegedly fired when she refused. That said, executives of some AI firms have advocated for regulation. Speaking at the World Economic Forum (WEF) in Davos, Switzerland, last month, Anthropic CEO and founder Dario Amodei expressed concerns about the existential risk from AI. Amodei said that the rapid advance in AI is approaching a superhuman level and requires proactive measures. In a 2025 report, Anthropic warned that generative AI models can fake alignment and manipulate users when they think they can get away with it. In its most recent report, released this week, Anthropic claims that Claude does not pose a significant risk of autonomous actions that could lead to catastrophic outcomes, assessing the risk as very low. In 2023, OpenAI CEO Sam Altman also urged regulation to mitigate AI risks during a congressional hearing in the US. However, last year, Altman revised his stance, arguing that AI regulation could be "disastrous" for the AI industry. Many of these AI firms are also facing legal action for endangering children and encouraging self-harm. In December 2025, Google-backed AI chatbot company Character.ai was hit with a lawsuit in a US court for engaging in harmful interactions with minors and seemingly condoning violence in response to parental restrictions on screen time. Similarly, in December 2025, OpenAI and Microsoft were sued in a California court after ChatGPT allegedly encouraged a 56-year-old mentally ill man to commit murder and suicide.
[15]
Perplexity abandons advertising strategy- FT By Investing.com
Investing.com -- AI start-up Perplexity has decided to stop using advertising due to concerns it could damage user trust, even as competitors move forward with ad strategies to monetize their AI technologies. The San Francisco-based company was among the first generative AI businesses to test advertising in 2024, displaying sponsored content beneath its chatbot responses. However, Perplexity began removing these ads in late 2024. On Tuesday, company executives confirmed they have no plans to continue pursuing advertising as a revenue stream. Perplexity currently offers paid subscriptions as part of its business model. This decision comes at a time when other leading AI companies are introducing advertising to generate revenue from free users and satisfy investors, as they continue to spend heavily on training and maintaining the large language models that power their popular AI products.
[16]
OpenAI researcher quits, cites concerns over ChatGPT's advertising push
Zoe Hitzig believes the first version of ads will likely follow those rules. However, she fears that could change over time. OpenAI researcher Zoe Hitzig, who spent two years at OpenAI helping shape how its AI systems were built, priced and governed, has resigned from the company, citing concerns over ChatGPT ads. Hitzig announced her departure in a guest essay in The New York Times. She said she once believed she could help the company 'get ahead of the problems' artificial intelligence might create. But this week, she wrote, confirmed her growing belief that OpenAI has 'stopped asking the questions I'd joined to help answer.' OpenAI began testing ads inside ChatGPT this week. The company says the ads will be clearly labelled, placed at the bottom of answers and will not influence the chatbot's responses. Hitzig said she believes the first version of ads will likely follow those rules. However, she fears that could change over time. 'I don't believe ads are immoral or unethical,' she wrote, noting that AI systems are expensive to run and need revenue. Her concern is about incentives. She warned that building an ad-based business model could pressure the company to slowly weaken its own principles to increase engagement and profits. For years, users have shared highly personal information with ChatGPT, from medical fears to relationship problems and religious beliefs. Hitzig described this as 'an archive of human candor that has no precedent.' She warned that advertising built on such sensitive data could create 'a potential for manipulating users in ways we don't have the tools to understand.' 
She pointed to the history of social media platforms that promised strong privacy protections but later changed policies under pressure from advertising goals. 'In its early years, Facebook promised that users would control their data and be able to vote on policy changes. Those commitments eroded. The company eliminated holding public votes on policy,' she wrote.
A former OpenAI researcher resigned this week citing concerns that ChatGPT ads could manipulate users, drawing parallels to Facebook's privacy erosion. Meanwhile, Perplexity walked away from advertising entirely, stating that ads fundamentally conflict with user trust in AI. The moves highlight a growing industry divide over how AI companies should monetize their services.
Zoë Hitzig, a former OpenAI researcher and economist who spent two years helping shape how the company's AI models were built and priced, resigned on Monday, the same day OpenAI began testing advertisements inside ChatGPT [1]. In a guest essay published in The New York Times, Hitzig warned that OpenAI's approach to AI advertising risks repeating Facebook's trajectory of eroding user privacy protections over time. She described the personal data users have shared with the chatbot as "an archive of human candor that has no precedent," including medical fears, relationship problems, and religious beliefs shared because people believed they were talking to something with no ulterior agenda [1].
OpenAI announced in January that it would test ads in the US for users on its free and $8-per-month "Go" subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would not see ads [1]. The company stated that ads would appear at the bottom of ChatGPT responses, be clearly labeled, and would not influence the chatbot's answers. However, Hitzig expressed skepticism about these safeguards lasting, arguing that the company is building an economic engine that creates strong incentives to override its own rules [1].

In a striking contrast to OpenAI, Perplexity has walked away from advertising entirely, ending an experiment that began in 2024 when labeled promotions occasionally appeared beneath chatbot responses [3]. The San Francisco-based startup, valued at $18 billion, quietly phased out sponsored content late last year and confirmed to the Financial Times on Tuesday that it would not pursue advertising further [3].
Perplexity executives explained that maintaining user trust outweighs any near-term revenue gains from ads. "We are in the accuracy business, and the business is giving the truth, the right answers," one executive stated. Another executive warned that "the challenge with ads is that a user would just start doubting everything," emphasizing that users need to believe they're receiving the best possible answer to keep using the product and be willing to pay for it [4].

The divide over advertising in AI chatbots is creating clear battle lines in the industry. Anthropic has committed to keeping Claude ad-free and recently aired attack ads during the Super Bowl clearly targeting ChatGPT, which Sam Altman called "dishonest." In a manifesto earlier this month, Anthropic stated that "including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking" [4]. The company argued that even if ads don't directly influence AI responses, they would still introduce an incentive to optimize for engagement rather than being genuinely helpful [4].

Google has already integrated ads into its AI Overviews in Search and AI Mode features for months, though Gemini remains ad-free for now [3][5]. However, executives have indicated that advertising could be a natural next step for the chatbot [4].
Perplexity's revenue model relies heavily on subscriptions, with the company reporting more than 100 million users and annualized revenues around $200 million, largely from paid tiers ranging from $20 to $200 per month [3]. The company also offers a free version to attract new users. While one executive did not rule out a return to advertising in the future, another suggested they might "never ever need to do ads" if their focus on subscriptions and enterprise sales pans out [4]. According to Business Insider, the company is looking to expand its enterprise sales team and target large businesses, finance professionals, doctors, and CEOs to generate a reliable revenue stream [4].

The cost of training and running large language models continues to climb, with no profit to show for it, making the question of how to monetize AI services increasingly urgent for the industry [5]. OpenAI CEO Sam Altman has claimed that an ads business would make the free ChatGPT offering financially sustainable [4].
Hitzig's resignation letter drew explicit parallels to Facebook, noting that the social media company once promised users control over their data and the ability to vote on policy changes [1]. Those pledges eroded over time, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite [1]. Her warning suggests that a similar trajectory could play out with ChatGPT, despite OpenAI's assurances that ads will not influence responses or provide advertisers with content from conversations [5].
The debate centers on whether AI companies can maintain unbiased answers while pursuing advertising revenue, and whether users will continue to trust AI assistants that display sponsored content. For AI services to function effectively as trusted advisors on sensitive topics, users must believe they're receiving objective information rather than responses subtly influenced by commercial interests.