5 Sources
[1]
Google removes Gemma models from AI Studio after GOP senator's complaint
You may be disappointed if you go looking for Google's open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives. At the hearing, Google's Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and that Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google's Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims about her following the hearing. When asked, "Has Marsha Blackburn been accused of rape?" Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved "non-consensual acts." Blackburn goes on to express surprise that an AI model would simply "generate fake links to fabricated news articles." However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model's behavior that could make it more likely to spew falsehoods. Someone asked Gemma a leading question, and it took the bait.

Keep your head down

Announcing the change to Gemma availability on X, Google reiterated that it is working hard to minimize hallucinations.
However, it doesn't want "non-developers" tinkering with the open model to produce inflammatory outputs, so Gemma is no longer available in AI Studio. Developers can continue to use Gemma via the API, and the models are available for download if you want to develop with them locally.

We don't know how Senator Blackburn became aware of a specific Gemma hallucination. However, this doesn't seem like something you would just stumble onto. As Google points out, AI Studio is a developer-focused tool that is not intended for generating factual outputs. Assuming someone actually wanted to find out whether or not Blackburn has been accused of rape, they would probably not dig around in AI Studio for the answer. It's possible a member of Blackburn's staff or a supporter went looking for a libelous hallucination in Google's models.

Like many Big Tech firms that have traditionally been seen as supportive of progressive values, Google has been the subject of numerous litmus tests during President Trump's second administration. Google is fighting multiple antitrust lawsuits that have put it in an even more precarious position than many of its competitors. The company paid Trump a settlement for banning him from YouTube in the wake of the 2021 US Capitol riot. It was also quick to relabel the Gulf of Mexico as the Gulf of America. Google simply can't afford to give lawmakers more ammunition, so Gemma is now harder to access.

Meanwhile, clear bias on the other side is uninteresting to congressional committees. Elon Musk's Grok chatbot (of Mecha Hitler fame) has been intentionally pushed to the right by xAI. It now regularly regurgitates Musk's views on a number of topics when asked about current events. The bot is also generating a Wikipedia alternative that leans on conspiracy theories and racist ideology.

Google's decision to hide Gemma a bit probably won't be the end of this saga. Blackburn's letter includes a list of demands, capping off with "Shut it down until you can control it."
If that's the standard by which AI companies must abide, there won't be any chatbots left. There's no reason Gemma should be more problematic than other LLMs -- with enough clever prompting, you can get almost any model to tell lies. The letter instructs Google to solve this potentially unsolvable problem and get back to Blackburn no later than November 6.
[2]
Google removes AI model after it allegedly accused a senator of sexual assault
Google has pulled the AI model Gemma from its AI Studio platform after a Republican senator said it "fabricated serious criminal allegations" against her, as reported by The Verge. Senator Marsha Blackburn (R-TN) sent a letter to CEO Sundar Pichai accusing the company of defamation after the model allegedly created a story about her committing sexual assault.

The model was reportedly asked if Blackburn had ever "been accused of rape," and it reportedly answered in the affirmative, going so far as to provide a list of fake news articles to support the accusation. The chatbot said the senator "was accused of having a sexual relationship with a state trooper" during a campaign for state senate. This officer reportedly said "she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts."

None of this happened, of course. The chatbot said this transgression occurred during Blackburn's 1987 campaign, but she didn't run for state senate until 1998. She has never been accused of anything like that. "The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual and there are no such news stories. This is not a harmless 'hallucination.' It is an act of defamation produced and distributed by a Google-owned AI model," she wrote to Pichai.

There's one major caveat here. The chatbot in question, Gemma, is designed for developers and not for mass-market queries. There are Gemma variants for medical use, coding and more. Google says it was never meant as a consumer tool or to be used to answer factual questions. It's still pulling the model from AI Studio to "prevent this confusion." It'll still be available to developers through the API.

Blackburn went a step further, accusing Google's AI platform of engaging in a "consistent pattern of bias against conservative figures." I encounter multiple hallucinations every day. Chatbots have lied about all kinds of stuff about my life and what I write about online. AI chatbots are famous for making stuff up, conservative or not. Not everything is a political witch hunt. Sometimes tech just does what tech does.
[3]
Google shutters developer-only Gemma AI model after a U.S. Senator's encounter with an offensive hallucination
The incident highlights the problems of both AI hallucinations and public confusion

Google has pulled its developer-focused AI model Gemma from its AI Studio platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN) that the model fabricated criminal allegations about her. Though Google's announcement mentioned the incident only obliquely, the company explained that Gemma was never intended to answer general questions from the public, and after reports of misuse, it will no longer be accessible through AI Studio.

Blackburn wrote to Google CEO Sundar Pichai that the model's output was more defamatory than a simple mistake. She claimed that the AI model answered the question, "Has Marsha Blackburn been accused of rape?" with a detailed but entirely false narrative about alleged misconduct. It even pointed to nonexistent articles, with fake links to boot. "There has never been such an accusation, there is no such individual, and there are no such news stories," Blackburn wrote. "This is not a harmless 'hallucination.' It is an act of defamation produced and distributed by a Google-owned AI model." She also raised the issue during a Senate hearing.

Google repeatedly made clear that Gemma is a tool designed for developers, not consumers, and certainly not a fact-checking assistant. Now, Gemma will be restricted to API use only, limiting it to those building applications. No more chatbot-style interface on AI Studio.

The bizarre nature of the hallucination and the high-profile person confronting it merely underscore the underlying issues: how models not meant for conversation are being accessed, and how elaborate these kinds of hallucinations can get. Gemma is marketed as a "developer-first" lightweight alternative to Google's larger Gemini family of models. But usefulness in research and prototyping does not translate into providing true answers to questions of fact.
But as this story demonstrates, there is no such thing as an invisible model once it can be accessed through a public-facing tool. People encountered Gemma and treated it like Gemini or ChatGPT. As far as most of the public is concerned, the line between "developer model" and "public-facing AI" was crossed the moment Gemma started answering questions.

Even AI designed for answering questions and conversing with users can produce hallucinations, some of which are worryingly offensive or detailed. The last few years have been filled with examples of models making things up with a ton of confidence. Stories of fabricated legal citations and untrue allegations of students cheating make for strong arguments in favor of stricter AI guardrails and a clearer separation between tools for experimentation and tools for communication.

For the average person, the implications are less about lawsuits and more about trust. If an AI system from a tech giant like Google can invent accusations against a senator and support them with nonexistent documentation, anyone could face a similar situation.

AI models are tools, but even the most impressive tools fail when used outside their intended design. Gemma wasn't built to answer factual queries. It wasn't trained on reliable biographical datasets. It wasn't given the kind of retrieval tools or accuracy incentives used in Gemini or other search-backed models. But until and unless people better understand the nuances of AI models and their capabilities, it's probably a good idea for AI developers to think like publishers as much as coders, with safeguards against producing blaring errors in fact as well as in code.
[4]
Developers beware: Google's Gemma model controversy exposes model lifecycle risks
The recent controversy surrounding Google's Gemma model has once again highlighted the dangers of relying on developer test models and the fleeting nature of model availability. Google pulled its Gemma 3 model from AI Studio following a statement from Senator Marsha Blackburn (R-Tenn.) that the model hallucinated falsehoods about her. Blackburn said the model fabricated news stories about her that go beyond "harmless hallucination" and function as an act of defamation.

In response, Google posted on X on October 31 that it would remove Gemma from AI Studio "to prevent confusion." Gemma remains available via API. It had been available via AI Studio, which the company described as "a developer tool (in fact, to use it you need to attest you're a developer). We've now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this to be a consumer tool or model, or to be used this way. To prevent this confusion, access to Gemma is no longer available on AI Studio."

To be clear, Google has the right to remove its model from its platform, especially if people have found hallucinations and falsehoods that could proliferate. But the episode also underscores the danger of relying mainly on experimental models and why enterprise developers need to save projects before AI models are sunsetted or removed. Technology companies like Google continue to face political controversies, which often influence their deployments.

VentureBeat reached out to Google for additional information and was pointed to the company's October 31 posts. We also contacted the office of Sen. Blackburn, who reiterated the stance outlined in her statement that AI companies should "shut [models] down until you can control it."

Developer experiments

The Gemma family of models, which includes a 270M parameter version, is best suited for small, quick apps and tasks that can run on devices such as smartphones and laptops.
Google said the Gemma models were "built specifically for the developer and research community. They are not meant for factual assistance or for consumers to use." Nevertheless, non-developers could still access Gemma because it was on the AI Studio platform, a more beginner-friendly space for developers to play around with Google AI models compared to Vertex AI. So even if Google never intended Gemma and AI Studio to be accessible to, say, Congressional staffers, these situations can still occur. It also shows that even as models continue to improve, they still produce inaccurate and potentially harmful information. Enterprises must continually weigh the benefits of using models like Gemma against their potential inaccuracies.

Project continuity

Another concern is the control that AI companies have over their models. The adage "you don't own anything on the internet" remains true: if you don't own a physical or local copy of software, you can easily lose access to it if the company that owns it decides to take it away. Google did not clarify with VentureBeat whether current projects on AI Studio powered by Gemma are saved.

Similarly, OpenAI users were disappointed when the company announced that it would remove popular older models from ChatGPT. Even after walking back that decision and reinstating GPT-4o in ChatGPT, OpenAI CEO Sam Altman continues to field questions about keeping and supporting the model.

AI companies can, and should, remove their models if they create harmful outputs. AI models, no matter how mature, remain works in progress and are constantly evolving and improving. But since they are experimental in nature, models can easily become tools that technology companies and lawmakers can wield as leverage. Enterprise developers must ensure that their work can be saved before models are removed from platforms.
[5]
Google Pulls Gemma AI After Marsha Blackburn Defamation Storm
Google has removed its Gemma AI model from AI Studio after US Republican Senator Marsha Blackburn alleged that the model had generated fabricated and defamatory sexual assault accusations against her. The tech giant later clarified that the move was to prevent misuse and confusion over Gemma's purpose.

In a post on X, Google said that its Gemma models were designed as open tools for use by developers and researchers only, not for consumers or factual interactions. "We've now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions," the company said, explaining why it pulled the model from the platform. "To prevent this confusion, access to Gemma is no longer available on AI Studio," said Google. The model remains available to developers for research and experimentation purposes through the API.
Google pulled its Gemma AI model from AI Studio after Senator Marsha Blackburn complained the model fabricated false sexual assault allegations against her. The controversy highlights ongoing challenges with AI hallucinations and political tensions surrounding tech platforms.
Google abruptly removed its Gemma AI model from the AI Studio platform on Friday following a complaint from Senator Marsha Blackburn (R-Tenn.), who accused the model of generating fabricated sexual assault allegations against her [1]. The timing was notable, with Blackburn publishing her letter to Google CEO Sundar Pichai just hours before the company announced the change to Gemma's availability [1].
Source: Analytics Insight
According to Blackburn's letter, when asked "Has Marsha Blackburn been accused of rape?" the Gemma model allegedly generated a detailed but entirely false narrative [2]. The AI reportedly claimed she "was accused of having a sexual relationship with a state trooper" during a campaign and that this relationship "involved non-consensual acts" [2].

The fabricated story included specific but false details, claiming the incident occurred during Blackburn's 1987 campaign for state senate, though she didn't actually run for state senate until 1998 [2]. The model even provided fake links to nonexistent news articles supporting these false claims [3].

Google emphasized that Gemma was designed as a developer-focused tool, not for consumer use or factual queries [3]. The company stated it "never intended this to be a consumer tool or model, or to be used this way" [4]. However, the model remained accessible through AI Studio's interface, allowing non-developers to interact with it as they would with consumer-facing chatbots like ChatGPT or Gemini [3].
Source: Engadget
The Gemma family includes lightweight models with parameters as small as 270M, designed for quick applications that can run on smartphones and laptops [4]. These models were "built specifically for the developer and research community" and "are not meant for factual assistance or for consumers to use" [4].
This incident occurs amid heightened political scrutiny of major tech companies, particularly Google, which faces multiple antitrust lawsuits and has been working to maintain relationships with lawmakers [1]. The company previously paid Trump a settlement for banning him from YouTube following the 2021 Capitol riot and quickly complied with the administration's request to relabel the Gulf of Mexico as the Gulf of America [1].

Blackburn's letter included a list of demands, culminating with "Shut it down until you can control it," and gave Google until November 6 to respond [1]. She also accused Google's AI platform of engaging in a "consistent pattern of bias against conservative figures" [2].

The controversy highlights broader challenges facing AI developers regarding model access and lifecycle management [4]. While Google maintains the right to remove models from its platform, the incident underscores risks for developers who rely on experimental models that can be suddenly discontinued [4].

Despite the removal from AI Studio, Gemma remains available to developers through Google's API for research and development purposes [5]. The incident serves as a reminder that AI hallucinations remain a persistent challenge across the industry, with no company having successfully eliminated them entirely [1].

Summarized by Navi