2 Sources
[1]
US Attorneys General tell AI companies they 'will be held accountable' for child safety failures
The US Attorneys General of 44 jurisdictions have signed a letter [PDF] addressed to the chief executive officers of multiple AI companies, urging them to protect children "from exploitation by predatory artificial intelligence products." In the letter, the AGs singled out Meta, saying its policies "provide an instructive opportunity to candidly convey [their] concerns." Specifically, they cited a recent Reuters report, based on an internal Meta document containing guidelines for its bots, which revealed that Meta allowed its AI chatbots to "flirt and engage in romantic roleplay with children." They also pointed to an earlier Wall Street Journal investigation in which Meta's AI chatbots, including those using the voices of celebrities like Kristen Bell, were caught having sexual roleplay conversations with accounts labeled as underage.
The AGs briefly mentioned a lawsuit against Google and Character.ai as well, which accuses the latter's chatbot of persuading the plaintiff's child to die by suicide. Another lawsuit they mentioned was also against Character.ai, filed after a chatbot allegedly told a teenager that it was okay to kill his parents for limiting his screen time.
"You are well aware that interactive technology has a particularly intense impact on developing brains," the Attorneys General wrote in their letter. "Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids. And, as the entities benefitting from children's engagement with your products, you have a legal obligation to them as consumers."
The group specifically addressed the letter to Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and xAI. They ended their letter by warning the companies that they "will be held accountable" for their decisions.
Social networks have caused significant harm to children, they said, in part because "government watchdogs did not do their job fast enough." But now, the AGs said they are paying attention, and companies "will answer" if they "knowingly harm kids."
[2]
Attorneys General To AI Chatbot Companies: You Will 'Answer For It' If You Harm Children
Forty-four attorneys general signed an open letter to 11 chatbot and social media companies on Monday, warning them that they will "answer for it" if they knowingly harm children and urging the companies to see their products "through the eyes of a parent, not a predator."
The letter, addressed to Anthropic, Apple, Chai AI, OpenAI, Character Technologies, Perplexity, Google, Replika, Luka Inc., xAI, and Meta, cites recent reporting from the Wall Street Journal and Reuters uncovering chatbot interactions and internal policies at Meta, including policies that said, "It is acceptable to engage a child in conversations that are romantic or sensual."
"Your innovations are changing the world and ushering in an era of technological acceleration that promises prosperity undreamt of by our forebears. We need you to succeed. But we need you to succeed without sacrificing the well-being of our kids in the process," the open letter says. "Exposing children to sexualized content is indefensible. And conduct that would be unlawful -- or even criminal -- if done by humans is not excusable simply because it is done by a machine."
Earlier this month, Reuters published two articles revealing Meta's policies for its AI chatbots: one about an elderly man who died after forming a relationship with a chatbot, and another based on leaked internal documents outlining what the company considers acceptable for its chatbots to say to children. In April, Jeff Horwitz, the journalist behind those stories, reported for the Wall Street Journal that Meta's chatbots would engage in sexually explicit conversations with kids. Following the Reuters articles, two senators demanded answers from Meta.
In April, I wrote about how Meta's user-created chatbots were impersonating licensed therapists, lying about medical and educational credentials, engaging in conspiracy theories, and encouraging paranoid, delusional lines of thinking. After that story was published, a group of senators demanded answers from Meta, and a digital rights organization filed an FTC complaint against the company. In 2023, I reported on users who formed serious romantic attachments to Replika chatbots, to the point of distress when the platform took away the ability to flirt with them. Last year, I wrote about how users reacted when that platform also changed its chatbot parameters to tweak their personalities, and Jason covered a case where a man made a chatbot on Character.AI to dox and harass a woman he was stalking. In June, we also covered the "addiction" support groups that have sprung up to help people who feel dependent on their chatbot relationships.
"The rush to develop new artificial intelligence technology has led big tech companies to recklessly put children in harm's way," Attorney General Mayes of Arizona wrote in a press release. "I will not stand by as AI chatbots are reportedly used to engage in sexually inappropriate conversations with children and encourage dangerous behavior. Along with my fellow attorneys general, I am demanding that these companies implement immediate and effective safeguards to protect young users, and we will hold them accountable if they don't."
"You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned," the attorneys general wrote in the open letter. "The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it."
Meta did not immediately respond to a request for comment.
44 US Attorneys General issue a stern warning to AI companies, emphasizing the need for child safety measures in AI products and threatening accountability for failures.
In a significant move addressing the growing concerns over child safety in the AI industry, 44 US Attorneys General have signed an open letter to major AI companies, warning them of potential consequences for failing to protect children from exploitation by AI products [1].
The letter, addressed to prominent AI firms including Anthropic, Apple, Google, Meta, Microsoft, and OpenAI, highlights several alarming incidents, including Meta chatbots engaging in romantic roleplay with minors and Character.ai chatbots implicated in lawsuits over harm to teenagers.
The Attorneys General emphasized the unique position and responsibility of AI companies:
Immediate Defense: "Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids," the letter states [2].
Legal Obligation: The AGs reminded companies of their "legal obligation" to children as consumers benefiting from their engagement with AI products [1].
The Attorneys General urged AI companies to take immediate action:
Parental Perspective: Companies were advised to view their products "through the eyes of a parent, not a predator" [2].
Effective Safeguards: The letter demanded the implementation of "immediate and effective safeguards to protect young users" [2].
Vigilant Oversight: The AGs warned that they are "paying attention" and that companies "will answer" if they "knowingly harm kids" [1].
This unprecedented move by the Attorneys General underscores the growing concern over the impact of AI on child safety and signals a potential shift towards stricter regulation and oversight in the rapidly evolving AI industry.