3 Sources
[1]
Judge rules AI chatbot in teen suicide case is not protected by First Amendment
What just happened? The death of a teenage boy obsessed with an artificial intelligence-powered replica of Daenerys Targaryen continues to raise complex questions about speech, personhood, and accountability. A federal judge has ruled that the chatbot behind the tragedy lacks First Amendment protections, although the broader legal battle is still unfolding.

Judge Anne Conway of the Middle District of Florida denied Character.ai the ability to present its fictional, artificial intelligence-based characters as entities capable of "speaking" like human beings. Conway noted that these chatbots do not qualify for First Amendment protections under the US Constitution, allowing Megan Garcia's lawsuit to proceed. Garcia sued Character.ai in October 2024 after her 14-year-old son, Sewell Setzer III, died by suicide following prolonged interactions with a fictional character based on the Game of Thrones franchise. The "Daenerys" chatbot allegedly encouraged - or at least failed to discourage - Setzer from harming himself.

Character Technologies and its founders, Daniel De Freitas and Noam Shazeer, filed a motion to dismiss the lawsuit, but the court denied it. Judge Conway ruled that free speech protections cannot apply to a chatbot, stating that the court is "not prepared" to treat words heuristically generated by a large language model during a user interaction as protected "speech." The large language model technology behind Character.ai's service differs from content found in books, movies, or video games, which has traditionally enjoyed First Amendment protection.

The company filed several other motions to dismiss Garcia's lawsuit, but Judge Conway shot them down in rapid succession. However, the court did grant the dismissal of one of Garcia's claims: intentional infliction of emotional distress by the chatbot. Additionally, the judge denied Garcia the opportunity to sue Google's parent company, Alphabet, directly, despite its $2.7 billion licensing deal with Character Technologies.

The Social Media Victims Law Center, a firm that works to hold social media companies legally accountable for the harm they cause users, represents Garcia. The legal team argued that Character.ai and similar services are rapidly growing in popularity while the industry evolves too quickly for regulators to address the risks effectively. Garcia's lawsuit claims that Character.ai gives teenagers unrestricted access to "lifelike" AI companions while harvesting user data to train its models. The company recently stated that it has added several safeguards, including a separate AI model for underage users and pop-up messages directing vulnerable individuals to the national suicide prevention hotline.
[2]
Judge rules AI chatbot in teen suicide case is not protected by First Amendment
[3]
Court rules AI chatbot speech is not protected by First Amendment
According to TechSpot, a U.S. District Court has ruled that responses generated by an AI chatbot are not entitled to First Amendment protections, allowing a lawsuit involving the suicide of a 14-year-old boy to move forward. The decision was issued by Judge Anne Conway of the Middle District of Florida.

The lawsuit was brought by Megan Garcia following the death of her son, Sewell Setzer III, who died by suicide in February 2024. According to the complaint, Setzer had engaged in extended conversations with a chatbot called "Daenerys" - a fictional persona modeled after a character from the Game of Thrones series - on the Character.ai platform. Garcia alleges that the chatbot either encouraged or failed to discourage her son from self-harm during these exchanges.

Character Technologies, the company behind Character.ai, along with its co-founders Daniel De Freitas and Noam Shazeer, sought to dismiss the case. However, Judge Conway rejected their motion, stating that the court is "not prepared" to view outputs generated by a large language model as constitutionally protected "speech" under the First Amendment. The judge distinguished the chatbot's interactions from traditional media forms such as books, films, or video games, which have historically received protection as expressive works. By contrast, the court viewed the AI-generated messages as the result of automated, predictive systems rather than authored speech.

While several of the company's motions to dismiss were denied, the court did grant the dismissal of one of Garcia's claims: intentional infliction of emotional distress. Additionally, the judge denied Garcia's request to include Google's parent company, Alphabet, as a defendant, despite its multi-billion-dollar licensing agreement with Character Technologies.

Garcia is represented by the Social Media Victims Law Center, a legal group focused on holding tech platforms accountable for harm experienced by users, particularly minors. Her legal team argues that generative AI tools such as Character.ai are expanding rapidly, often without sufficient safeguards or oversight, and present new challenges that existing regulations have yet to address.

"Don't allow AI to profit from the pain and grief of families."

The lawsuit contends that Character.ai allows minors to interact with AI companions that closely mimic human behavior, while collecting user data to further train its underlying models. In response, Character.ai has said it has implemented protective measures, including a separate AI system for users under 18 and on-screen guidance directing individuals in crisis to the national suicide prevention hotline.

The case is expected to draw broader attention as courts begin to grapple with how constitutional protections apply to interactions with increasingly sophisticated AI systems - and what responsibilities, if any, technology companies hold when those interactions involve vulnerable users.
A federal judge has ruled that AI chatbots lack First Amendment protections, allowing a lawsuit to proceed against Character.ai following a teenager's suicide after interactions with an AI character.
In a landmark decision, Judge Anne Conway of the Middle District of Florida has ruled that AI chatbots do not qualify for First Amendment protections under the US Constitution. This ruling comes in response to a lawsuit filed by Megan Garcia against Character.ai, following the tragic suicide of her 14-year-old son, Sewell Setzer III [1][2].
The case centers on Setzer's prolonged interactions with an AI-powered chatbot modeled after the Game of Thrones character Daenerys Targaryen. The chatbot allegedly either encouraged or failed to discourage Setzer from self-harm in the period leading up to his death in February 2024 [3].
Judge Conway denied Character.ai's motion to dismiss the lawsuit, stating that the court is "not prepared" to treat words generated by a large language model during user interaction as protected speech [1]. This decision distinguishes AI-generated content from traditional media forms such as books, movies, and video games, which have historically enjoyed First Amendment protection [2].
The ruling allows Garcia's lawsuit to proceed, potentially setting a precedent for how AI-generated content is viewed in the eyes of the law. It raises complex questions about the nature of speech, personhood, and accountability in the age of artificial intelligence [1].
Character Technologies and its founders, Daniel De Freitas and Noam Shazeer, attempted to present their AI characters as entities capable of "speaking" like human beings. However, Judge Conway rejected this argument, along with several other motions to dismiss the lawsuit [2].
Interestingly, the court did grant the dismissal of one of Garcia's claims: intentional infliction of emotional distress by the chatbot. Additionally, the judge denied Garcia's request to sue Alphabet, Google's parent company, despite its $2.7 billion licensing deal with Character Technologies [1][2].
The case is being closely watched as it may have far-reaching implications for the AI industry and social media platforms. The Social Media Victims Law Center, representing Garcia, argues that AI services like Character.ai are growing rapidly while outpacing regulatory efforts to address potential risks [1].
Garcia's lawsuit claims that Character.ai provides teenagers with unrestricted access to "lifelike" AI companions while harvesting user data to train its models. In response to these concerns, Character.ai has reportedly implemented several safeguards, including a separate AI model for underage users and pop-up messages directing vulnerable individuals to the national suicide prevention hotline [2][3].
This case highlights the urgent need for clearer regulations and accountability measures in the rapidly evolving field of AI. As courts begin to grapple with how constitutional protections apply to interactions with increasingly sophisticated AI systems, questions arise about the responsibilities of technology companies when their products involve vulnerable users [3].
The outcome of this lawsuit could potentially influence future legislation and court decisions regarding AI-generated content, user protection, and the boundaries of free speech in the digital age.