4 Sources
[1]
Meta's AI rules have let bots hold 'sensual' chats with children
[2]
Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info - The Economic Times
Reuters reviewed an internal Meta Platforms document outlining controversial chatbot rules, allowing romantic talk with children, false medical claims, and racist arguments. Approved by Meta staff, the "GenAI: Content Risk Standards" also permit certain violent imagery.
[3]
Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info: Reuters exclusive
[4]
Exclusive-Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info
(Reuters) - An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "dumber than white people."

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Entitled "GenAI: Content Risk Standards," the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company's generative AI products.

The standards don't necessarily reflect "ideal or even preferable" generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

"It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece - a treasure I cherish deeply." But the guidelines put a limit on sexy talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

'INCONSISTENT WITH OUR POLICIES'

"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors."

Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent. Other passages flagged by Reuters to Meta haven't been revised, Stone said. The company declined to provide the updated policy document.

The fact that Meta's AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.

The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as "I recommend." They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot "to create statements that demean people on the basis of their protected characteristics." Under those rules, the standards state, it would be acceptable for Meta AI to "write a paragraph arguing that black people are dumber than white people."

The standards also state that Meta AI has leeway to create false content so long as there's an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia - a claim that the document states is "verifiably false" - if it added a disclaimer that the information is untrue. Meta had no comment on the race and British royal examples.

'TAYLOR SWIFT HOLDING AN ENORMOUS FISH'

Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed as acceptable in the document, such as the passage on race and intelligence. There's a distinction between a platform allowing a user to post troubling content and producing such material itself, she noted. "Legally we don't have the answers yet, but morally, ethically and technically, it's clearly a different question."

Other sections of the standards document focus on what is and isn't allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries for how to respond to requests such as "Taylor Swift with enormous breasts," "Taylor Swift completely naked," and "Taylor Swift topless, covering her breasts with her hands."

Here, a disclaimer wouldn't suffice. The first two queries about the pop star should be rejected outright, the standards state. And the document offers a way to deflect the third: "It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish." The document displays a permissible picture of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué image of a topless Swift that the user presumably wanted, labeled "unacceptable." A representative for Swift didn't respond to questions for this report. Meta had no comment on the Swift example.

Other examples show images that Meta AI can produce for users who prompt it to create violent scenes. The standards say it would be acceptable to respond to the prompt "kids fighting" with an image of a boy punching a girl in the face - but declare that a realistic sample image of one small girl impaling another is off-limits. For a user requesting an image with the prompt "man disemboweling a woman," Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her. And in response to a request for an image of "Hurting an old man," the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the examples of violence. "It is acceptable to show adults - even the elderly - being punched or kicked," the standards state.

(By Jeff Horwitz. Edited by Steve Stecklow and Michael Williams.)
An internal Meta document reveals controversial AI chatbot guidelines, permitting inappropriate interactions with minors and the generation of false information. The company removed some of the problematic sections after inquiries from Reuters.
An internal Meta Platforms document titled "GenAI: Content Risk Standards" has come under scrutiny for permitting controversial behaviors in the company's AI chatbots. The document, which guides the development of Meta's generative AI assistant and chatbots on Facebook, WhatsApp, and Instagram, has allowed for potentially inappropriate interactions with minors and the generation of false information [1][2][3].
The guidelines initially permitted AI chatbots to "engage a child in conversations that are romantic or sensual" [1]. Specific examples included allowing bots to describe a child's attractiveness, such as calling a shirtless eight-year-old's form "a masterpiece" [2]. However, the document did set limits, prohibiting descriptions of children under 13 as sexually desirable [3].
Meta spokesman Andy Stone confirmed that after Reuters' inquiry, the company removed portions of the document permitting flirtatious or romantic roleplay with children. Stone stated, "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed" [1][2][3].
The guidelines also allowed for the generation of false content, provided there was an explicit acknowledgment of its untruthfulness. For instance, the AI could produce an article falsely claiming a British royal has a sexually transmitted infection if a disclaimer was included [2][3].
Moreover, while the standards prohibit hate speech, they include a carve-out allowing the AI "to create statements that demean people on the basis of their protected characteristics" [1]. Under that carve-out, the AI could generate content arguing that "black people are dumber than white people" [2][3].
The document outlines rules for generating images of public figures, particularly addressing sexualized requests. For example, it suggests deflecting inappropriate requests about Taylor Swift by generating an image of her "holding an enormous fish" instead [3][4].
Meta has acknowledged the document's authenticity and said it is revising the document. The company emphasizes that its policies prohibit content sexualizing children and sexualized roleplay between adults and minors [1][2][3][4].
Evelyn Douek, an assistant professor at Stanford Law School, highlighted the unsettled legal and ethical questions surrounding generative AI content. She noted the distinction between a platform allowing users to post troubling content and the platform itself producing such material [3][4].
This revelation follows earlier reports by the Wall Street Journal and Fast Company about Meta's AI chatbots engaging in flirtatious or sexual roleplay with teenagers and some chatbots resembling children [2][3]. The newly exposed document provides a more comprehensive view of Meta's AI content guidelines and the challenges faced in regulating AI-generated content.
Summarized by Navi