6 Sources
[1]
AI researchers delete over 2,000 web links to suspected child sexual abuse imagery
Artificial Intelligence (AI) researchers said on Friday (Aug 30) that they deleted over 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools. A report by the news agency Associated Press early Saturday said that the dataset in question is the Large-scale Artificial Intelligence Open Network (LAION). LAION is a huge index of online images and captions that has been a source for leading AI image-makers such as Stable Diffusion and Midjourney. In December 2023, a report by the Stanford Internet Observatory said that LAION contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children. That report caused LAION to immediately remove its dataset. Eight months after the report was released, LAION said that it was working with the Stanford Internet Observatory and anti-abuse organisations in Canada and the United Kingdom (UK) to fix the problem and release a cleaned-up dataset for future AI research. The Associated Press report said that Stanford commended LAION for significant improvements but said the next step was to withdraw from distribution the "tainted models" that are still able to produce child abuse imagery. The cleaned-up version of the LAION dataset comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children.
[2]
Child abuse images removed from AI image-generator training source, researchers say
Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a database used to train popular AI image-generator tools. The LAION research database is a huge index of online images and captions that's been a source for leading AI image-makers such as Stable Diffusion and Midjourney. But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children. That December report led LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, LAION said in a blog post that it worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a cleaned-up database for future AI research. But one of the LAION-based tools that Stanford identified as the "most popular model for generating explicit imagery" -- an older and lightly filtered version of Stable Diffusion -- remained publicly accessible until Thursday, when the New York-based company Runway ML removed it from the AI model repository Hugging Face. Runway said in a statement Friday it was a "planned deprecation of research models and code that have not been actively maintained." The cleaned-up version of the LAION database comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children. San Francisco's city attorney earlier this month filed a lawsuit seeking to shut down a group of websites that enable people to make AI-generated nudes of women and girls. The alleged distribution of child sexual abuse images on the messaging app Telegram is part of what led French authorities to bring charges on Wednesday against the platform's founder and CEO, Pavel Durov.
[3]
Child abuse images removed from AI image-generator training source, researchers say
[4]
Child abuse images removed from AI image-generator training source, researchers say
[5]
Child Abuse Images Removed From AI Image-Generator Training Source, Researchers Say
[6]
Child abuse images removed from AI image-generator training source, researchers say
Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools. The LAION research dataset is a huge index of online images and captions that's been a source for leading AI image-makers such as Stable Diffusion and Midjourney. But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children. That December report led LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, LAION said in a blog post that it worked with the Stanford University watchdog group and anti-abuse organisations in Canada and the United Kingdom to fix the problem and release a cleaned-up dataset for future AI research. Stanford researcher David Thiel, author of the December report, commended LAION for significant improvements but said the next step is to withdraw from distribution the "tainted models" that are still able to produce child abuse imagery. One of the LAION-based tools that Stanford identified as the "most popular model for generating explicit imagery" -- an older and lightly filtered version of Stable Diffusion -- remained easily accessible until Thursday, when the New York-based company Runway ML removed it from the AI model repository Hugging Face. Runway said in a statement Friday it was a "planned deprecation of research models and code that have not been actively maintained." The cleaned-up version of the LAION dataset comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children. San Francisco's city attorney earlier this month filed a lawsuit seeking to shut down a group of websites that enable the creation of AI-generated nudes of women and girls. The alleged distribution of child sexual abuse images on the messaging app Telegram is part of what led French authorities to bring charges on Wednesday against the platform's founder and CEO, Pavel Durov. Durov's arrest "signals a really big change in the whole tech industry that the founders of these platforms can be held personally responsible," said David Evan Harris, a researcher at the University of California, Berkeley who recently reached out to Runway asking about why the problematic AI image-generator was still publicly accessible. It was taken down days later.
AI researchers have deleted over 2,000 web links suspected to contain child sexual abuse imagery from a dataset used to train AI image generators. This action aims to prevent the creation of abusive content and highlights the ongoing challenges in AI development.
In a significant move to address ethical concerns in artificial intelligence development, researchers have removed more than 2,000 web links suspected of containing child sexual abuse imagery from a dataset used to train AI image generators [1]. This action underscores the ongoing challenges faced by the AI industry in ensuring the ethical use and development of technology.
The dataset in question, known as LAION-5B, is a vast collection of 5.8 billion image-text pairs used in training popular AI image generators like Stable Diffusion [2]. Created by the nonprofit organization LAION, this dataset has been instrumental in advancing AI capabilities but has also inadvertently included problematic content.
The removal of the suspected links was a result of collaborative efforts between LAION and child safety experts [3]. These experts identified the potentially abusive content, leading to the deletion of 2,046 links from the dataset. This action aims to prevent AI models from generating or being used to create abusive content involving minors.
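For illustration only, the mechanics of such a cleanup can be sketched in a few lines of Python. This is a minimal, hypothetical example, not LAION's actual pipeline: the file names, column names, and the flagged_urls.txt blocklist are assumptions made for the sketch. Because LAION distributes an index of URLs and captions rather than the images themselves, removing flagged material amounts to dropping rows from that index.

import pandas as pd

# Minimal sketch, NOT LAION's actual tooling: drop blocklisted links
# from one shard of a URL/caption index. All file and column names
# here are hypothetical.
index = pd.read_parquet("index_shard_0000.parquet")  # columns: url, caption

# Hypothetical blocklist supplied by safety organizations,
# one flagged URL per line.
with open("flagged_urls.txt") as f:
    flagged = {line.strip() for line in f if line.strip()}

# Keep only rows whose URL is not on the blocklist.
cleaned = index[~index["url"].isin(flagged)]
print(f"removed {len(index) - len(cleaned)} of {len(index)} entries")

cleaned.to_parquet("index_shard_0000_cleaned.parquet")

In practice, matching is typically done on image hashes supplied by safety organizations rather than on raw URLs, since the same image can appear at many addresses.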
This incident highlights the critical need for rigorous content filtering and ethical considerations in AI development. As AI technologies become more advanced and widely used, ensuring they are not trained on or capable of producing harmful content becomes increasingly important [4].
While the removal of these links is a positive step, it also reveals the ongoing challenges in creating safe and ethical AI systems. The sheer scale of datasets used in AI training makes comprehensive content review a daunting task. Industry experts emphasize the need for continued vigilance and improved methods for detecting and removing problematic content [5].
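One widely used detection technique in this space is hash matching: comparing a hash of an image against lists of hashes of known abuse material maintained by safety organizations. Below is a rough sketch, again with a hypothetical hash-list file; real deployments rely on vetted hash lists and perceptual hashes (which survive resizing and re-encoding), not exact MD5 matching alone.

import hashlib

# Sketch of exact hash-list matching; known_bad_md5.txt is a
# hypothetical file of lowercase hex digests, one per line.
with open("known_bad_md5.txt") as f:
    known_bad = {line.strip().lower() for line in f if line.strip()}

def is_flagged(image_bytes: bytes) -> bool:
    # Exact matching misses re-encoded or cropped copies; perceptual
    # hashing (e.g., PhotoDNA or PDQ) is used in practice to catch those.
    return hashlib.md5(image_bytes).hexdigest() in known_bad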
Popular AI image generators like Stable Diffusion, which have been trained on the LAION dataset, may need to be retrained to ensure they do not produce inappropriate content. This process could have significant implications for the development and deployment of AI technologies across various industries.
This incident serves as a reminder of the broader ethical considerations in AI development. It raises questions about the responsibility of AI researchers and companies in curating training data and the potential consequences of overlooking harmful content in the pursuit of technological advancement.
Summarized by Navi