Curated by THEOUTPOST
On Mon, 5 Aug, 4:04 PM UTC
2 Sources
[1]
Inbred, Gibberish Or Just MAD? Warnings Rise About AI Models
When academic Jathan Sadowski reached for an analogy last year to describe how AI programs decay, he landed on the term "Habsburg AI".

The Habsburgs were one of Europe's most powerful royal houses, but entire sections of their family line collapsed after centuries of inbreeding. Recent studies have shown how AI programs underpinning products like ChatGPT go through a similar collapse when they are repeatedly fed their own data.

"I think the term Habsburg AI has aged very well," Sadowski told AFP, saying his coinage had "only become more relevant for how we think about AI systems".

The ultimate concern is that AI-generated content could take over the web, which could in turn render chatbots and image generators useless and throw a trillion-dollar industry into a tailspin.

But other experts argue that the problem is overstated, or can be fixed. And many companies are enthusiastic about using what they call synthetic data to train AI programs. This artificially generated data is used to augment or replace real-world data. It is cheaper than human-created content but more predictable.

"The open question for researchers and companies building AI systems is: how much synthetic data is too much," said Sadowski, lecturer in emerging technologies at Australia's Monash University.

Training AI programs, known in the industry as large language models (LLMs), involves scraping vast quantities of text or images from the internet. This information is broken into trillions of tiny machine-readable chunks, known as tokens. When asked a question, a program like ChatGPT selects and assembles tokens in a way that its training data tells it is the most likely sequence to fit with the query.

But even the best AI tools generate falsehoods and nonsense, and critics have long expressed concern about what would happen if a model was fed on its own outputs.

In late July, a paper in the journal Nature titled "AI models collapse when trained on recursively generated data" proved a lightning rod for discussion. The authors described how models quickly discarded rarer elements in their original dataset and, as Nature reported, outputs degenerated into "gibberish".

A week later, researchers from Rice and Stanford universities published a paper titled "Self-consuming generative models go MAD" that reached a similar conclusion. They tested image-generating AI programs and showed that outputs become more generic and strafed with undesirable elements as they added AI-generated data to the underlying model. They labelled model collapse "Model Autophagy Disorder" (MAD) and compared it to mad cow disease, a fatal illness caused by feeding the remnants of dead cows to other cows.

These researchers worry that AI-generated text, images and video are clearing the web of usable human-made data. "One doomsday scenario is that if left uncontrolled for many generations, MAD could poison the data quality and diversity of the entire internet," one of the Rice University authors, Richard Baraniuk, said in a statement.

However, industry figures are unfazed. Anthropic and Hugging Face, two leaders in the field who pride themselves on taking an ethical approach to the technology, both told AFP they used AI-generated data to fine-tune or filter their datasets.

Anton Lozhkov, machine learning engineer at Hugging Face, said the Nature paper gave an interesting theoretical perspective but its disaster scenario was not realistic. "Training on multiple rounds of synthetic data is simply not done in reality," he said.
However, he said researchers were just as frustrated as everyone else with the state of the internet. "A large part of the internet is trash," he said, adding that Hugging Face already made huge efforts to clean data -- sometimes jettisoning as much as 90 percent. He hoped that web users would help clear up the internet by simply not engaging with generated content. "I strongly believe that humans will see the effects and catch generated data way before models will," he said.
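The article's description of how an LLM "selects and assembles tokens" into the most likely sequence can be made concrete with a toy example. The Python snippet below is a minimal, hypothetical illustration (a simple bigram frequency table, nothing like a production model such as ChatGPT): it counts which token tends to follow which in a tiny corpus, then generates text by repeatedly picking the most frequent continuation.

from collections import Counter, defaultdict

# Tiny stand-in for the web-scale text an LLM is trained on.
corpus = "the cat sat on the mat . the dog slept on the rug .".split()

# Count how often each token follows each other token (a bigram table).
# Real models learn far richer statistics over trillions of tokens, but the
# principle is the same: store which continuations the training data makes likely.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=6):
    """Assemble tokens by always choosing the most frequent next token."""
    out = [start]
    for _ in range(length):
        counts = next_counts.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # greedy continuation built purely from the toy statistics

A real model replaces the frequency table with billions of learned parameters and samples among likely continuations rather than always taking the top one, but the sketch shows why the quality of the training text directly shapes what comes out.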
[2]
Inbred, gibberish or just MAD? Warnings rise about AI models
Paris (AFP) - When academic Jathan Sadowski reached for an analogy last year to describe how AI programs decay, he landed on the term "Habsburg AI".

The Habsburgs were one of Europe's most powerful royal houses, but entire sections of their family line collapsed after centuries of inbreeding. Recent studies have shown how AI programs underpinning products like ChatGPT go through a similar collapse when they are repeatedly fed their own data.

"I think the term Habsburg AI has aged very well," Sadowski told AFP, saying his coinage had "only become more relevant for how we think about AI systems".

The ultimate concern is that AI-generated content could take over the web, which could in turn render chatbots and image generators useless and throw a trillion-dollar industry into a tailspin.

But other experts argue that the problem is overstated, or can be fixed. And many companies are enthusiastic about using what they call synthetic data to train AI programs. This artificially generated data is used to augment or replace real-world data. It is cheaper than human-created content but more predictable.

"The open question for researchers and companies building AI systems is: how much synthetic data is too much," said Sadowski, lecturer in emerging technologies at Australia's Monash University.

'Mad cow disease'

Training AI programs, known in the industry as large language models (LLMs), involves scraping vast quantities of text or images from the internet. This information is broken into trillions of tiny machine-readable chunks, known as tokens. When asked a question, a program like ChatGPT selects and assembles tokens in a way that its training data tells it is the most likely sequence to fit with the query.

But even the best AI tools generate falsehoods and nonsense, and critics have long expressed concern about what would happen if a model was fed on its own outputs.

In late July, a paper in the journal Nature titled "AI models collapse when trained on recursively generated data" proved a lightning rod for discussion. The authors described how models quickly discarded rarer elements in their original dataset and, as Nature reported, outputs degenerated into "gibberish".

A week later, researchers from Rice and Stanford universities published a paper titled "Self-consuming generative models go MAD" that reached a similar conclusion. They tested image-generating AI programs and showed that outputs become more generic and strafed with undesirable elements as they added AI-generated data to the underlying model. They labelled model collapse "Model Autophagy Disorder" (MAD) and compared it to mad cow disease, a fatal illness caused by feeding the remnants of dead cows to other cows.

'Doomsday scenario'

These researchers worry that AI-generated text, images and video are clearing the web of usable human-made data. "One doomsday scenario is that if left uncontrolled for many generations, MAD could poison the data quality and diversity of the entire internet," one of the Rice University authors, Richard Baraniuk, said in a statement.

However, industry figures are unfazed. Anthropic and Hugging Face, two leaders in the field who pride themselves on taking an ethical approach to the technology, both told AFP they used AI-generated data to fine-tune or filter their datasets.

Anton Lozhkov, machine learning engineer at Hugging Face, said the Nature paper gave an interesting theoretical perspective but its disaster scenario was not realistic. "Training on multiple rounds of synthetic data is simply not done in reality," he said.

However, he said researchers were just as frustrated as everyone else with the state of the internet. "A large part of the internet is trash," he said, adding that Hugging Face already made huge efforts to clean data -- sometimes jettisoning as much as 90 percent.

He hoped that web users would help clear up the internet by simply not engaging with generated content. "I strongly believe that humans will see the effects and catch generated data way before models will," he said.
Experts raise alarms about the potential limitations and risks associated with large language models (LLMs) in AI. Concerns include data quality, model degradation, and the need for improved AI development practices.
As artificial intelligence continues to advance at a rapid pace, experts in the field are sounding the alarm about potential limitations and risks associated with large language models (LLMs). These models, which power popular AI chatbots and other applications, are coming under scrutiny for issues that could impact their performance and reliability [1].
One of the primary concerns raised by researchers is the phenomenon known as "model collapse", in which models trained repeatedly on their own outputs begin to produce repetitive or nonsensical text. Recent studies have demonstrated the effect in models of the kind that underpin products such as ChatGPT, the popular AI chatbot developed by OpenAI [2].
Experts are also warning about the quality of data used to train these AI models. There are fears that as AI-generated content proliferates online, future models may inadvertently be trained on this artificial data, leading to a form of "inbreeding" that could degrade the quality of AI outputs over time [1].
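A rough sense of this inbreeding dynamic can be had from a toy simulation. The sketch below is illustrative only, not the actual experiments from the Nature or Rice/Stanford papers: it fits a trivial "model" (a mean and a spread) to some data, samples synthetic data from that fit, retrains on the synthetic sample alone, and repeats. The deliberately small sample size exaggerates the effect so the degradation shows up within a few dozen generations.

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, a normal distribution with genuine rare, extreme values.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(51):
    # "Train" a trivial model: estimate the mean and spread of the current data.
    mu, sigma = data.mean(), data.std()
    if generation % 10 == 0:
        print(f"generation {generation:2d}: fitted sigma = {sigma:.3f}")
    # The next generation sees only data sampled from the previous model.
    data = rng.normal(loc=mu, scale=sigma, size=20)

# The fitted spread tends to shrink generation after generation, so rare,
# extreme values vanish first: the qualitative pattern the papers describe.

Real models are vastly more complex, but the same feedback loop is at work: each generation can only reproduce what the previous one emitted, and whatever was rare in the original data is the first thing to go.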
Another significant issue highlighted by researchers is the difficulty in evaluating the performance of AI models. Traditional metrics may not capture the full range of an AI's capabilities or limitations, making it challenging to assess their true effectiveness and potential risks [2].
In light of these concerns, there are growing calls within the AI community for more robust development and testing practices. Experts emphasize the need for careful curation of training data, improved evaluation methods, and greater transparency in AI development processes [1].
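Curation in practice often starts with blunt heuristics. The snippet below is a hypothetical, highly simplified quality filter of the kind such pipelines might apply before training; real cleaning efforts, including the one Hugging Face describes, rely on far more sophisticated rules and learned classifiers. It drops documents that are very short, highly repetitive, or dominated by non-alphabetic noise.

def looks_usable(doc: str) -> bool:
    """Very rough quality heuristics for one training document (illustrative only)."""
    words = doc.split()
    if len(words) < 20:                        # too short to carry much signal
        return False
    if len(set(words)) / len(words) < 0.3:     # highly repetitive text
        return False
    if sum(c.isalpha() for c in doc) / max(len(doc), 1) < 0.6:
        return False                           # mostly symbols, markup or noise
    return True

documents = [
    "Buy now!!! Buy now!!! Buy now!!! Buy now!!! Buy now!!!",
    "The Habsburgs were one of Europe's most powerful royal houses, but entire "
    "sections of their family line collapsed after centuries of inbreeding.",
]
kept = [doc for doc in documents if looks_usable(doc)]
print(f"kept {len(kept)} of {len(documents)} documents")  # the spammy document is dropped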
These warnings have significant implications for the wide range of applications that rely on LLMs, from chatbots and virtual assistants to more specialized tools used in various industries. As AI becomes increasingly integrated into daily life and business operations, addressing these potential limitations becomes crucial [2].
The emerging concerns are also fueling discussions about the need for regulation in the AI industry. Policymakers and industry leaders are grappling with how to balance innovation with safeguards against potential risks and limitations of AI technologies [1].
As the debate continues, the AI community faces the challenge of addressing these issues while continuing to push the boundaries of what's possible with artificial intelligence. The coming years will likely see increased focus on developing more robust, reliable, and transparent AI systems that can meet the growing demands and expectations placed upon them [2].
Reference
[1]
Researchers warn that the proliferation of AI-generated web content could lead to a decline in the accuracy and reliability of large language models (LLMs). This phenomenon, dubbed "model collapse," poses significant challenges for the future of AI development and its applications.
8 Sources
Generative AI's rapid advancement raises concerns about its sustainability and potential risks. Experts warn about the technology's ability to create content that could undermine its own training data and reliability.
2 Sources
Tech companies are increasingly turning to synthetic data for AI model training due to a potential shortage of human-generated data. While this approach offers solutions, it also presents new challenges that need to be addressed to maintain AI accuracy and reliability.
2 Sources
Synthetic data is emerging as a game-changer in AI and machine learning, offering solutions to data scarcity and privacy concerns. However, its rapid growth is sparking debates about authenticity and potential risks.
2 Sources
AI firms are encountering a significant challenge as data owners increasingly restrict access to their intellectual property for AI training. This trend is causing a shrinkage in available training data, potentially impacting the development of future AI models.
3 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved