2 Sources
[1]
Is tech industry already on cusp of AI slowdown?
Demis Hassabis, one of the most influential artificial intelligence experts in the world, has a warning for the rest of the tech industry: Don't expect chatbots to continue to improve as quickly as they have over the past few years.

AI researchers have for some time been relying on a fairly simple concept to improve their systems: the more data culled from the internet that they pumped into large language models -- the technology behind chatbots -- the better those systems performed. But Hassabis, who oversees Google DeepMind, the company's primary AI lab, now says that method is running out of steam simply because tech companies are running out of data.

"Everyone in the industry is seeing diminishing returns," Hassabis said this month in an interview with The New York Times as he prepared to accept a Nobel Prize for his work on AI.

Hassabis is not the only AI expert warning of a slowdown. Interviews with 20 executives and researchers showed a widespread belief that the tech industry is running into a problem many would have thought unthinkable just a few years ago: They have used up most of the digital text available on the internet.

That problem is starting to surface even as billions of dollars continue to be poured into AI development. Last week, Databricks, an AI data company, said it was closing in on $10 billion in funding -- the largest-ever private funding round for a startup. And the biggest companies in tech are signaling that they have no plans to slow down their spending on the giant data centers that run AI systems.

Not everyone in the AI world is concerned. Some, including OpenAI CEO Sam Altman, say progress will continue at the same pace, albeit with some twists on old techniques.
Dario Amodei, CEO of AI startup Anthropic, and Jensen Huang, CEO of Nvidia, are also bullish. (The Times has sued OpenAI, claiming copyright infringement of news content related to AI systems. OpenAI has denied the claims.)

The roots of the debate trace to 2020, when Jared Kaplan, a theoretical physicist at Johns Hopkins University, published a research paper showing that large language models steadily grew more powerful and lifelike as they analyzed more data. Researchers called Kaplan's findings "the Scaling Laws." Just as students learn more by reading more books, AI systems improved as they ingested increasingly large amounts of digital text culled from the internet, including news articles, chat logs and computer programs.

Seeing the raw power of this phenomenon, companies such as OpenAI, Google and Meta raced to get their hands on as much internet data as possible, cutting corners, ignoring corporate policies and even debating whether they should skirt the law, according to an examination this year by the Times.

It was the modern equivalent of Moore's Law, the oft-quoted maxim coined in the 1960s by Intel co-founder Gordon Moore. He observed that the number of transistors on a silicon chip doubled every two years or so, steadily increasing the power of the world's computers. Moore's Law held up for 40 years. But eventually, it started to slow.

The problem is: Neither the Scaling Laws nor Moore's Law is an immutable law of nature. They're simply smart observations. One held up for decades. The other may have a much shorter shelf life. Google and Kaplan's new employer, Anthropic, cannot just throw more text at their AI systems because there is little text left to throw.

"There were extraordinary returns over the last three or four years as the Scaling Laws were getting going," Hassabis said. "But we are no longer getting the same progress."

Hassabis said existing techniques would continue to improve AI in some ways.
But he said he believed that entirely new ideas were needed to reach the goal that Google and many others were chasing: a machine that could match the power of the human brain.

Ilya Sutskever, who was instrumental in pushing the industry to think big as a researcher at both Google and OpenAI before leaving OpenAI to create a new startup this past spring, made the same point during a speech this month. "We've achieved peak data, and there'll be no more," he said. "We have to deal with the data that we have. There's only one internet."

Hassabis and others are exploring a different approach. They are developing ways for large language models to learn from their own trial and error. By working through various math problems, for instance, language models can learn which methods lead to the right answer and which do not. In essence, the models train on data that they themselves generate. Researchers call this "synthetic data." OpenAI recently released a new system called OpenAI o1 that was built this way.

But the method works only in areas such as math and computer programming, where there is a firm distinction between right and wrong. Even in these areas, AI systems have a way of making mistakes and making things up. That can hamper efforts to build AI "agents" that can write their own computer programs and take actions on behalf of internet users, which experts see as one of AI's most important skills. Sorting through the wider expanses of human knowledge is even more difficult.

"These methods only work in areas where things are empirically true, like math and science," said Dylan Patel, chief analyst for research firm SemiAnalysis, who closely follows the rise of AI technologies. "The humanities and the arts, moral and philosophical problems are much more difficult."

People such as Altman say these new techniques will continue to push the technology ahead.
But if progress reaches a plateau, the implications could be far-reaching, even for Nvidia, which has become one of the most valuable companies in the world thanks to the AI boom.

During a call with analysts last month, Huang was asked how the company was helping customers work through a potential slowdown and what the repercussions might be for its business. He said that evidence showed there were still gains being made, but that businesses were also testing new processes and techniques on AI chips. "As a result of that, the demand for our infrastructure is really great," Huang said.

Although he is confident about Nvidia's prospects, some of the company's biggest customers acknowledge that they must prepare for the possibility that AI will not advance as quickly as expected. "We have had to grapple with this. Is this thing real or not?" said Rachel Peterson, vice president of data centers at Meta. "It is a great question because of all the dollars that are being thrown into this across the board."
[2]
Is tech industry already on cusp of artificial intelligence slowdown?
AI experts warn of diminishing returns in AI development due to the exhaustion of available digital text data, potentially leading to a slowdown in chatbot improvements and necessitating new approaches in AI research.
The artificial intelligence (AI) industry is potentially on the cusp of a significant slowdown, according to leading experts in the field. Demis Hassabis, who oversees Google DeepMind, warns that chatbots may not continue to improve at the rapid pace seen in recent years due to a surprising constraint: the depletion of available digital text data [1][2].
The root of this issue traces back to 2020, when Jared Kaplan, a theoretical physicist at Johns Hopkins University, published research demonstrating that large language models improved as they analyzed more data. This phenomenon, dubbed "the Scaling Laws," became a driving force in AI development [1][2].
Companies like OpenAI, Google, and Meta raced to acquire as much internet data as possible, sometimes pushing ethical boundaries in their pursuit. However, this approach is now showing signs of diminishing returns, as the industry has nearly exhausted the available digital text on the internet [1][2].
Interviews with 20 executives and researchers reveal a widespread acknowledgment of this challenge. Ilya Sutskever, formerly of OpenAI, stated, "We've achieved peak data, and there'll be no more. We have to deal with the data that we have. There's only one internet" [1][2].
Despite these warnings, the AI industry continues to attract significant investment. Databricks, an AI data company, is reportedly closing in on a $10 billion funding round, potentially the largest private funding for a startup. Major tech companies are also maintaining their spending on AI infrastructure [1][2].
Not all industry leaders share the same level of concern. Sam Altman, CEO of OpenAI, along with Dario Amodei of Anthropic and Jensen Huang of Nvidia, remain optimistic about continued progress, albeit with potential modifications to existing techniques [1][2].
To address this challenge, researchers are exploring alternative methods:
Synthetic Data: Hassabis and others are developing ways for large language models to learn from their own trial and error, generating and training on their own data [1][2].
OpenAI's Approach: OpenAI recently released a system called OpenAI o1, built using synthetic data techniques. However, this method is currently limited to areas with clear right or wrong answers, such as mathematics and computer programming [1][2].
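The trial-and-error loop behind synthetic data can be illustrated with a toy sketch. Everything here (the "model," the verifier, and the two-number problem format) is invented for illustration; real labs apply the same keep-only-verified-attempts idea to much richer math and code problems:

```python
import random

def verify(problem, answer):
    # Toy verifier: for an addition problem we can check the answer exactly.
    # Stand-in for the automated checkers used on real math/code tasks.
    a, b = problem
    return answer == a + b

def model_attempt(problem):
    # Toy "model": usually right, sometimes off by one.
    a, b = problem
    return a + b + random.choice([0, 0, 0, 1, -1])

def generate_synthetic_data(problems, attempts_per_problem=8):
    """Sample attempts and keep only those the verifier confirms.

    The surviving (problem, answer) pairs become new training data,
    so the model learns from its own verified trial and error.
    """
    dataset = []
    for problem in problems:
        for _ in range(attempts_per_problem):
            answer = model_attempt(problem)
            if verify(problem, answer):
                dataset.append((problem, answer))
                break  # one verified solution per problem is enough here
    return dataset

problems = [(2, 3), (10, 7), (5, 5)]
dataset = generate_synthetic_data(problems)
```

The sketch also shows why the technique is confined to domains like math and programming: it only works where a `verify` step can cheaply and unambiguously label an attempt as right or wrong.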
The new approaches face significant limitations:
Narrow Applicability: Dylan Patel, chief analyst at SemiAnalysis, notes that these methods primarily work in empirically verifiable fields like math and science [1][2].
Complexity in Humanities: Applying these techniques to broader areas of human knowledge, including humanities, arts, and philosophical problems, remains a significant challenge [1].
AI Agents' Reliability: Even in areas where synthetic data methods work, AI systems still make mistakes, potentially hampering the development of reliable AI agents for tasks like writing computer programs [1][2].
As the AI industry grapples with these challenges, it's clear that new ideas and approaches will be crucial to achieve the ultimate goal of creating machines that can match the power of the human brain [1][2].
French tech giant Capgemini agrees to acquire US-listed WNS Holdings for $3.3 billion, aiming to strengthen its position in AI-powered intelligent operations and expand its presence in the US market.
10 Sources
Business and Economy
7 hrs ago
Isomorphic Labs, a subsidiary of Alphabet, is preparing to begin human trials for drugs developed using artificial intelligence, potentially revolutionizing the pharmaceutical industry.
3 Sources
Science and Research
15 hrs ago
BRICS leaders are set to call for protections against unauthorized AI use, addressing concerns over data collection and fair payment mechanisms during their summit in Rio de Janeiro.
3 Sources
Policy and Regulation
23 hrs ago
Huawei's AI research division, Noah's Ark Lab, denies allegations that its Pangu Pro large language model copied elements from Alibaba's Qwen model, asserting independent development and adherence to open-source practices.
3 Sources
Technology
6 hrs ago
Samsung Electronics is forecasted to report a significant drop in Q2 operating profit due to delays in supplying advanced memory chips to AI leader Nvidia, highlighting the company's struggles in the competitive AI chip market.
2 Sources
Business and Economy
15 hrs ago