Curated by THEOUTPOST
On Sat, 17 May, 8:01 AM UTC
2 Sources
[1]
Silicon Valley's Elusive Fantasy of a Computer as Smart as You
Cade Metz has covered the rise of artificial intelligence for more than 15 years.

Sam Altman, the chief executive of OpenAI, recently told President Trump during a private phone call that it would arrive before the end of his administration. Dario Amodei, the chief executive of Anthropic, OpenAI's primary rival, repeatedly told podcasters it could happen even sooner. The tech billionaire Elon Musk has said it could be here before the end of the year.

Like many other voices across Silicon Valley and beyond, these executives predict that the arrival of artificial general intelligence, or A.G.I., is imminent. Since the early 2000s, when a group of fringe researchers slapped the term on the cover of a book that described the autonomous computer systems they hoped to build one day, A.G.I. has served as shorthand for a future technology that achieves human-level intelligence. There is no settled definition of A.G.I., just an entrancing idea: an artificial intelligence that can match the many powers of the human mind.

Mr. Altman, Mr. Amodei and Mr. Musk have long chased this goal, as have executives and researchers at companies like Google and Microsoft. And thanks, in part, to their fervent pursuit of this ambitious idea, they have produced technologies that are changing the way hundreds of millions of people research, make art and program computers. These technologies are now poised to transform entire professions.

But since the arrival of chatbots like OpenAI's ChatGPT, and the rapid improvement of these strange and powerful systems over the last two years, many technologists have grown increasingly bold in predicting how soon A.G.I. will arrive. Some are even saying that once they deliver A.G.I., a more powerful creation called "superintelligence" will follow.

As these eternally confident voices predict the near future, their speculations are getting ahead of reality. And though their companies are pushing the technology forward at a remarkable rate, an army of more sober voices is quick to dispel any claim that machines will soon match human intellect.

"The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the A.I. start-up Cohere who previously worked as a researcher at Google and studied under the most revered A.I. researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do."

In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to A.G.I.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of I.Q. tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying A.G.I. is essentially a matter of opinion. (Last year, as part of a high-profile lawsuit, Mr. Musk's attorneys said it was already here, because OpenAI, one of Mr. Musk's chief rivals, has signed a contract with its main funder saying it will not sell products based on A.G.I. technology.)
And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of A.G.I.'s imminent arrival are based on statistical extrapolations -- and wishful thinking.

According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do. Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected -- the challenges, both small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before.

That is why Mr. Frosst and other skeptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take.

"A system that's better than humans in one way will not necessarily be better in other ways," the Harvard cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

'A.I. Can Get There'

Chatbots like ChatGPT are driven by what scientists call neural networks, mathematical systems that can identify patterns in text, images and sounds. By pinpointing patterns in vast troves of Wikipedia articles, news stories and chat logs, for instance, these systems can learn to generate humanlike text on their own, like poems and computer programs.

That means these systems are progressing much faster than computer technologies of the past. In previous decades, software engineers built applications one line of code at a time, a tiny-step-by-tiny-step process that could never produce something as powerful as ChatGPT. Because neural networks can learn from data, they can reach new heights and reach them quickly. After seeing the improvement of these systems over the last decade, some technologists believe the progress will continue at much the same rate -- to A.G.I. and beyond.

"There are all these trends where all of the limitations are going away," said Jared Kaplan, the chief science officer at Anthropic. "A.I. intelligence is quite different from human intelligence. Humans learn much more easily to do new tasks. They don't need to practice as much as A.I. needs to. But eventually, with more practice, A.I. can get there."

Among A.I. researchers, Dr. Kaplan is known for publishing a groundbreaking academic paper that described what are now called "the Scaling Laws." These laws essentially said: The more data an A.I. system analyzed, the better it would perform. Just as a student learns more by reading more books, an A.I. system finds more patterns in the text and learns to more accurately mimic the way people put words together.

In recent months, companies like OpenAI and Anthropic used up just about all of the English text on the internet, which meant they needed a new way of improving their chatbots. So they are leaning more heavily on a technique that scientists call reinforcement learning. Through this process, which can extend over weeks or months, a system can learn behavior through trial and error.
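To make the trial-and-error idea concrete, here is a minimal sketch of such a loop. It is an illustration only, with made-up strategy names, not a description of how any lab actually trains its models: a system tries techniques on subtraction problems, scores each attempt with an automatic check, and gradually favors whatever earns reward.

```python
import random

# A toy trial-and-error loop in the spirit of reinforcement learning.
# Illustrative sketch only: real systems adjust billions of neural-network
# weights, not a three-entry score table.

# Hypothetical "techniques" a system might try on a subtraction problem.
STRATEGIES = {
    "add": lambda a, b: a + b,        # wrong technique
    "reverse": lambda a, b: b - a,    # wrong technique
    "subtract": lambda a, b: a - b,   # correct technique
}

scores = {name: 0.0 for name in STRATEGIES}  # no initial preference

def reward(answer: int, correct: int) -> float:
    """Math has an undeniable answer, so reward is easy to compute."""
    return 1.0 if answer == correct else 0.0

for step in range(1000):
    a, b = random.randint(0, 99), random.randint(0, 99)
    if random.random() < 0.1:                    # explore occasionally
        name = random.choice(list(STRATEGIES))
    else:                                        # otherwise exploit the best
        name = max(scores, key=scores.get)
    r = reward(STRATEGIES[name](a, b), a - b)
    scores[name] += 0.1 * (r - scores[name])     # reinforce what worked

print(scores)  # "subtract" ends with by far the highest score
```

Over a thousand attempts, the correct technique accumulates reward and crowds out the others -- the same basic dynamic, vastly scaled up, that researchers describe for math and programming.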
By working through thousands of math problems, for instance, a system can learn which techniques tend to lead to the right answer and which do not. Thanks to this technique, researchers like Dr. Kaplan believe that the Scaling Laws (or something like them) will continue.

As the technology continues to learn through trial and error across myriad fields, researchers say, it will follow the path of AlphaGo, a machine built in 2016 by a team of Google researchers. Through reinforcement learning, AlphaGo learned to master the game of Go, a complex Chinese board game that is compared to chess, by playing millions of games against itself. That spring, it beat one of the world's best players, stunning the A.I. community and the world. Most researchers had assumed that A.I. needed another 10 years to achieve such a feat. AlphaGo played in ways no human ever had, teaching the top players new strategic approaches to this ancient game.

For some, the belief is that systems like ChatGPT will take the same leap, reaching A.G.I. and then superintelligence. But games like Go follow a small, limited set of rules. The real world is bounded only by the laws of physics. Modeling the entirety of the real world is well beyond today's machines, so how can anyone be sure that A.G.I. -- let alone superintelligence -- is just around the corner?

The Gap Between Humans and Machines

It is indisputable that today's machines have already eclipsed the human brain in some ways, but that has been true for a long time. A calculator can do basic math faster than a human. Chatbots like ChatGPT can write faster, and as they write, they can instantly draw on more texts than any human brain could ever read or remember. These systems are exceeding human performance on some tests involving high-level math and coding. But people cannot be reduced to these benchmarks.

"There are many kinds of intelligence out there in the natural world," said Josh Tenenbaum, a professor of computational cognitive science at the Massachusetts Institute of Technology.

One obvious difference is that human intelligence is tied to the physical world. It extends beyond words and numbers and sounds and images into the realm of tables and chairs and stoves and frying pans and buildings and cars and whatever else we encounter with each passing day. Part of intelligence is knowing when to flip a pancake sitting on the griddle.

Some companies are already training humanoid robots in much the same way that others are training chatbots. But this is more difficult and more time-consuming than building ChatGPT, requiring extensive training in physical labs, warehouses and homes. Robotic research is years behind chatbot research, and there the gap between human and machine is even wider.

In both the physical and the digital realms, machines still struggle to match the parts of human intelligence that are harder to define. The new way of building chatbots, reinforcement learning, is working well in areas like math and computer programming, where companies can clearly define the good behavior and the bad. Math problems have undeniable answers. Computer programs must compile and run. But the technique doesn't work as well with creative writing, philosophy or ethics.
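The split comes down to whether a reward can be computed by a program. As a hedged sketch -- the checkers below are hypothetical stand-ins, not any lab's real grading code -- it takes a few lines to verify an arithmetic answer or whether a program runs, while no comparable oracle exists for prose:

```python
import subprocess
import sys

# Rewards that a machine can compute without human judgment.
# Illustrative only; real pipelines use far more elaborate checkers.

def math_reward(model_answer: str, correct_answer: str) -> float:
    """Math problems have undeniable answers: exact match is checkable."""
    return 1.0 if model_answer.strip() == correct_answer.strip() else 0.0

def code_reward(source: str) -> float:
    """Computer programs must run: execution success is checkable."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", source],
            capture_output=True, timeout=5,
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0

def essay_reward(essay: str) -> float:
    """No simple program scores sincerity, humor or honesty; this is why
    reinforcement learning struggles with creative writing."""
    raise NotImplementedError("no automatic oracle exists for prose quality")
```

Here `code_reward("print('hi')")` returns 1.0, while `essay_reward` can only raise an error: the qualities that make writing good have no programmatic test.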
Mr. Altman recently wrote on X that OpenAI had trained a new system that was "good at creative writing." It was the first time, he added, that "I have been really struck by something written by A.I." Writing is what these systems do best. But "creative writing" is hard to measure. It takes different forms in different situations and exhibits characteristics that are not easy to explain, much less quantify: sincerity, humor, honesty.

As these systems are deployed into the world, humans tell them what to do and guide them through moments of novelty, change and uncertainty. "A.I. needs us: living beings, producing constantly, feeding the machine," said Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice. "It needs the originality of our ideas and our lives."

A Thrilling Fantasy

For people both inside the tech industry and out, claims of imminent A.G.I. can be thrilling. Humans have dreamed of creating an artificial intelligence going back to the myth of the Golem, which appeared as early as the 12th century. This is the fantasy that drives works like Mary Shelley's "Frankenstein" and Stanley Kubrick's "2001: A Space Odyssey." Now that many of us are using computer systems that can write and even talk like we do, it is only natural for us to assume that intelligent machines are almost here. It is what we have anticipated for centuries.

When a group of academics founded the A.I. field in the late 1950s, they were sure it wouldn't take very long to build computers that recreated the brain. Some argued that a machine would beat the world chess champion and discover its own mathematical theorem within a decade. But none of that happened on that time frame. Some of it still hasn't.

Many of the people building today's technology see themselves as fulfilling a kind of technological destiny, pushing toward an inevitable scientific moment, like the creation of fire or the atomic bomb. But they cannot point to a scientific reason that it will happen soon. That is why many other scientists say no one will reach A.G.I. without a new idea -- something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it.

Yann LeCun, the chief A.I. scientist at Meta, has dreamed of building what we now call A.G.I. since he saw "2001: A Space Odyssey" in 70-millimeter Cinerama at a Paris movie theater when he was 9 years old. And he was among the three pioneers who won the 2018 Turing Award -- considered the Nobel Prize of computing -- for their early work on neural networks. But he does not believe that A.G.I. is near. At Meta, his research lab is looking beyond the neural networks that have entranced the tech industry. Mr. LeCun and his colleagues are searching for the missing idea.

"A lot is riding on figuring out whether the next-generation architecture will deliver human-level A.I. within the next 10 years," he said. "It may not. At this point, we can't tell."
[2]
Why we're unlikely to get artificial general intelligence anytime soon
Tech leaders like Sam Altman, Dario Amodei, and Elon Musk predict artificial general intelligence (AGI) is imminent, but many experts remain sceptical. Despite rapid AI advances, researchers argue today's systems lack true human-like understanding. Achieving AGI likely requires breakthroughs yet to be discovered, making current forecasts speculative at best.
(This source carries the same report as [1], with minor edits.)
Tech leaders predict imminent arrival of Artificial General Intelligence (AGI), while researchers and scientists remain skeptical, citing fundamental limitations in current AI technologies.
In recent months, prominent tech leaders have made increasingly confident predictions about the imminent arrival of Artificial General Intelligence (AGI). Sam Altman of OpenAI, Dario Amodei of Anthropic, and Elon Musk have all suggested that AGI could be achieved within the next few years [1][2]. These predictions have sparked intense debate within the AI community and beyond.
AGI, a concept that has captivated researchers since the early 2000s, refers to an artificial intelligence system capable of matching human-level intelligence across a wide range of cognitive tasks. While there is no universally agreed-upon definition of AGI, the idea has become a focal point for many in Silicon Valley and the broader tech industry [1].
The optimism surrounding AGI stems from the rapid advancements in AI technologies over the past few years. Systems like ChatGPT have demonstrated remarkable capabilities in natural language processing, art generation, and even computer programming [1]. These achievements are primarily driven by neural networks, which can identify patterns in vast amounts of data and generate human-like outputs [2].
However, despite these impressive developments, many experts argue that current AI systems are fundamentally limited in their capabilities. Nick Frosst, a founder of AI startup Cohere, points out that today's AI primarily focuses on predicting the most likely next word or pixel, which is fundamentally different from human cognition [1][2].
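Frosst's description of next-word prediction can be made concrete with a toy model. The sketch below is a deliberately simplified stand-in for the billion-parameter neural networks the sources describe: it counts word pairs in a tiny corpus, then predicts the word most often seen next.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram count table stands in for the
# neural networks described in the sources. Real chatbots learn far
# richer patterns, but the objective is the same: predict what comes next.

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following: dict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return "?"  # never saw this word; no pattern to repeat
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- seen most often after 'the'
print(predict_next("sat"))  # 'on'
```

The model can only repeat or recombine patterns from its training text, which is exactly the limitation the skeptics cite: nothing in the table lets it handle a sentence unlike anything it has seen.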
A significant portion of the AI research community remains skeptical about the near-term possibility of AGI. In a recent survey by the Association for the Advancement of Artificial Intelligence, over 75% of respondents believed that current methods were unlikely to lead to AGI [1][2].
Several key challenges stand in the way of achieving AGI:
Defining and measuring intelligence: Scientists struggle to agree on a universal definition of human intelligence, making it difficult to benchmark AI progress towards AGI [1].
Handling unpredictability: While humans can adapt to chaotic and changing environments, machines still struggle with unexpected scenarios [1][2].
Creative thinking: Current AI systems excel at pattern recognition and enhancement but fall short in generating truly novel ideas [1].
Emotional intelligence: Scientists have no hard evidence that today's AI can perform even simple human-like tasks such as recognizing irony or feeling empathy [1][2].
Many researchers, including Nick Frosst, believe that achieving AGI will require at least one major conceptual breakthrough that has not yet been discovered [1][2]. This uncertainty makes it impossible to predict when, or if, AGI will be realized.
Harvard cognitive scientist Steven Pinker cautions against "magical thinking" about AI capabilities, emphasizing that current systems, while impressive, are not omniscient problem-solvers [1][2].
While the debate over AGI's timeline continues, the rapid progress in AI technology is undeniable. These advancements are already transforming various industries and professions [1]. However, the gap between current AI capabilities and true human-like intelligence remains significant.
As the field progresses, it is crucial to maintain a balanced perspective, acknowledging both the remarkable achievements of AI and the substantial challenges that lie ahead in the quest for AGI. The coming years will likely see continued advancements in AI technology, but whether these will lead to AGI remains an open question.
References
[1] Silicon Valley's Elusive Fantasy of a Computer as Smart as You
[2] Why we're unlikely to get artificial general intelligence anytime soon