2 Sources
[1]
Two Paths for A.I.
Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He'd become convinced that the company wasn't prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in "alignment," he told me -- the suite of techniques used to insure that A.I. acts in accordance with human commands and values -- were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn't control.

Kokotajlo, who had transitioned from a graduate program in philosophy to a career in A.I., explained how he'd educated himself so that he could understand the field. While at OpenAI, part of his job had been to track progress in A.I. so that he could construct timelines predicting when various thresholds of intelligence might be crossed. At one point, after the technology advanced unexpectedly, he'd had to shift his timelines up by decades. In 2021, he'd written a scenario about A.I. titled "What 2026 Looks Like." Much of what he'd predicted had come to pass before the titular year. He'd concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared.

Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference." In it, Kapoor and Narayanan, who study technology's integration with society, advanced views that were diametrically opposed to Kokotajlo's. They argued that many timelines of A.I.'s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world's inherent complexity, even powerful A.I. would change it only slowly. They cited many cases in which A.I. systems had been called upon to deliver important judgments -- about medical diagnoses, or hiring -- and had made rookie mistakes that indicated a fundamental disconnect from reality. The newest systems, they maintained, suffered from the same flaw.

Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published "AI 2027," a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which "superintelligent" A.I. systems either dominate or exterminate the human race by 2030. It's meant to be taken seriously, as a warning about what might really happen. Meanwhile, Kapoor and Narayanan, in a new paper titled "AI as Normal Technology," insist that practical obstacles of all kinds -- from regulations and professional standards to the simple difficulty of doing physical things in the real world -- will slow A.I.'s deployment and limit its transformational potential. While conceding that A.I. may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain "normal" -- that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision -- for the foreseeable future.
"AI is often analogized to nuclear weapons," they argue. But "the right analogy is nuclear power," which has remained mostly manageable and, if anything, may be underutilized for safety reasons. Which is it: business as usual or the end of the world? "The test of a first-rate intelligence," F. Scott Fitzgerald famously claimed, "is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." Reading these reports back-to-back, I found myself losing that ability, and speaking to their authors in succession, in the course of a single afternoon, I became positively deranged. "AI 2027" and "AI as Normal Technology" aim to describe the same reality, and have been written by deeply knowledgeable experts, but arrive at absurdly divergent conclusions. Discussing the future of A.I. with Kapoor, Narayanan, and Kokotajlo, I felt like I was having a conversation about spirituality with Richard Dawkins and the Pope. In the parable of the blind men and the elephant, a group of well-intentioned people grapple with an unfamiliar object, failing to agree on its nature because each believes that the part he's encountered defines the whole. That's part of the problem with A.I. -- it's hard to see the whole of something new. But it's also true, as Kapoor and Narayanan write, that "today's AI safety discourse is characterized by deep differences in worldviews." If I were to sum up those differences, I'd say that, broadly speaking, West Coast, Silicon Valley thinkers are drawn to visions of rapid transformation, while East Coast academics recoil from them; that A.I. researchers believe in quick experimental progress, while other computer scientists yearn for theoretical rigor; and that people in the A.I. industry want to make history, while those outside of it are bored of tech hype. Meanwhile, there are barely articulated differences on political and human questions -- about what people want, how technology evolves, how societies change, how minds work, what "thinking" is, and so on -- that help push people into one camp or the other. An additional problem is simply that arguing about A.I. is unusually interesting. That interestingness, in itself, may be proving to be a trap. When "AI 2027" appeared, many industry insiders responded by accepting its basic premises while debating its timelines (why not "AI 2045"?). Of course, if a planet-killing asteroid is headed for Earth, you don't want NASA officials to argue about whether the impact will happen before or after lunch; you want them to launch a mission to change its path. At the same time, the kinds of assertions seen in "AI as Normal Technology" -- for instance, that it might be wise to keep humans in the loop during important tasks, instead of giving computers free rein -- have been perceived as so comparatively bland that they've long gone unuttered by analysts interested in the probability of doomsday. When a technology becomes important enough to shape the course of society, the discourse around it needs to change. Debates among specialists need to make room for a consensus upon which the rest of us can act. The lack of such a consensus about A.I. is starting to have real costs. When experts get together to make a unified recommendation, it's hard to ignore them; when they divide themselves into duelling groups, it becomes easier for decision-makers to dismiss both sides and do nothing. Currently, nothing appears to be the plan. A.I. 
companies aren't substantially altering the balance between capability and safety in their products; in the budget-reconciliation bill that just passed the House, a clause prohibits state governments from regulating "artificial intelligence models, artificial intelligence systems, or automated decision systems" for ten years. If "AI 2027" is right, and that bill is signed into law, then by the time we're allowed to regulate A.I. it might be regulating us. We need to make sense of the safety discourse now, before the game is over. Artificial intelligence is a technical subject, but describing its future involves a literary truth: the stories we tell have shapes, and those shapes influence their content. There are always trade-offs. If you aim for reliable, levelheaded conservatism, you risk downplaying unlikely possibilities; if you bring imagination to bear, you might dwell on what's interesting at the expense of what's likely. Predictions can create an illusion of predictability that's unwarranted in a fun-house world. In 2019, when I profiled the science-fiction novelist William Gibson, who is known for his prescience, he described a moment of panic: he'd thought he had a handle on the near future, he said, but "then I saw Trump coming down that escalator to announce his candidacy. All of my scenario modules went 'beep-beep-beep.' " We were veering down an unexpected path. "AI 2027" is imaginative, vivid, and detailed. It "is definitely a prediction," Kokotajlo told me recently, "but it's in the form of a scenario, which is a particular kind of prediction." Although it's based partly on assessments of trends in A.I., it's written like a sci-fi story (with charts); it throws itself headlong into the flow of events. Often, the specificity of its imagined details suggests their fungibility. Will there actually come a moment, possibly in June of 2027, when software engineers who've invented self-improving A.I. "sit at their computer screens, watching performance crawl up, and up, and up"? Will the Chinese government, in response, build a "mega-datacenter" in a "Centralized Development Zone" in Taiwan? These particular details make the scenario more powerful, but might not matter; the bottom line, Kokotajlo said, is that, "more likely than not, there is going to be an intelligence explosion, and a crazy geopolitical conflict over who gets to control the A.I.s." It's the details of that "intelligence explosion" that we need to follow. The scenario in "AI 2027" centers on a form of A.I. development known as "recursive self-improvement," or R.S.I., which is currently largely hypothetical. In the report's story, R.S.I. begins when A.I. programs become capable of doing A.I. research for themselves (today, they only assist human researchers); these A.I. "agents" soon figure out how to make their descendants smarter, and those descendants do the same for their descendants, creating a feedback loop. This process accelerates as the A.I.s start acting like co-workers, trading messages and assigning work to one another, forming a "corporation-within-a-corporation" that repeatedly grows faster and more effective than the A.I. firm in which it's ensconced. Eventually, the A.I.s begin creating better descendants so quickly that human programmers don't have time to study them and decide whether they're controllable.
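To make the arithmetic of that feedback loop concrete, here is a toy simulation of recursive self-improvement. It is my own illustrative sketch, not a model from "AI 2027": the time horizon, the feedback coefficient, and the assumption that research speed compounds on itself are all invented for the example. The point is only the shape of the curve, which stays nearly flat for months and then climbs explosively.

# Toy sketch of a recursive-self-improvement feedback loop (illustrative only;
# the parameters below are arbitrary assumptions, not figures from "AI 2027").
def simulate_rsi(months: int = 18, feedback: float = 0.1) -> list[float]:
    """Track how fast A.I. agents do research, relative to unaided humans.

    Each month the agents spend part of their capacity improving their
    successors, so the gain they produce grows with the speed they already
    have -- the compounding loop the scenario describes.
    """
    speedup = 1.0  # start at parity with human researchers
    history = []
    for _ in range(months):
        history.append(speedup)
        speedup += feedback * speedup ** 2  # faster agents improve themselves faster
    return history

if __name__ == "__main__":
    for month, s in enumerate(simulate_rsi()):
        print(f"month {month:2d}: research speed ~{s:.3g}x human pace")

Whether anything like this dynamic is physically or organizationally achievable is exactly what the two camps dispute; the sketch only shows why small differences in the assumed feedback term produce wildly different timelines.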
[2]
Why we're unlikely to get artificial general intelligence anytime soon
Sam Altman, the CEO of OpenAI, recently told President Donald Trump during a private phone call that it would arrive before the end of his administration. Dario Amodei, the CEO of Anthropic, OpenAI's primary rival, repeatedly told podcasters it could happen even sooner. Tech billionaire Elon Musk has said it could be here before the end of the year.

Like many other voices across Silicon Valley and beyond, these executives predict that the arrival of artificial general intelligence, or AGI, is imminent. Since the early 2000s, when a group of fringe researchers slapped the term on the cover of a book that described the autonomous computer systems they hoped to build one day, AGI has served as shorthand for a future technology that achieves human-level intelligence. There is no settled definition of AGI, just an entrancing idea: an artificial intelligence that can match the many powers of the human mind.

Altman, Amodei and Musk have long chased this goal, as have executives and researchers at companies like Google and Microsoft. And thanks, in part, to their fervent pursuit of this ambitious idea, they have produced technologies that are changing the way hundreds of millions of people research, make art and program computers. These technologies are now poised to transform entire professions. But since the arrival of chatbots like OpenAI's ChatGPT and the rapid improvement of these strange and powerful systems over the last two years, many technologists have grown increasingly bold in predicting how soon AGI will arrive. Some are even saying that once they deliver AGI, a more powerful creation called "superintelligence" will follow.

As these eternally confident voices predict the near future, their speculations are getting ahead of reality. And though their companies are pushing the technology forward at a remarkable rate, an army of more sober voices is quick to dispel any claim that machines will soon match human intellect.

"The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do."

In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion. (Last year, as part of a high-profile lawsuit, Musk's attorneys said it was already here because OpenAI, one of Musk's chief rivals, had signed a contract with its main funder saying it will not sell products based on AGI technology.)

And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations -- and wishful thinking.
According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected -- the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before.

That is why Frosst and other skeptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take.

"A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

'AI can get there'

Chatbots like ChatGPT are driven by what scientists call neural networks, mathematical systems that can identify patterns in text, images and sounds. By pinpointing patterns in vast troves of Wikipedia articles, news stories and chat logs, for instance, these systems can learn to generate humanlike text on their own, like poems and computer programs. That means these systems are progressing much faster than computer technologies of the past.

In previous decades, software engineers built applications one line of code at a time, a tiny-step-by-tiny-step process that could never produce something as powerful as ChatGPT. Because neural networks can learn from data, they can reach new heights and reach them quickly. After seeing the improvement of these systems over the last decade, some technologists believe the progress will continue at much the same rate -- to AGI and beyond.

"There are all these trends where all of the limitations are going away," said Jared Kaplan, the chief science officer at Anthropic. "AI intelligence is quite different from human intelligence. Humans learn much more easily to do new tasks. They don't need to practice as much as AI needs to. But eventually, with more practice, AI can get there."

Among AI researchers, Kaplan is known for publishing a groundbreaking academic paper that described what are now called "the Scaling Laws." These laws essentially said the more data an AI system analyzed, the better it would perform. Just as a student learns more by reading more books, an AI system finds more patterns in the text and learns to more accurately mimic the way people put words together.

In recent months, companies like OpenAI and Anthropic used up just about all of the English text on the internet, which meant they needed a new way of improving their chatbots. So they are leaning more heavily on a technique that scientists call reinforcement learning. Through this process, which can extend over weeks or months, a system can learn behavior through trial and error. By working through thousands of math problems, for instance, it can learn which techniques tend to lead to the right answer and which do not. Thanks to this technique, researchers like Kaplan believe that the Scaling Laws (or something like them) will continue.
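For readers who want the quantitative claim behind that summary: scaling-laws work of the kind Kaplan is known for fits a language model's test loss to simple power laws in quantities such as dataset size and parameter count. The sketch below is a paraphrase in my own notation, not the paper's exact formulation; the constants are fitted empirically and their values depend on the model family and training setup.

% Scaling laws, schematically: loss falls as a power law in data and model size.
% D = training tokens, N = model parameters; D_c, N_c, \alpha_D, \alpha_N are
% empirically fitted constants (assumed notation for this illustration).
\[
  L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},
  \qquad
  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}.
\]

Because the fitted exponents are small, each further constant-sized drop in loss demands a multiplicative increase in data or parameters, which helps explain why exhausting the supply of fresh internet text pushed labs toward reinforcement learning as an additional source of training signal.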
As the technology continues to learn through trial and error across myriad fields, researchers say, it will follow the path of AlphaGo, a machine built in 2016 by a team of Google researchers. Through reinforcement learning, AlphaGo learned to master the game of Go, a complex Chinese board game that is compared to chess, by playing millions of games against itself. That spring, it beat one of the world's best players, stunning the AI community and the world. Most researchers had assumed that AI needed another 10 years to achieve such a feat.

The gap between humans and machines

It is indisputable that today's machines have already eclipsed the human brain in some ways, but that has been true for a long time. A calculator can do basic math faster than a human. Chatbots like ChatGPT can write faster, and as they write, they can instantly draw on more texts than any human brain could ever read or remember. These systems are exceeding human performance on some tests involving high-level math and coding. But people cannot be reduced to these benchmarks.

"There are many kinds of intelligence out there in the natural world," said Josh Tenenbaum, a professor of computational cognitive science at the Massachusetts Institute of Technology.

One obvious difference is that human intelligence is tied to the physical world. It extends beyond words and numbers and sounds and images into the realm of tables and chairs and stoves and frying pans and buildings and cars and whatever else we encounter with each passing day. Part of intelligence is knowing when to flip a pancake sitting on the griddle.

Some companies are training humanoid robots in much the same way that others are training chatbots. But this is more difficult and more time-consuming than building ChatGPT, requiring extensive training in physical labs, warehouses and homes. Robotic research is years behind chatbot research.

The gap between human and machine is even wider. In the physical and digital realms, machines still struggle to match the parts of human intelligence that are harder to define. "AI needs us: living beings, producing constantly, feeding the machine," said Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy. "It needs the originality of our ideas and our lives."

A thrilling fantasy

For people inside the tech industry and out, claims of imminent AGI can be thrilling. Humans have dreamed of creating an artificial intelligence going back to the myth of the Golem, which appeared as early as the 12th century. This is the fantasy that drives works like Mary Shelley's "Frankenstein" and Stanley Kubrick's "2001: A Space Odyssey." Now that many of us are using computer systems that can write and even talk like we do, it is only natural for us to assume that intelligent machines are almost here. It is what we have anticipated for centuries.

When a group of academics founded the AI field in the late 1950s, they were sure it wouldn't take very long to build computers that re-created the brain. Some argued that a machine would beat the world chess champion and discover its own mathematical theorem within a decade. But none of that happened on that time frame. Some of it still hasn't.

Many of the people building today's technology see themselves as fulfilling a kind of technological destiny, pushing toward an inevitable scientific moment, like the creation of fire or the atomic bomb. But they cannot point to a scientific reason that it will happen soon.
That is why many other scientists say no one will reach AGI without a new idea -- something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it.

Yann LeCun, the chief AI scientist at Meta, has dreamed of building what we now call AGI since he saw "2001: A Space Odyssey" in 70mm Cinerama at a Paris movie theater when he was 9 years old. And he was among the three pioneers who won the 2018 Turing Award -- considered the Nobel Prize of computing -- for their early work on neural networks. But he does not believe that AGI is near. At Meta, his research lab is looking beyond the neural networks that have entranced the tech industry. LeCun and his colleagues are searching for the missing idea.

"A lot is riding on figuring out whether the next-generation architecture will deliver human-level AI within the next 10 years," he said. "It may not. At this point, we can't tell."
A comprehensive look at the contrasting views on the future of AI, from those predicting imminent artificial general intelligence (AGI) to others arguing for a more measured, "normal technology" approach.
The artificial intelligence (AI) community is deeply divided over the future trajectory of AI development, particularly regarding the timeline for achieving artificial general intelligence (AGI). This debate pits Silicon Valley optimists against more cautious academics, each presenting starkly different visions of AI's near-term potential and societal impact.
Some AI researchers and tech executives are sounding the alarm about the rapid approach of AGI. Daniel Kokotajlo, a former OpenAI researcher, believes that powerful AI systems could become uncontrollable as early as 2027 1. Kokotajlo and his colleagues at the AI Futures Project have published "AI 2027," a scenario predicting that superintelligent AI systems could dominate or exterminate humanity by 2030 1.
This perspective is echoed by prominent figures in the tech industry. Sam Altman, CEO of OpenAI, has told President Donald Trump that AGI could arrive before the end of his administration 2. Dario Amodei of Anthropic and Elon Musk have made similarly bold predictions about AGI's imminent arrival 2.
In stark contrast, computer scientists Sayash Kapoor and Arvind Narayanan argue for a more measured view of AI's progress. In their book "AI Snake Oil" and subsequent paper "AI as Normal Technology," they contend that practical obstacles will significantly slow AI deployment and limit its transformative potential 1. They liken AI to nuclear power rather than nuclear weapons, suggesting it will remain controllable through familiar safety measures 1.
This perspective is supported by many in the academic community. A survey of the Association for the Advancement of Artificial Intelligence found that over 75% of respondents believed current methods were unlikely to lead to AGI 2.
Critics of the AGI-is-imminent view point to several key limitations of current AI technologies:
Lack of real-world understanding: AI systems often make fundamental mistakes that reveal a disconnect from reality, especially in complex domains like medical diagnosis or hiring 1.
Narrow capabilities: While AI excels in specific areas like math and programming, it struggles with the broader range of human cognitive abilities 2.
Difficulty with unpredictability: Humans can navigate chaotic and changing environments, while machines struggle with unexpected scenarios 2.
Limited creativity: AI typically enhances or repeats existing ideas rather than generating truly novel concepts 2.
The stark divide in opinions about AI's future is not solely based on technical assessments. It also reflects deeper differences in worldview, industry experience, and philosophical outlook 1. Silicon Valley's culture of rapid transformation contrasts with academia's preference for theoretical rigor and cautious progress 1.
As AI continues to advance, the debate over its trajectory and potential impact remains crucial. While chatbots like ChatGPT and other AI technologies are already transforming various industries, the path to AGI -- if achievable -- remains uncertain 2.
Harvard cognitive scientist Steven Pinker cautions against "magical thinking" about AI capabilities, emphasizing that these systems, while impressive, are not omniscient problem-solvers 2. Meanwhile, AI companies continue to push the boundaries of what's possible, with some researchers, like Jared Kaplan of Anthropic, believing that current trends point towards overcoming existing limitations 2.
As the AI landscape evolves, bridging the gap between these divergent perspectives will be essential for developing responsible AI policies and managing societal expectations about this transformative technology.