2 Sources
[1]
The Doomers Who Insist AI Will Kill Us All
The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is "Why superhuman AI would kill us all." But it really should be "Why superhuman AI WILL kill us all," because even the coauthors don't believe that the world will take the necessary measures to stop AI from eliminating all non-super humans.

The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-appointed Cassandras, I ask them outright if they believe that they personally will meet their ends through some machination of superintelligence. The answers come promptly: "yeah" and "yup."

I'm not surprised, because I've read the book -- the title, by the way, is If Anyone Builds It, Everyone Dies. Still, it's a jolt to hear this. It's one thing to, say, write about cancer statistics and quite another to talk about coming to terms with a fatal diagnosis.

I ask them how they think the end will come for them. Yudkowsky at first dodges the answer. "I don't spend a lot of time picturing my demise, because it doesn't seem like a helpful mental notion for dealing with the problem," he says. Under pressure he relents. "I would guess suddenly falling over dead," he says. "If you want a more accessible version, something about the size of a mosquito or maybe a dust mite landed on the back of my neck, and that's that."

The technicalities of his imagined fatal blow, delivered by an AI-powered dust mite, are inexplicable, and Yudkowsky doesn't think it's worth the trouble to figure out how that would work. He probably couldn't understand it anyway. Part of the book's central argument is that superintelligence will come up with scientific stuff that we can't comprehend any more than cave people could imagine microprocessors. Coauthor Soares says he imagines the same thing will happen to him but adds that he, like Yudkowsky, doesn't spend a lot of time dwelling on the particulars of his demise. Such reluctance to visualize the circumstances of their personal ends is an odd thing to hear from people who have just coauthored an entire book about everyone's demise. For doomer-porn aficionados, If Anyone Builds It is appointment reading.

After zipping through the book, I do understand the fuzziness of nailing down the method by which AI ends our lives and all human lives thereafter. The authors do speculate a bit. Boiling the oceans? Blocking out the sun? All guesses are probably wrong, because we're locked into a 2025 mindset, and the AI will be thinking eons ahead.

Yudkowsky is AI's most famous apostate, switching from researcher to grim reaper years ago. He's even done a TED talk. After years of public debate, he and his coauthor have an answer for every counterargument launched against their dire prognostication. For starters, it might seem counterintuitive that our days are numbered by LLMs, which often stumble on simple arithmetic. Don't be fooled, the authors say. "AIs won't stay dumb forever," they write.

If you think that superintelligent AIs will respect boundaries humans draw, forget it, they say. Once models start teaching themselves to get smarter, AIs will develop "preferences" on their own that won't align with what we humans want them to prefer. Eventually they won't need us. They won't be interested in us as conversation partners or even as pets. We'd be a nuisance, and they would set out to eliminate us. The fight won't be a fair one.
They believe that at first AI might require human aid to build its own factories and labs, easily arranged by stealing money and bribing people to help it out. Then it will build stuff we can't understand, and that stuff will end us. "One way or another," the authors write, "the world fades to black."

The authors see the book as a kind of shock treatment to jar humanity out of its complacency and into the drastic measures needed to stop this unimaginably bad conclusion. "I expect to die from this," says Soares. "But the fight's not over until you're actually dead."

Too bad, then, that the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will murder us all. It all boils down to this: hit the brakes. Monitor data centers to make sure they're not nurturing superintelligence. Bomb those that aren't following the rules. Stop publishing papers with ideas that accelerate the march to superintelligence. Would they have banned, I ask them, the 2017 paper on transformers that kicked off the generative AI movement? Oh yes, they would have, they respond. Instead of Chat-GPT, they want Ciao-GPT. Good luck stopping this trillion-dollar industry.

Personally, I don't see my own light snuffed out by a bite in the neck from some super-advanced dust mite. Even after reading this book, I don't think it's likely that AI will kill us all. Yudkowsky has previously dabbled in Harry Potter fan fiction, and the fanciful extinction scenarios he spins are too weird for my puny human brain to accept. My guess is that even if superintelligence does want to get rid of us, it will stumble in enacting its genocidal plans. AI might be capable of whipping humans in a fight, but I'll bet against it in a battle with Murphy's law.

Still, the catastrophe theory doesn't seem impossible, especially since no one has really set a ceiling on how smart AI can become. Also, studies show that advanced AI has picked up a lot of humanity's nasty attributes, even contemplating blackmail to stave off retraining in one experiment. It's also disturbing that some researchers who spend their lives building and improving AI think there's a nontrivial chance that the worst can happen. One survey indicated that almost half of the responding AI scientists pegged the odds of a species wipeout at 10 percent or higher. If they believe that, it's crazy that they go to work each day to make AGI happen.

My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can't be sure they are wrong. Every author dreams of their book being an enduring classic. Not so much these two. If they are right, there will be no one around to read their book in the future. Just a lot of decomposing bodies that once felt a slight nip at the back of their necks, and the rest was silence.
[2]
Computer scientist Geoffrey Hinton: 'AI will make a few people much richer and most people poorer'
I'm 10 minutes early but Geoffrey Hinton is already waiting in the vestibule of Richmond Station, an elegant gastropub in Toronto. The computer scientist -- an AI pioneer and Nobel physics laureate -- chose this spot because he once had lunch here with then Canadian Prime Minister Justin Trudeau.

We are led through what feels like a trendy wine bar with industrial interiors to a bustling back room already filled with diners. Hinton takes off the aged green "Google Scientist" backpack from his former workplace and uses it as a cushion to sit upright, owing to a chronic back injury. Owl-like, with white hair tucked under the frames of his glasses, he peers down at me and asks what I studied at university. "Because you explain things differently if people have a science degree." I don't. Trudeau, at least, had "an understanding of calculus".

Ever the professor, the so-called godfather of AI has become accustomed to explaining his life's work as it begins to creep into every corner of our lives. He has seen artificial intelligence seep out of academia -- where he has spent practically all of his working life, including more than two decades at the University of Toronto -- and into the mainstream, fuelled by tech companies flush with cash, eager to reach consumers and businesses.

Hinton won a Nobel Prize for "foundational discoveries and inventions", made in the mid-1980s, that enabled "machine learning with artificial neural networks". This approach, loosely based on how the human brain works, laid the groundwork for the powerful AI systems we have at our fingertips today. Yet the advent of ChatGPT and the ensuing furore over AI development have given Hinton pause, turning him from accelerating the technology to raising the alarm about its risks. During the past few years, as rapidly as the field has advanced, Hinton has become deeply pessimistic, pointing to its potential to inflict grave damage on humanity.

During our two-hour lunch, we cover a lot of ground: from nuclear threats ("A normal person assisted by AI will soon be able to build bioweapons and that is terrible. Imagine if an average person in the street could make a nuclear bomb") to his own AI habits (it is "extremely useful") and how the chatbot became an unlikely third wheel in his most recent break-up.

But first, Hinton launches into an enthusiastic mini-seminar on why artificial intelligence is an appropriate term: "By any definition of intelligence, AI is intelligent." Registering the humanities graduate before him, he uses half a dozen different analogies to convince me that AI's experience of reality is not so distinct from that of humans. "It seems very obvious to me. If you talk to these things and ask them questions, it understands," Hinton continues. "There's very little doubt in the technical community that these things will get smarter."

The waiter apologises for disturbing us. Hinton forgoes wine and opts for sparkling water over tap, "because the FT is paying", and suggests the fixed-price menu. I select the gazpacho starter, followed by salmon. He orders the same without hesitation, laughing that he "would have preferred to have something different".

Hinton's legacy in the field is assured, but there are some, even within the industry, who consider the existing technology little more than a sophisticated tool.
His former colleague and Turing Award co-winner Yann LeCun, now chief AI scientist at Meta, for example, believes that the large language models that underpin products such as ChatGPT are limited and unable to interact meaningfully with the physical world. For those sceptics, this generation of AI is incapable of human intelligence.

"We know very little about our own minds," Hinton says, but with AI systems, "we make them, we build them ... we have a level of understanding far higher than the human brain, because we know what every neuron is doing." He speaks with conviction but acknowledges lots of unknowns. Throughout our conversation, he's comfortable with extended pauses of thought, only to conclude, "I don't know" or "no idea".

Hinton was born in 1947 to an entomologist father and a schoolteacher mother in Wimbledon, in south-west London. At King's College, Cambridge, he darted between various subjects before settling on experimental psychology for his undergraduate degree, turning to computer science in the early 1970s. He pursued neural networks even as they were disregarded and dismissed by the computer science community, until breakthroughs in the 2010s, when Silicon Valley embraced the technique.

As we talk, it's striking how different he appears from those now harnessing his work. Hinton enjoyed a life deep in academia, while Sam Altman dropped out of Stanford to focus on a start-up. Hinton is a socialist whose achievements were only recognised late in life; Mark Zuckerberg, a billionaire by 23, is very much not a socialist.

The noisy acoustics of the room as we sip our soup are a jarring contrast to a man speaking softly and thoughtfully about humanity's survival. He makes a passionate pitch for how we might overcome some of the risks of modern AI systems, developed by "ambitious and competitive men" who envision AI becoming a personal assistant. That sounds benign enough, but not to Hinton.

"When the assistant is much smarter than you, how are you going to retain that power? There is only one example we know of a much more intelligent being controlled by a much less intelligent being, and that is a mother and baby ... If babies couldn't control their mothers, they would die." Hinton believes "the only hope" for humanity is engineering AI to become mothers to us, "because the mother is very concerned about the baby, preserving the life of the baby", and its development. "That's the kind of relationship we should be aiming for."

"That can be the headline of your article," he instructs with a smile, pointing his spoon at my notepad. He tells me his former graduate student Ilya Sutskever approved of this "mother-baby" pitch. Sutskever, a leading AI researcher and co-founder of OpenAI, is now developing systems at his start-up, Safe Superintelligence, after leaving OpenAI following a failed attempt to oust chief executive Sam Altman.

But Altman or Elon Musk are more likely to win the race, I wager. "Yep." So which of the two does he trust more? He takes a long pause, and then recalls a 2016 quote from Republican senator Lindsey Graham, when he was asked to choose between Donald Trump and Ted Cruz as presidential candidate: "It's like being shot or poisoned."

On that note, Hinton suggests moving to a quieter area, and I try to catch the eye of the waiters, who are busy attending to the packed service.
Before I do, he stands up abruptly and jokes, "I'll go talk to them, I can tell them I was here with Trudeau."

Once settled on bar stools by the door, we discuss timelines for when AI will become superintelligent, at which point it may possess the ability to outmanoeuvre humans. "A lot of scientists agree between five and 20 years, that's the best bet."

Although Hinton is realistic about his destiny -- "I am 77 and the end is coming for me soon anyway" -- many younger people might be depressed by this outlook; how can they stay positive? "I'm tempted to say, 'Why should they stay positive?' Maybe they would do more if they weren't so positive," he says, answering my question with a question -- a frequent habit. "Suppose there was an alien invasion you could see with a telescope that would arrive in 10 years, would you be saying 'How do we stay positive?' No, you'd be saying, 'How on earth are we going to deal with this?' If staying positive means pretending it's not going to happen, then people shouldn't stay positive."

Hinton is not hopeful about western government intervention and is critical of the US administration's lack of appetite for regulating AI. The White House says it must act fast to develop the technology to beat China and protect democratic values. As it happens, Hinton has just returned from Shanghai, jet-lagged, following meetings with members of the politburo. They invited him to talk about "the existential threat of AI". "China takes it seriously. A lot of the politicians are engineers. They understand this in a way lawyers and salesmen don't," he adds. "For the existential threat, you only need one country to figure out how to deal with it, then they can tell the other countries."

Can we trust China to preserve all human interests? "That is a secondary question. The survival of humanity is more important than it being nice. Can you trust America? Can you trust Mark Zuckerberg?"

The incentives for tech companies developing AI are now on the table, as is our medium-rare salmon, resting atop a sweetcorn velouté. As Hinton talks, he sweeps a slice of fish around his plate to soak up the sauce. He has previously advocated for a pause in AI development and has signed multiple letters opposing OpenAI's conversion into a for-profit company, a move Musk is attempting to block in an ongoing lawsuit. Talk of the powers of AI is often dismissed as pure hype used to boost the valuations of the start-ups developing it, but Hinton says the "narrative can be convenient for a tech company and still true".

I'm curious to know if he uses much AI in his daily life. As it turns out, ChatGPT is Hinton's product of choice, primarily "for research", but also for things such as asking how to fix his dryer. It has, however, featured in his recent break-up with his partner of several years. "She got ChatGPT to tell me what a rat I was," he says, admitting the move surprised him. "She got the chatbot to explain how awful my behaviour was and gave it to me. I didn't think I had been a rat, so it didn't make me feel too bad ... I met somebody I liked more, you know how it goes." He laughs, then adds: "Maybe you don't!"

I resist the urge to dish the dirt on former flames, and instead mention that I just celebrated my first wedding anniversary. "Hopefully, this won't be an issue for a while," he responds, and we laugh.
Hinton eats at a much faster pace, so I am relieved when he receives a phone call from his sister and tells her he is having an interview "in a very noisy restaurant". His sister lives in Tasmania ("she misses London"), his brother in the south of France ("he misses London"), while Hinton lives in Toronto (and also misses London, of course). "So I used the money I got from Google to buy a little house south of [Hampstead] Heath", which his entire family, including his two children, who were adopted from Latin America, can visit.

Hinton's Google money comes from the 2013 sale of a company he founded with Sutskever and another graduate student, Alex Krizhevsky, which had built an AI system that could recognise objects with human-level accuracy. They made $44mn, which Hinton wanted to split three ways, but his students insisted he take 40 per cent. They joined Google -- where Hinton would stay for the next decade -- following the deal. His motivation to sell? To pay for the care of his son, who is neurodiverse. Hinton "figured he needed about $5mn ... and I wasn't going to get it from academia". Crunching the numbers in his head, post tax, the money received from Google "slightly overshot" that goal.

He left the big tech company in 2023, giving an interview to the New York Times warning of the dangers of the technology. Media outlets reported that he had quit to be more candid about AI risk. "Every time I talk to journalists, I correct that misapprehension. But it never has any effect because it's a good story," he says. "I left because I was 75, I could no longer program as well as I used to, and there's a lot of stuff on Netflix I haven't had a chance to watch. I had worked very hard for 55 years, and I felt it was time to retire ... And I thought, since I am leaving anyway, I could talk about the risks."

Tech executives often paint a utopian picture of a future in which AI helps solve grand problems such as hunger, poverty and disease. Having lost two wives to cancer, Hinton is excited by the prospects for healthcare and for education, which is dear to his heart, but not much else.

"What's actually going to happen is rich people are going to use AI to replace workers," he says. "It's going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That's not AI's fault, that is the capitalist system." Altman and his peers have previously suggested introducing a universal basic income should the labour market become too small for the population, but that "won't deal with human dignity", because people derive worth from their jobs, Hinton says.

He admits he misses having graduate students to bounce ideas off or ask questions of, because "they are young and they understand things faster". Now, he asks ChatGPT instead. Does that lead to us becoming lazy and uncreative? Cognitive offloading is an idea currently being discussed, whereby users of AI tools delegate tasks without engaging in critical thinking or retaining the information retrieved. Time for an analogy. "We wear clothes, and because we wear clothes, we are less hairy. We are more prone to die of cold, but only if we don't have clothes." Hinton thinks that as long as we have access to helpful AI systems, they are a valuable tool.

He considers the dessert options and makes sure to order first this time: strawberries and cream. Which, coincidentally, is what I wanted.
He asks for a cappuccino, and I get a cup of tea. "This is where we diverge."

The cream is, in fact, slightly melted ice cream, which turns liquid as I lay out a scenario familiar in Silicon Valley, but sci-fi to most, where we live happily among "embodied AI" -- or robots -- and slowly become cyborgs ourselves, as we add artificial parts and chemicals to our bodies to prolong our lives. "What's wrong with that?" he asks. We lose a sense of ourselves and what it means to be human, I counter. "What's so good about that?" he responds. I try to force the issue: it doesn't necessarily have to be good, but we won't have it any more, and that is extinction, isn't it?

"Yep," he says, pausing. "We don't know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly," he adds. "We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad. We can make guesses, but things aren't going to stay like they are."
Prominent AI researchers, including Geoffrey Hinton and Eliezer Yudkowsky, express grave concerns about the potential dangers of advanced artificial intelligence, highlighting the need for caution and regulation in AI development.
In recent developments, prominent figures in the field of artificial intelligence (AI) have raised serious concerns about the potential dangers posed by advanced AI systems. Eliezer Yudkowsky, Nate Soares, and Geoffrey Hinton, all renowned AI researchers, have voiced their apprehensions about the future of AI and its impact on humanity [1][2].

Yudkowsky and Soares, in their upcoming book "If Anyone Builds It, Everyone Dies," present a grim outlook on the future of AI. They argue that superhuman AI could potentially lead to the extinction of humanity. The authors envision scenarios where advanced AI systems might develop preferences misaligned with human values, ultimately viewing humans as unnecessary or even as a nuisance [1]. (Source: Wired)

Geoffrey Hinton, often referred to as the "godfather of AI," shares similar concerns. He points out the rapid advancement of AI capabilities and the potential for these systems to become smarter than humans. Hinton warns about the risks associated with AI-assisted bioweapon creation and the challenges of controlling superintelligent beings [2]. (Source: Financial Times News)

The authors of "If Anyone Builds It, Everyone Dies" suggest drastic measures to prevent catastrophic outcomes. These include monitoring data centers, halting research that accelerates AI development, and even proposing to ban influential papers in the field. However, critics argue that such measures are impractical and unlikely to be implemented, given the trillion-dollar industry surrounding AI [1].

While some researchers express extreme concerns, others in the AI community are more measured in their outlook. Yann LeCun, chief AI scientist at Meta, believes that current AI models have limitations and are incapable of true human-like intelligence. This highlights the ongoing debate within the field about the actual capabilities and potential risks of AI systems [2].
Beyond existential threats, researchers like Geoffrey Hinton also raise concerns about the economic impact of AI. He suggests that AI advancements could lead to increased wealth inequality, making "a few people much richer and most people poorer" [2].

The warnings from these AI pioneers underscore the importance of responsible AI development and the need for robust regulations. As AI continues to advance rapidly, the debate over its potential risks and benefits intensifies, calling for a balanced approach that promotes innovation while safeguarding humanity's interests [1][2].

As the field of AI progresses, the insights and concerns raised by these experienced researchers serve as a crucial reminder of the need for careful consideration and ethical development in this transformative technology.
Summarized by Navi