Curated by THEOUTPOST
On Tue, 17 Dec, 12:03 AM UTC
4 Sources
[1]
Former Google CEO Warns We Need to Pull the Plug on AI If It Starts to Evolve
"The technologists should not be the only ones making these decisions." Eric Schmidt, the one-time head of Google, is warning that humans may have to "unplug" artificial intelligence before it's too late. In an interview with ABC's George Stephanopoulos, Schmidt suggested that AI technology is innovating so rapidly that it may pass us by before we recognize the dangers it poses. "I've done this for 50 years [and] I've never seen innovation at this scale," the ex-Google CEO said. "This is literally a remarkable human achievement." Along with former Microsoft executive Greg Mundie and the late Henry Kissinger, Schmidt warned in a new book that along with the incredible benefits AI may bring to humanity, such as the rapid discovery of new medications, the technology will also become more self-sufficient. "We're soon going to be able to have computers running on their own, deciding what they want to do," he said. "When the system can self-improve, we need to seriously think about unplugging it." During the interview, Stephanopoulos asked, as anyone who's seen a sci-fi movie about killer AIs could imagine, whether a superintelligent AI would be capable of heading off any attempts to destroy it. "Wouldn't that kind of system have the ability to counter our efforts to unplug it?" Stephanopoulos asked. "Well, in theory, we better have somebody with the hand on the plug," Schmidt responded. And who should that unplugger be? "The future of intelligence... should not be left to people like me, right?" Schmidt said. "The technologists should not be the only ones making these decisions," he said. "We need a consensus about how to put the right guardrails on these things to preserve human dignity. It's very important." Himself an AI investor, Schmidt suggested -- in a meta twist -- that AI itself may be able to act as a watchdog for the technology. "Humans will not be able to police AI," he said, "but AI systems should be able to police AI." 
It's a pretty strange take for someone who has cowritten two whole books about the dangers posed by the technology -- but maybe that's just what Silicon Valley has done to his brain.
[2]
Former Google CEO: A time will come to consider 'unplugging' AI system
Former Google CEO Eric Schmidt said that the power of artificial intelligence could reach a "dangerous" point in the future and that humanity should be ready to step away from it should the time come. "When the system can self-improve, we need to seriously think about unplugging it," he said on ABC News on Sunday.

Schmidt co-authored a book with former Secretary of State Henry Kissinger about artificial intelligence called "Genesis," in which he discusses harnessing the "incredible power" of AI while "preserving human dignity and values." "It's going to be hugely hard. It's going to be very difficult to maintain that balance" between AI's power and preserving human dignity and values, he told ABC's George Stephanopoulos, "because the system has moved so quickly."

Stephanopoulos also brought up China's developmental progress, to which Schmidt said that although the U.S. used to be ahead, China has caught up during the past year and is on track to surpass American technology programs. As AI "scientists" begin to conduct research on their own, rather than relying on human scientists, Schmidt said it was crucial for the U.S. to reach this threshold first. "The Chinese are clever, and they understand the power of a new kind of intelligence for their industrial might, their military might, and their surveillance system," Schmidt said.

He added that there should be more intervention to add guardrails to AI instead of leaving it in the hands of technology leaders like himself. "Humans will not be able to police AI, but AI systems should be able to police AI," he said. Given the competition with China, Schmidt said President-elect Trump's administration could be good for AI policy.
[3]
A former Google CEO on when 'we need to seriously think about unplugging' AI
Eric Schmidt, who spent a decade as Google's chief executive, said humans need to take advantage of AI "while preserving human dignity and values" during an interview on ABC News' "This Week."

Social media moved quickly to change the global zeitgeist, Schmidt said, "and now imagine a much more intelligent, much stronger way of sending messages, inventing things, the rate of innovation, drug discovery and all of that, plus all sorts of bad things, like weapons and cyber attacks."

Soon, Schmidt said, there will be computers that can run "on their own, deciding what they want to do." Currently, the industry is focused on AI agents -- software that can complete complex tasks autonomously -- but the technology will have "more powerful goals." "Eventually, you say to the computer, 'learn everything and do everything,' and that's a dangerous point," Schmidt said. "When the system can self-improve, we need to seriously think about unplugging it." Asked if an AI system that powerful would have the ability to counter efforts to shut it down, Schmidt said, "in theory, we better have somebody with a hand on the plug."

As AI becomes more intelligent, "each and every person is going to have the equivalent of a polymath in their pocket," Schmidt said, but it's not clear "what it means to give that kind of power to every individual." There is a concern now that a company racing to develop AI will decide to skip steps in safety testing, Schmidt said, and end up releasing a system that is harmful. The former Google leader said governments are "not yet" doing what they need to do to regulate AI on the way to superintelligence, but that "they will, because they'll have to."

Meanwhile, Schmidt said that although he personally thought the U.S. was "a couple of years ahead of China," the country has been able to catch up in the last six months despite efforts by both the Trump and Biden administrations to keep advanced chips and other technologies from entering China. "It is crucial that America wins this race, globally and in particular, ahead of China," Schmidt said. The incoming Trump administration "will be largely focused on China versus the U.S.," Schmidt said, adding that this "is a good thing," and that as long as the U.S. values individual freedom, "we should be okay."
[4]
Eric Schmidt Says AI Is Becoming Dangerously Powerful as He Hawks His Own AI Defense Startup
The former Google CEO says he is worried about the power of artificial intelligence, but is happy to sell you the solution.

Whenever a leader in technology comes out publicly warning of the potential dangers of artificial intelligence, or perhaps "superintelligence," it is important to remember they are also on the other side selling the solution. We have already seen this with OpenAI's Sam Altman pressing Washington on the need for AI safety regulations whilst simultaneously hawking costly ChatGPT enterprise subscriptions. These leaders are in essence saying, "AI is so powerful that it could be dangerous, just imagine what it could do for your company!"

We have another example of this type of thing with Eric Schmidt, the 69-year-old former Google CEO who more recently has been known to date women less than half his age and lavish them with money to start their own tech investment funds. Schmidt has been making the rounds on weekday news shows to warn of the potential unforeseen dangers AI poses as it advances to the point where "we're soon going to be able to have computers running on their own, deciding what they want to do" and "every person is going to have the equivalent of a polymath in their pocket." Schmidt made the comments on ABC's "This Week."

He also made an appearance on PBS last Friday where he talked about how the future of warfare will see more AI-powered drones, with the caveat that humans should remain in the loop and maintain "meaningful" control. Drones have become much more commonplace in the Ukraine-Russia war, as they are used for surveillance and dropping explosives without humans needing to get close to the front line. "The correct model, and obviously war is horrific, is to have the people well behind and have the weapons well up front, and have them networked and controlled by AI," Schmidt said. "The future of war is AI, networked drones of many different kinds."
Schmidt, conveniently, has been developing a new company of his own called White Stork that has provided Ukraine with drones that use AI in "complicated, powerful ways."

Putting aside that generative artificial intelligence is deeply flawed and almost certainly not close to overtaking humans, he is perhaps correct in one sense. Artificial intelligence does tend to behave in ways the creators do not understand or have been unable to predict. Social media provides a perfect case study for this. When the algorithms know only to optimize for maximum engagement and do not care about ethics, they will encourage behaviors that are anti-social, like extremist viewpoints intended to outrage and get attention. As companies like Google introduce "agentic" bots that can navigate a web browser on their own, there is potential for them to behave in ways that are unethical or otherwise just harmful.

But Schmidt is talking about his book in these interviews. In his ABC interview, he says that when AI systems begin to "self-improve," it may be worth considering pulling the plug. But he goes on to say, "In theory, we better have somebody with the hand on the plug." Schmidt has spent a lot of money investing in AI startups while simultaneously lobbying Washington on AI laws. He certainly hopes the companies he is invested in will be the ones holding the plug.
Eric Schmidt, former Google CEO, expresses concerns about AI's rapid evolution and potential dangers, suggesting the need for an "unplug" option while also promoting AI solutions.
Eric Schmidt, the former CEO of Google, has recently made headlines with his warnings about the rapid advancement of artificial intelligence (AI) and its potential risks. In a series of interviews and a new book, Schmidt has emphasized the need for careful consideration and control over AI development [1].
Schmidt, with 50 years of experience in the tech industry, describes the current pace of AI innovation as unprecedented. He acknowledges AI as a remarkable human achievement but cautions about its potential to become self-sufficient and autonomous [1].
"We're soon going to be able to have computers running on their own, deciding what they want to do," Schmidt stated in an interview with ABC News. He added, "When the system can self-improve, we need to seriously think about unplugging it" [2].
Schmidt advocates for maintaining human control over AI systems. He suggests that there should be someone "with the hand on the plug" to shut down AI if necessary. However, he also acknowledges the challenge of controlling a superintelligent AI that might be capable of countering human efforts to deactivate it [1].
In his book "Genesis," co-authored with former Secretary of State Henry Kissinger, Schmidt discusses the challenge of balancing AI's "incredible power" with the preservation of human dignity and values. He emphasizes the difficulty of maintaining this balance due to the rapid advancement of AI technology [2].
Schmidt also touches on the global competition in AI development, particularly between the United States and China. He expresses concern that China has caught up with the U.S. in AI capabilities and stresses the importance of American leadership in this field for national security reasons [3].
It's worth noting that while Schmidt warns about AI risks, he is also involved in AI-related business ventures. He has invested in AI startups and is developing an AI-powered drone company called White Stork [4]. This dual role of warning about dangers while promoting AI solutions is not uncommon among tech leaders.
Schmidt emphasizes that decisions about AI's future should not be left solely to technologists. He calls for a broader consensus on implementing appropriate guardrails for AI technology. "The technologists should not be the only ones making these decisions," he stated, highlighting the need for a more inclusive approach to AI governance [1].
© 2025 TheOutpost.AI All rights reserved