Curated by THEOUTPOST
On Wed, 7 May, 12:06 AM UTC
2 Sources
[1]
Paul Tudor Jones warns that AI is an 'existential' threat, needs government regulation
Billionaire investor Paul Tudor Jones said Tuesday that he has grown increasingly worried about the potential dangers of artificial intelligence, warning that the risks go beyond disruption to the stock market and economy.

The hedge fund manager told CNBC's Andrew Ross Sorkin he views AI as an "existential" threat after attending a conference where experts on the topic discussed both the benefits and drawbacks of the emerging technology.

"The one that disturbed me the most is that AI clearly poses an imminent threat -- security threat, imminent -- in our lifetimes to humanity. And that was the one that really, really got me going," Jones said on "Squawk Box."

The hedge fund manager said he agrees with the idea that there is a 10% chance that AI will kill half the world's population in the next 20 years. He said a recent podcast discussion between Elon Musk and Joe Rogan also colored his view.

Countering the threat of AI requires more spending on security from corporations and new government regulation, Jones said. "President Trump has to get in the game," he said.

Jones isn't alone in warning of the potential dangers of AI or in pushing for government intervention. Earlier this year, for example, the emergence of the DeepSeek AI model out of China led to a sell-off in tech stocks and calls to treat AI as a geopolitical issue.

Jones is the founder and chief investment officer of Tudor Investment and rose to prominence for winning trades at the time of the 1987 stock market crash. He is also a philanthropist and the co-founder of Just Capital and the Robin Hood Foundation.

"I'm not a tech expert," Jones said Tuesday. "I'm not. But I've spent my whole life managing risk ... I'm as good as there is on macro risk management. And we just have to realize, to their credit, all these folks in AI are telling us that we're creating something that's really dangerous -- it's going to be really great too -- but we're helpless to do anything about it."
[2]
Paul Tudor Jones Warns AI Could 'Kill 50% Of Humanity' Without Proper Oversight: 'We're Creating Something That's Really Dangerous'
Although billionaire hedge fund manager Paul Tudor Jones believes AI can be a force for good, he also issued a stark warning on Tuesday suggesting it has the potential to wipe out 50% of humanity.

What Happened: On Tuesday's edition of CNBC's "Squawk Box," Jones, the founder and chief investment officer of Tudor Investment, warned of the negative potential of AI based on takeaways from a technology conference he attended two weeks ago.

Although he acknowledged that he's "not a tech expert," he said the conference featured a panel of four of the "leading modelers" of the top AI models in use today, all of whom agreed that there is at least a small chance that AI could cause significant harm to the human race.

There were three big takeaways from the panel, he said. First, AI can be a force for good, and we are going to start seeing it in education and healthcare in the very near future. Second, AI models are increasing their efficiency and performance by 25% to 500% every three or four quarters. Lastly, AI "clearly poses an imminent threat" to humanity, he said.

When asked what they were doing to prepare for the security threat that AI poses, one of the panel members said he was buying 100 acres in the Midwest and raising chickens and cattle. According to Jones, the panelist also said that the world is not likely to take the threat of AI seriously until an "accident" occurs in which "50 to 100 million people die." He added that no one else on the panel pushed back against that idea.

"Afterwards, we had a breakout session, which was really interesting. All 40 people got up in a room like this and they had a series of propositions and you had to either agree or disagree with the proposition," Jones said. "One of the propositions was there's a 10% chance in the next 20 years that AI will kill 50% of humanity ... the vast majority of the room moved to the disagree side ... all four modelers were on the agree side."

Jones told CNBC that the room then debated the proposition. One of the modelers suggested that it's possible someone could "bio hack" a weapon that could take out half of humanity, given how quickly the models are growing and commoditizing knowledge. "I don't know, 10% seems reasonable to me," Jones said.

He emphasized that he's not an expert in technology, but noted that he's spent his entire life managing risk. "We just have to realize, to their credit, all these folks in AI are telling us we're creating something that's really dangerous -- it's going to be really great too -- but we're helpless to do anything about it. That's, to their credit, what they're telling us, and yet we're doing nothing right now and it's really disturbing," Jones said.

Jones told CNBC that about $250 billion was spent on AI development among the Magnificent Seven tech giants in 2024. According to the four modelers at the conference, AI spending on security was less than $1 billion, he said.

To mitigate the potential downside of AI, the leading AI companies need to dramatically increase spending on AI security, and President Donald Trump has to increase regulation of AI development, Jones said.

"I just want to say one last thing. I'm really concerned about these open-source models and how they are commoditizing and making what were previously indecipherable pockets of knowledge easily accessible," he said. "You can have a bad actor like Osama bin Laden take these things with his cult following somewhere down the road, or you can have innocent actors like, hopefully, those researchers in the Wuhan laboratory who made a mistake, and they can be real threats to humanity."
Billionaire investor Paul Tudor Jones expresses grave concerns about AI's potential dangers, citing a 10% chance it could kill half of humanity in 20 years. He urges increased corporate security spending and government regulation to mitigate risks.
Billionaire investor Paul Tudor Jones has issued a stark warning about the potential dangers of artificial intelligence (AI), describing it as an "existential" threat that requires immediate attention and regulation. Jones, founder and chief investment officer of Tudor Investment, expressed his concerns during an interview on CNBC's "Squawk Box" [1].
Jones's concerns were fueled by his recent attendance at a technology conference where leading AI experts discussed the benefits and risks of emerging AI technologies. The hedge fund manager highlighted a disturbing proposition discussed at the conference: a 10% chance that AI could kill 50% of humanity within the next 20 years [2].
One of the key points emphasized by Jones was the rapid advancement of AI capabilities. He noted that AI models are increasing their efficiency and performance by 25% to 500% every three to four quarters [2]. This exponential growth, combined with the potential for misuse or accidents, contributes to the perceived threat.
To address these risks, Jones advocated for two primary actions:
Increased corporate spending on AI security: Jones revealed that while the "Magnificent Seven" tech giants spent approximately $250 billion on AI development in 2024, less than $1 billion was allocated to AI security [2].
Government regulation: Jones emphasized the need for government intervention, specifically calling on President Trump to "get in the game" and implement new regulations for AI development [1].
Jones highlighted several scenarios that underscore the potential dangers of AI:
Bio-hacking: The possibility of someone using AI to create a biological weapon capable of causing massive harm [2].
Open-source models: Concerns about the commoditization of previously indecipherable knowledge, making it accessible to bad actors [2].
Accidental misuse: The potential for unintended consequences, similar to laboratory accidents [2].
Jones is not alone in his apprehension about AI's potential risks. The emergence of advanced AI models, such as DeepSeek AI from China, has led to market volatility and calls to treat AI as a geopolitical issue [1]. This highlights the growing recognition of AI's impact beyond just technological advancements.
While emphasizing the dangers, Jones also acknowledged the potential benefits of AI, particularly in fields such as education and healthcare. However, he stressed the importance of proactive measures to mitigate risks, stating, "We're creating something that's really dangerous -- it's going to be really great too -- but we're helpless to do anything about it" [2].
As the debate around AI safety and regulation continues to intensify, Jones's warnings add to the growing chorus of voices calling for a more cautious and regulated approach to AI development and deployment.
Geoffrey Hinton, a pioneer in AI, expresses growing concerns about the rapid advancement of artificial intelligence and its potential risks to humanity, including a 10-20% chance of AI seizing control from humans.
3 Sources
Geoffrey Hinton, Nobel laureate and "Godfather of AI," raises alarm about the rapid advancement of AI technology, estimating a 10-20% chance of human extinction within 30 years. He urges increased government regulation and AI safety research.
3 Sources
As tech giants pour billions into AI development, investors and analysts are questioning the return on investment. The AI hype faces a reality check as companies struggle to monetize their AI ventures.
5 Sources
The AI Action Summit in Paris marks a significant shift in global attitudes towards AI, emphasizing economic opportunities over safety concerns. This change in focus has sparked debate among industry leaders and experts about the balance between innovation and risk management.
7 Sources
Yoshua Bengio, a renowned AI researcher, expresses concerns about the societal impacts of advanced AI, including power concentration and potential risks to humanity.
3 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved