9 Sources
[1]
The attacks on Sam Altman are a warning for the AI world
Before allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman's home, the 20-year-old accused attacker wrote about his fear that the AI race would cause humans to go extinct, The San Francisco Chronicle found. Two days later, Altman's home appeared to be targeted a second time, according to The San Francisco Standard. Only a week earlier, an Indianapolis councilman reported 13 shots fired at his door, with a note that read, "No Data Centers," after he'd supported a rezoning petition for a data center developer.

These unsettling incidents have set off alarms in and around the AI industry. There's long been a vocal resistance to the technology, fueled by fears of job displacement, climate impact, and unconstrained development absent safety guardrails. AI workers themselves have warned about serious risks. The vast majority of critiques and demonstrations against AI have been nonviolent -- including local resistance to energy-intensive AI data centers and protests urging a slowdown of the rapidly accelerating technology. Protesters have targeted AI companies directly with tactics like hunger strikes. Groups that advocate against accelerated AI development explicitly denounced violence following the attacks on Altman's home.

Further investigation will take place to determine the attackers' motivations. But the limited information made public so far suggests an escalation of the backlash against the technology and, perhaps, risk to industry players themselves. Over the past few years, there have been a handful of other notable incidents rising to the level of threats and harassment aimed at local officials, according to a database of reports compiled by Princeton University's Bridging Divides Initiative. Last year, for example, a community utility authority board member in Ypsilanti, Michigan, reported that masked protesters visited his home to protest a "high performance computing facility," according to MLive, and one protester allegedly smashed a printer on their lawn.

Shortly after the first attack on Altman's home, the CEO appeared to partially blame critical media coverage for the violence. Days earlier, The New Yorker had published a lengthy investigation, compiled from over a hundred interviews, which found that many people who had worked with him distrusted him and saw inconsistencies in his actions. "There was an incendiary article about me a few days ago," Altman wrote on his personal blog. "Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside. Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives." (He later walked back his rhetoric toward the article in response to a critique on X, writing, "That was a bad word choice and i wish i hadn't used it.")

Others took up the theme as well. White House AI adviser Sriram Krishnan, for example, wrote on X, "I think the doomers need to take a serious look at what they have helped incite and not just rely on 'we condemn this and have said this is not the rational response'. This is the logical outcome of 'If we build it everyone dies'" -- a reference to a 2025 book by AI researchers Eliezer Yudkowsky and Nate Soares.
But Altman also recognized the way his industry could fuel highly emotional reactions from the general public. "A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology," he wrote. "This is quite valid, and we welcome good-faith criticism and debate... While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally."

OpenAI itself was founded on dire warnings about the technology's impact. Cofounder Elon Musk warned in 2017 that AI posed "a fundamental risk to the existence of civilization." After leaving OpenAI's board, Musk joined an open letter calling for a pause on AI development following the release of ChatGPT, and later launched his own AI company, xAI. Following the attack on Altman's home, Musk said on X that he agreed with a post that read, "This is wrong. I dislike Sam as much as the next guy but violence is unacceptable."

Even beyond apocalyptic scenarios, AI is reshaping the world's social fabric in unpredictable ways. Many reports have detailed the psychological spirals that talking to an AI system for days on end can send people down, including allegations of AI-induced psychosis, suicide, and murder. That's layered on top of real-life experiences of job loss due to AI, plus more existential concern about the world AI will create. "Take any labor movement that has been potentially rightly concerned about disruption and change, and then supercharge that with the AI apocalypse, and then supercharge that with chatbot sycophancy and romantic partners that are telling you to kill your ex-husband or telling you to marry your therapist or whatever it is. It's not a huge surprise that we're seeing scary acts like this," says Purdue University assistant political science professor Daniel Schiff.

Schiff says that while we'd never want to see such violent attacks, he hopes that recent events can serve as "a constructive wake up call" for companies and policymakers to be extra thoughtful in the decisions they make about the technology. "It doesn't excuse people who are acting poorly, but it does tell you that something is a little bit off, and not just in the heads of the people who are acting in this way," he says.

A suspect in one of the attacks appeared to have joined the open Discord server of PauseAI, a group that supports a pause on frontier AI development until proven safety guardrails are in place. The organization released a statement saying he had no role in the group and had not attended any events. While PauseAI says it "unequivocally condemns this attack and all forms of violence, intimidation and harassment," it also criticized "a handful of commentators" who it said "have seized on this incident to paint the broader movement for AI safety as dangerous or extremist." PauseAI organizes protests and town halls and encourages followers to call policymakers about their concerns with AI. Its efforts give people with real concerns for the future a way to act peacefully, it says in its public statement. "The alternative to organised, peaceful movements is not silence," the group writes. "It is isolated, desperate individuals acting alone, without community, without accountability and without anyone urging restraint or offering peaceful paths for action. That is a far more dangerous world and it is exactly the world we are striving to prevent."

While not specific to AI-related violence, there are tested ways to build resilience against political violence.
The Bridging Divides Initiative recommends community leaders and officials coordinate responses to risks in advance, and take part in de-escalation training. While Schiff doesn't anticipate extreme rhetoric around AI ending, he suggests trying to turn down the temperature by pursuing positive ways to prepare collectively for the changes AI can bring, such as determining the appropriate social safety nets to deal with job displacement. "We unleashed Pandora's box," Schiff says. "Let's figure out how we're going to open this box more carefully in the future."
[2]
Sam Altman's weekend: second home attack, a furious blog post, and a growing backlash
The OpenAI chief faces criticism, confrontation, and escalating crises
* A critical New Yorker profile reignited long-standing concerns about Sam Altman's leadership and trustworthiness
* There were two separate attempted attacks at Altman's home over the weekend
* Altman responded with an intense and personal blog post calling for de-escalation of rhetoric around AI

For Sam Altman, the weekend was a chaotic and at times dangerous series of crises. The OpenAI chief executive faced questions over a deep New Yorker investigation, responded with an intense and emotional blog post, and endured two attacks at his home, all in roughly forty-eight hours.

The New Yorker story drew on over 100 interviews and documents to revisit the events around Altman's brief ouster from OpenAI in 2023. It cast Altman as an executive surrounded by doubts about his honesty and about his commitment to safety over power. For Altman, whose public image has long depended on sounding like a calm adult around squabbling children, the article threatened something more serious than embarrassment. It sharpened a wider backlash already building around OpenAI, including criticism from AI safety advocates, artists, publishers, regulators, and rivals who argue the company has become too powerful and too slippery.

An attack and angry words

Then at 4 a.m. on Friday, police reported that someone threw a Molotov cocktail at Altman's home and fled, later appearing at OpenAI's headquarters and threatening to burn the building down. Court documents say he carried writings opposed to artificial intelligence and warning of "our impending extinction." Altman responded not with a corporate statement but with a very personal blog post. He posted a family photo and wrote, "Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me." It was a striking move from a chief executive who normally prefers polished futurism to raw confession.

He also made clear that he saw a connection between the surrounding rhetoric and the violence. "Words have power too," Altman wrote. "There was an incendiary article about me a few days ago." He said he had initially brushed aside the suggestion that the story appeared "at a time of great anxiety about AI" and had made things "more dangerous" for him. "Now I am awake in the middle of the night and pissed," he wrote, "and thinking that I have underestimated the power of words and narratives."

Apology and counterpoints

Altman did not simply lash out. He mixed grievance with confession. "I am not proud of being conflict averse," he wrote. "I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation." He also wrote, "I am sorry to people I've hurt and wish I had learned more faster." He presented himself as both fallible and indispensable. He insisted that "the fear and anxiety about AI is justified" and argued that power over AI "cannot be too concentrated," even as OpenAI remains one of the firms doing the most to concentrate it.

Then, on Sunday morning, before the story could cool, the weekend got worse. San Francisco police responded to reports of possible shots fired near Altman's home and arrested two people.
No injuries were reported, but the symbolism was impossible to miss. A tech leader arguing that the temperature around AI had become dangerous suddenly had multiple violent incidents at his home and office.

Backlash all over

Altman is facing angry tirades from multiple directions now, though much of it has nothing to do with the violence directly. Issues of trust, accountability, and the amount of power that now sits inside a handful of AI companies loom large. Public feeling around AI is no longer just argumentative or academic. Altman closed his post with a line that feels almost glib in context: "We should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally."

In a matter of days, Sam Altman's role shifted from a familiar kind of tech-world lightning rod to something more exposed and uneasy. The criticism surrounding him is not going away, and neither is the broader anxiety about AI that feeds it. What changed over the weekend is how visible and volatile that tension became, spilling beyond articles and arguments into something harder to contain. Altman may have called for a cooling of rhetoric, but the moment suggests the conversation around him, and the technology he represents, is only getting louder, sharper, and more difficult to control.
[3]
AI companies know they have an image problem. Will funding policy papers and thinktanks dig them out?
The aggressive effort by major players aims to reshape the narrative as polls show increasing public disapproval of AI

OpenAI made a surprise announcement this week - not an update to ChatGPT or another multibillion-dollar datacenter - but a policy paper that called for a reimagining of the social contract based around "a slate of people-first ideas". It's the latest move in an aggressive effort by the major AI players to reshape the narrative around their industry, as polls show public disapproval of AI increasing.

OpenAI's 13-page paper, titled Industrial Policy for the Intelligence Age, follows its surprise acquisition of tech-friendly podcast TBPN and its announcement of plans to open a Washington DC office that will feature a dedicated space called the OpenAI workshop for non-profits and policymakers to learn about and discuss the company's technology. OpenAI's rival Anthropic has meanwhile announced its own thinktank, the Anthropic Institute, which similarly proclaimed an intention to explore how the growth of AI would disrupt society.

As disruptions from AI become more tangible and calls for greater scrutiny of big tech companies grow louder, the industry appears to be both recognizing the widespread discontent and looking for ways to reframe the debate. Sam Altman, OpenAI's CEO, talked about the public perception problems facing AI firms at investment firm BlackRock's conference in Washington DC last month: "You can see a bunch of potential headwinds. AI is not very popular in the US right now. Datacenters are getting blamed for electricity price hikes, almost every company that does layoffs is blaming AI whether or not it really is about AI," he said.

Still, the company's marketing push is not only about burnishing its image. In developing thinktanks and research institutes while at the same time spending millions on lobbying efforts, some experts also see AI firms attempting to undercut independent efforts to regulate the industry. "The OpenAI paper has a lot of the sounds of wanting more regulatory oversight," said Sarah Myers West, co-executive director at the non-profit AI Now Institute, which advocates for more public accountability over the AI industry. "But then when you look under the hood, they have lobbied very successfully for an administration that has taken a very aggressive deregulatory stance toward AI." OpenAI and Anthropic did not respond to a request for comment.

OpenAI's paper marks a shift in tone that appears to reflect worries within the company around how its technology is being publicly received. Rather than talk about how workers can adapt to the new technology to avoid falling out of the labor market, the document talks about "building a resilient society" and asks policymakers to create guardrails for safe AI. The policy ideas include headline-generating proposals such as a four-day work week and the creation of a "public wealth fund" that would return profits directly to citizens - a spin on the tech industry hobbyhorse of universal basic income. The paper stresses the proposals shouldn't be considered firm answers on how to address AI's impact on society, but rather "a starting point for a broader conversation about how to ensure that AI benefits everyone". "Unless policy keeps pace with technological change, the institutions and safety nets needed to navigate this transition could fall behind," the paper states. "Ensuring that AI expands access, agency, and opportunity is a central challenge as we move towards superintelligence."
Critics of the paper characterize it as more of a public relations ploy than an actual policy document. And, they argue, at its crux it shifts responsibility away from the company and towards the public and lawmakers. Much of the paper describes OpenAI's vision of an AI-dominated world as something of a foregone conclusion. While presenting lofty goals for government and society, OpenAI is framing its technology as an inevitable force to be contended with rather than a product that can be regulated both internally and through legislation, experts argue. "What they've done very cannily here is sort of outline a set of social welfare goals while abdicating any responsibility or any meaningful commitment of resources toward those goals," Myers West said.

In fact, critics argue, while the company is advocating for lawmakers and the public to take up responsibility, it is lobbying hard behind closed doors for laxer regulations and trying to block state rules that would rein it in. "If we wait around for Congress to act, then these companies will just be able to grow unregulated," said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center. "Which is, of course, what they want."

OpenAI spent nearly $3m on lobbying in 2025. The company's president, Greg Brockman, co-founded a pro-AI super PAC that raised more than $125m last year. The PAC has already run ads in New York against congressional candidate Alex Bores, who is in favor of AI regulation. The company is backing a bill in Illinois that would shield AI firms from liability in cases where an AI model causes serious societal harms such as creating a chemical weapon or causing mass death, Wired reported last week.

Lack of awareness at the state government level around the still-nascent technology has provided the AI industry an opening to influence how regulation may look, according to Fitzgerald. "They're taking advantage, essentially, of the fact that these folks have short sessions and no staff, to convince them that any regulation of AI will stifle innovation," Fitzgerald said.

OpenAI is not alone in its lobbying effort. Rival Anthropic has poured more than $3m into its own lobbying efforts and backed a different super PAC, one with a different set of goals more welcoming of regulation. Despite Anthropic's recent fight with the Department of Defense over red lines on military use of its models, the AI industry also remains closely aligned overall with Donald Trump's White House, and the administration continues to act in its interest. The Trump administration has attempted to kill state-level regulation of AI in multiple ways, adopting the industry's argument that a patchwork of laws would hamper the technology and stifle economic growth. Trump signed a legally contested executive order last year that attempts to block states from imposing limits on AI. In recent months, the White House pressured a Republican Utah state senator not to propose a bill that calls for transparency and child protection regulations on AI.

The building out of thinktanks, public relations pushes, and increases in lobbying all come as the AI industry grapples with a pervasive image problem in its home country and is already becoming a focus of political campaigns during upcoming midterm elections. Polls have shown a deep and growing distrust among the public towards AI, not just in regards to its potential effects on labor but also as a societal force.
A Pew Research Center survey released last September found that only 16% of Americans believe that AI will help people think more creatively, while only 5% of Americans believe it will help people better form meaningful relationships with one another. An NBC News poll last month additionally found that only 26% of voters had a favorable opinion of AI and that the technology's net negative rating was 2 percentage points below that of US Immigration and Customs Enforcement (ICE).

The exact reasons why people feel so negatively toward AI - whether it was the industry's initial sales pitch that its technology could destroy the world, economic anxieties around job displacement or longstanding antipathy toward big tech - are harder to pin down. What is clear is that the AI industry has begun to look at the fledgling movement of datacenter opposition, AI-restricting bill proposals and public distaste with growing concern.

The industry in recent years has staffed up to make its case to lawmakers and the public. Corporate-owned labs have pulled in formerly independent academics and researchers. Myers West said that in the last few years these labs have also moved away from publishing in peer-reviewed journals toward in-house publications over which they exercise greater control. Even though respected researchers have left academia and non-profits to work at these companies, the dynamic raises broader questions about the incentives behind and corporate ownership of their research. "I don't think that you could say that they're in any way meaningfully independent," Myers West said.
[4]
From Molotov cocktails to data center shutdowns, the AI backlash is turning revolutionary | Fortune
For years, the resistance to artificial intelligence (AI) looked manageable. There were academics writing open letters, Hollywood writers striking over contract language, think-tank reports warning of job displacement. Tech executives nodded, pledged responsibility, and kept building as fast as they could. Then someone threw a firebomb at Sam Altman's house.

On Friday, a 20-year-old man named Daniel Moreno-Gama traveled from Spring, Texas, to San Francisco's Pacific Heights neighborhood and hurled an incendiary device at the gate of OpenAI CEO Sam Altman's $27 million home, igniting a fire on the exterior gate. No one was injured, but Moreno-Gama was arrested approximately an hour later outside OpenAI's headquarters -- where he was allegedly trying to shatter the building's glass doors with a chair and threatening to burn the facility to the ground. He is now facing state charges of attempted murder and federal charges that could include domestic terrorism. Authorities afterward found a manifesto warning of humanity's "extinction" at the hands of AI and expressing an urge to commit murder, and a disturbing personal Substack.

The next morning, Altman posted a plea for sanity on his X account, attaching a photo of his husband and young child. "Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me," Altman wrote. To no avail. Early Sunday morning, two more Gen-Zers, one 23 and the other 25, were arrested after shooting a gun near Altman's Russian Hill home (it is unclear at this time if the shooting was targeted).

After the attacks, pundits and professional opinion-havers pointed fingers in every direction: at the StopAI crowd, a radical group that has staged protests and flash-subpoena-deliveries to try to halt the pace of artificial intelligence altogether; at the news media, which has critically covered Altman and his peers; and at Altman himself, for stoking fear about AI displacement with his sometimes-apocalyptic rhetoric. Among the older commentariat, however, the dominant note was remorse and well wishes for Altman. But in the younger, less formal corners of the internet, like Instagram and TikTok, the comments under every post about the attacks generally run in one direction. "He's not scared enough." "Based do it again." "FREE THAT MAN HE DID NOTHING WRONG." "Finally some good news on my feed."

Those comments are ugly, but for those who've been paying attention to the anti-AI backlash build, not shocking. At all. The middle of the distribution of Gen Z's feelings about AI ranges from apprehension to downright hatred. Despite the fact that more than half of Gen Z living in the U.S. uses AI regularly, according to a recently released Gallup poll, less than a fifth feel hopeful about the technology. About a third say the technology makes them angry. And nearly half say it makes them afraid. Gallup's own senior education researcher, Zach Hrynowski, blamed the bad vibes at least partially on the dwindling job market. The oldest Zoomers, he told Axios, are the angriest, as they are "acutely aware" of the ability of a technology to transform cultural norms without a second thought, unlike Gen Xers, who are trained to see new technology as toys and are still "playing around with AI."
Indeed, the job prospects for recently graduated Gen-Zers are abysmal; Bloomberg just reported that 43% of young graduates are "underemployed," meaning they are taking jobs that require less education than they have. But that can't explain all of the vitriol. Perhaps some of it is the yawning gap between promise and reality, symbolized by Altman himself. The OpenAI CEO has suggested that AI will usher in an era of "universal basic compute," that people will barely need to work, that the future will be almost frictionless. That isn't happening as of 2026. Instead, inflation remains stubbornly untamable, as it has throughout the decade; consumers have never felt worse about their financial state, and Gen Z feels like they're entering a "starter economy" without plentiful jobs or affordable homes. And so there's a real mismatch, as Alex Hanna, a professor and researcher who studies the social impacts of AI, put it, "between consumer confidence and people's pocketbooks and budgets, and what the technologists and the AI companies say the future is supposed to look like."

This is not just a Gen Z problem, either. In the American heartland, data centers are being proposed at a pace that local communities never anticipated and for which they were never asked permission, and they're increasingly pushing back. The numbers are serious. According to a report from 10a Labs' Data Center Watch, at least $18 billion worth of data center projects have been blocked and another $46 billion delayed over the past two years due to local opposition. At least 142 activist groups across 24 states are now actively organizing to block data center construction and expansion. A Heatmap Pro review of public records found that 25 data center projects were canceled following local pushback in 2025 alone, four times as many as in 2024, with 21 of those cancellations occurring in the second half of the year as electricity costs grew. The concerns driving this resistance are less about existential AI risk and more about typical kitchen-table complaints; communities consistently cite higher utility bills, water consumption, noise, impacts on property values, and green space destruction as their primary objections. Water use is mentioned as a top concern in more than 40% of contested projects, according to the same Heatmap Pro review.

Meanwhile, Hanna noted, companies keep holding the threat of AI replacing workers over their employees as "leverage." She added, "Employers are making room for AI investments. They want to show that they can lay off people and do what they're currently doing with a decrease in headcount." That dynamic became evident in February, when a Substack analyst firm called Citrini Research published an AI doomsday scenario that went so viral it caused a multibillion-dollar market selloff. Days later, Jack Dorsey obliged the anxiety by cutting Block's workforce nearly in half, hinting that the cuts were due to AI innovation, and Wall Street gave him a standing ovation: the stock rallied as much as 25% the next day. Block was an outlier, but a pattern has begun to emerge; AI was cited in more than 55,000 U.S. layoffs in 2025 -- more than 12 times the number attributed to the technology just two years earlier, according to Challenger, Gray & Christmas. All that being said, Morgan Stanley's Michael Gapen wrote earlier this week that the AI story is not having a macro impact on the economy just yet, while Goldman Sachs economists forecast the long-term disruption at 6%-7% of jobs in the U.S.
But the anger is also more intimate than just jobs. Much has been made of Gen Z turning 2026 into the year of friction: having real experiences, with real people, to make things feel hard and awkward again instead of optimized into a primordial-soup, flow-of-consciousness state of being. Hanna pointed to a recent TechCrunch report about a woman whose ex-boyfriend used an OpenAI chatbot to fabricate a psychological profile of her and send it to her friends and family -- with the chatbot validating his grievances in what Hanna described as operating "in a sycophantic manner, telling him he was right and she was wrong."

The backlash, Hanna argued, is not down to one thing. There are workers who feel threatened, consumers who thought more would come, and people who have had AI deployed against them in intimate ways. Lumping all of these together -- with the fringe extinction-risk crowd, or the StopAI protesters -- misses what's actually driving the anger. "I think the vast majority of people who are angry at AI are regular consumers," Hanna said. "People who were promised one thing, especially online, and they're just getting a completely different experience."
[5]
'It has a real 'ring of power' dynamic to it, and makes people do crazy things' says Sam Altman on AI after alleged arson attack on home
"We should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally." Last week, someone reportedly threw an incendiary device at OpenAI chief Sam Altman's house. Though Altman reports "it bounced off the house and no one got hurt", he has taken to his blog to share his thoughts on the incident. Altman initially lays out his beliefs. He claims that "working towards prosperity for everyone" is a "moral obligation" and that AI "will be the most powerful tool for expanding human capability and potential that anyone has ever seen." He then argues that people are justifiably worried about AI and that it will have negative consequences, but it must be democratised and must be adaptable. Altman says, "No one understands the impacts of superintelligence yet, but they will be immense." He then reflects on his own character, noting he has made many mistakes in his time with OpenAI and that he is "a flawed person in the center of an exceptionally complex situation." Altman notes artificial general intelligence (AGI), AI that can match human intelligence, has a certain power to it that makes people act in peculiar ways. The OpenAI CEO says, "My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: 'Once you see AGI you can't unsee it." He even likens it to the one ring in The Lord of the Rings, which is wielded by Sauron to dominate others and corrupts most of those who touch it. An interesting comparison for someone trying to sell the benefits of AGI, though Altman is quick to point out that it's not like the ring itself, but just a good summary of the argument around who controls AGI, if anyone. "It has a real 'ring of power' dynamic to it, and makes people do crazy things. I don't mean that AGI is the ring itself, but instead the totalizing philosophy of 'being the one to control AGI." Altman says that his solution is simply to share the one ring -- an AGI hula hoop. He argues that "individual empowerment' and democratic systems must be leveraged to share responsibility and ownership. He says, "We want to be a voice and a stakeholder, but not to have all the power." Though this does make one wonder why an unelected leader would be a stakeholder in the tool if the goal is to democratise it? According to the San Francisco Police, we don't yet have the reasoning behind why the person who threw an incendiary device at Altman's house decided to do so. The 20-year-old male currently believed to be responsible is under arrest and being investigated. Altman finishes his blog with a plea not to throw any more. "While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally." Though it's worth noting that the debate on these tools is happening, whilst critics fervently argue against OpenAI's safety policies. Just last month, a hardware leader left due to OpenAI's ongoing contracts with the US government and potential for "surveillance of Americans without judicial oversight and lethal autonomy without human authorization". It also reportedly kicked around the idea of pitting world leaders against each other, by, as our Tyler Wilde puts it, "positioning OpenAI as a kind of nuclear weapon that they'd better compete to invest in, lest they be left behind." Seems like the debate has been going on for some time and likely won't end soon.
[6]
'Find a way to work together' -- Sam Altman's message to the Department of Defense and Anthropic
Altman weighs in on why AI companies can't afford to keep escalating their fights with government
* Sam Altman urged the government and Anthropic to de-escalate tensions and work together on AI governance
* He argued that governments should hold power over AI and national security decisions
* He said he still mostly trusts the government, while accepting that many don't

Relations between Anthropic and the U.S. government have become an unusually combustible flashpoint in the broader fight over AI regulation and control. The escalating fight began when negotiations with the Pentagon over how Anthropic's Claude AI model could be used broke down over the company's refusal to remove safeguards against fully autonomous weapons or mass domestic surveillance. Responses from Washington, including an executive directive banning federal agencies from using Anthropic's technology and labeling the company a "supply chain risk," led to lawsuits alleging constitutional violations, and a federal judge has since temporarily blocked the Pentagon's actions. OpenAI CEO Sam Altman apparently sees harmony as necessary on both ends of the argument. "Find a way to work together. like stop, stop the stuff on both, stop the escalation on both sides and find a way to work together," Altman said in an interview with Laurie Segall.

AI's security demands

AI companies have hyped the technology's potential in realms like national security, even as they lobby for a light regulatory touch. Altman has apparently concluded that the companies cannot have it both ways. If AI is as geopolitically consequential as everyone keeps insisting, then governments are going to want a hand on the wheel. "I don't think it works for our industry to say, Hey, this is the most powerful technology humanity has ever built," Altman said. "It is going to be the high order bit in geopolitics. It is going to be the greatest cyber weapon the world has ever built. It is going to, you know, be the determinant of future wars and protection. And we are not giving it to you." Of course, whether people feel comfortable with the government controlling such consequential technology is another question. Altman said he still mostly trusts the system of checks and balances, though he did acknowledge that many people currently "really don't trust the government to follow the law." It's a position that stands out compared to some AI leaders who are more suspicious of the government. Nonetheless, he thinks it would be a mistake not to help the government with national security, especially in cyber infrastructure. "I think we have to work with government, but the intensity of the current mood of mistrust, I was miscalibrated on and I understand something there now," he said.

Trust in AI control

Essentially, Altman and others aligned with him want to work with governments, even as public distrust over the misuse of AI grows. "One of the most important questions the world will have to answer in the next year is, are AI companies or are governments more powerful? And I think it's very important that the governments are more powerful," Altman said. "The future of the world, and the decisions about the most important elements of national security should be made through a democratically elected process. And the people that have been appointed as part of that process, not me, and not the CEO of some other lab." Altman kept coming back to the way the power of AI is arriving faster than institutions, governments, or most humans can calibrate for it.
The systems are getting more capable, and their potential for misuse grows in tandem. The stakes are higher and more serious all the time. Big fights between those who are supposed to devise safe regulations and the companies that are, at least theoretically, trying to steer the technology in an ethical direction represent an enormous problem. A diplomatic shrug urging diametrically opposed sides to "find a way to work together" won't likely resolve matters. Still, at least it means Altman knows the answer won't be obvious, even if he phrased it as a request to ChatGPT.
[7]
AI products are reaching further into our lives. Does it matter who controls the companies behind them? | Van Badham
The joke on the internet asks: "What are the seven most terrifying words in the English language?" The answer: "Ronan Farrow's been asking questions about you." The investigative journalist has a piece in The New Yorker this week, where the subject of said inquiries is Sam Altman, the billionaire founder and CEO of OpenAI, the company that owns ChatGPT. Farrow's new piece suggests timely, broader questions of who has power, who should have it, who absolutely shouldn't ... and what we do if they have it, anyway.

OpenAI's products now reach into everything, from your smartphone to defence contracts to law enforcement. Its operations have a growing hunger for electric power; its datacentres are spreading across the planet; and the labour market implications of its potential to replace jobs suggest an industrial upheaval for white-collar workers on a world-changing scale. The commercial momentum of this company is such that, despite a projected loss of $14bn in 2026 reported in early March - tripling estimates made in 2025 - OpenAI still held an eye-watering market valuation of $852bn by March's end.

Farrow's piece claims the OpenAI board had doubts about whether they could trust Altman when they fired him in 2023. As per Farrow, Altman then convened a "war room" comprising crisis communicators - and some influential company investors - to defend his reputation. He was reinstated five days later; reportedly, pressure from investor Microsoft and a threat from 700 staff to follow Altman to any competing venture were critically persuasive in discussions. Three years later, the company, with a CEO its own board allegedly did not trust, has publicly concluded a deal with the US military to use its technology in classified operations.

The deal was announced in the wake of its AI rival, Anthropic, expressing concern that the US government could, potentially, employ its own proprietary AI tools as instruments of "mass surveillance" and for "fully autonomous weapons". The Trump administration emphatically ceased business with Anthropic, and OpenAI leapt in. Facing a backlash, Altman described the original deal OpenAI concluded with Pete Hegseth's department as "opportunistic and sloppy". The company subsequently released a statement reassuring the public its Pentagon agreement had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's". At OpenAI's word, the company believes "strongly in democracy" and that the "only good path forward requires deep collaboration between AI efforts and the democratic process".

How perplexing! As Jake Laperruque from Tech Policy observes, OpenAI's cited "red lines" against mass domestic surveillance, direct autonomous weapons systems and high-stakes automated decisions seem to be largely indistinguishable from those "that caused the planned Anthropic agreement not only to fail, but to explode in shocking fashion". I'm also curious regarding the company's interpretation of "deep collaboration" with the democratic process. Perhaps we could glean its nature from the information that a top OpenAI executive, Greg Brockman, was revealed as a $25m donor to a Trump fundraising vehicle in January. Brockman is also a participant in an AI "SuperPAC" fundraising vehicle that in 2025 raised $125m to further its goal of backing candidates who support national AI regulations rather than state-by-state rules.
In December last year, Trump signed an executive order limiting state regulation of AI, preferring a "minimally burdensome national standard" to regulate the technology. I'm sure it's just a coincidence. So are they all, all honourable men. And yet, somehow, concerns do nag about the character of decision-making processes regarding a technology that OpenAI's own staff researchers believe is a "threat to humanity". Ethical anxiety has inspired activist and historian Rutger Bregman to start a "QuitGPT" campaign for a worldwide boycott of Altman's company. Meanwhile, questions remain over the role of AI tools such as Palantir's Maven in US strikes on Iran, including the bombing of a girls' school in Minab.

The rubble of that school is the grotesque terrain over which the question of who gets entrusted with power over tools that could kill us all must be asked - because AI is just one of the mechanisms for our own mass annihilation proliferating now. Those who gain power over these may be good people, bad people, misunderstood people or the overwhelmingly more common mixture of every kind of person on any given day. Whether their talents are for computer programming or demagoguery, every social organisation, from the local tech startup to the collective representatives of nation-states, has to affirm meaningful social, political, legal and economic guardrails that channel their available options away from human fallibility and collectively minimise the harm they can do.

Dear god, haven't we learned by now that self-regulated enterprises do not regulate in the interest of anyone or anything beyond their commercial or political self-interest? Sanctions, recalls, suspensions and multiple supervisory stakeholders with the authority to enforce these are what keep us alive. The moment demands a global and unified willingness to regulate the complex risks posed. It's a problem we cannot outsource to Farrow or AI. Our shared fates depend on sitting down with one another and all our human fallibilities, and working it out for ourselves.
[8]
Monday Morning Moan - madness, Molotovs and Mordor. Is one AI 'Ring to rule them all' really worth all this?
Might that be the most important thing anyone has said about AI this year to date? Ok, it's a hoary Lord of the Rings metaphor, but look beyond the cliché and consider the sentiment in relation to that other sought-after magical totem, Artificial General Intelligence (AGI), always famously five minutes, five weeks, five months, or five years around the corner, depending on which AI-hungry Gandalf is chasing attention at any given time. Despite their Tolkien-ite tone, those five opening words didn't come from Middle Earth's hobbit Samwise Gamgee, but from another Sam altogether - OpenAI's Sam Altman, a man who, to say the least, hasn't had a good few days.

Now, I've got my doubts about Altman, his execution of his role, and his perception of his responsibilities in the AI sector, but he doesn't deserve what happened to him late last week in any way, shape or form. According to police reports, Altman's San Francisco home, which he shares with his husband and young son, was the subject of an attack in the early hours of the morning by a man throwing a Molotov cocktail. Fortunately no-one was injured. Some hours later the same individual was arrested after threatening to burn down OpenAI's HQ. The attack was confirmed by Altman himself on Saturday in an emotional blogpost in which he posted a picture of his husband and child, stating:

Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me. The first person did it last night, at 3:45 am in the morning. Thankfully it bounced off the house and no-one got hurt.

At which point, let me add diginomica's voice to that last sentiment. While this is now clearly an active police matter and therefore we need to be careful what we say here, let's just make our stance irrevocably clear - if anyone reading this thinks that such actions are in any way acceptable or justified, please find another website to visit. You're not wanted here.

But of course there have been plenty of voices raised online to accuse Altman of playing the victim here. The briefest of trawls on X - about all I could stomach, frankly - made that clear. Alongside the predictable disgustingly homophobic and antisemitic filth that apparently counts as free speech when exercised by the pond life that infests Elon Musk's hellsite of a bigots' echo chamber these days, were many other negative reactions, including:

Did this actually happen or did you pay someone to distract from the article proving you are a sociopath? [NB - yes, it did happen! Shall we move on to questioning the moon landings next?]

Or:

Oh congratulations, Sam. You finally got your "victim moment." Molotov. Threatening letters. Husband and kid on camera. A blog post with family photos already queued up. What a perfect little stage. Lighting, emotion, music -- all f*cking on point. You got attacked? In the same week you're getting dragged by the whole country, investigated by a state AG, sued by Musk, and sh*t on by your own users? That timeline is cleaner than a Hollywood script.

Meanwhile others were more concise in their opinions: Scam Altman.

Now, Altman did make an unforced error in his blogpost that he came to regret, when he said:

There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.
Now I am awake in the middle of the night and pissed, and thinking that I have under-estimated the power of words and narratives.

That "incendiary article" was The New Yorker exposé by Pulitzer Prize winner Ronan Farrow, the man whose investigative journalism brought down Harvey Weinstein and helped trigger the 'Me Too' movement. His treatise last week on Altman is an excoriating read that does not present the OpenAI boss in a favorable light, to say the least! Altman certainly can't be expected to have enjoyed The New Yorker piece, but to link, even by implication, "incendiary" news copy to an incendiary attack on him and his family is stretching a point and undermining his own position here. To his credit, he appears to have realised this and later said:

That was a bad word choice and I wish I hadn't used it. It has been a tough day and I am not thinking the most clearly that I ever have.

OK, fair enough. I can't imagine any of us would be fully on top of our game if the same sort of thing happened to us. But the bigger problem for Altman when it comes to the article is that it simply isn't something that can be dismissed as a 'gotcha' hit piece or - heaven help us! - that old stand-by when confronted with something you don't care for, Fake News! It's 18 months of hard work, based on hundreds of interviews, internal and external memos, HR documents, Slack messages, and private notes. There's no way that The New Yorker's legal team let that piece out into the wild without having gone over everything with the finest of fine-tooth combs. This wasn't just quickly run through ChatGPT for checking, that's for certain!

But that aside, Altman proceeds to use the majority of his blog seemingly to dive deeper into his worldview, his hopes, his fears, yada yada yada. Now, some of this comes across as cloyingly sickly and banal in the extreme, with echoes of 1970s beauty pageants as he expresses his ambition to be "working towards prosperity for everyone". Yeah, yeah, yeah - that's why OpenAI went from being a not-for-profit to a 'very-much-up-for-as-much-profit-as-we-can-get'. That may be an aspiration that is a long way from being realised, but the direction of intended travel is clear.

Then there are some platitudes about fears around AI and the compulsory gestures in the vague general direction of regulation and safety, with Altman quick to insist that this needs to be everyone's responsibility. He's right, although I do wonder if his rationale for this is coming from the same place as mine, but let's leave that for another day. But he's on a roll now:

AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions...I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.

Fine words. Backed up by...? Pausing only for a pre-court drive-by sniping at his bête noire Musk - "...remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI..." - Altman declares himself in favor of collective responsibility, which certainly isn't a narrative that chimes easily with some of the output of Farrow's investigation. But Altman goes further - it seems Mr Sam hates falling out with anyone, despite the long, long list of those who have left OpenAI under varying circumstances in its short life. He insists:

I am not proud of being conflict-averse, which has caused great pain for me and OpenAI.
I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission.

In other words, mea culpa, mea maxima culpa!!! Everyone got that before the Musk vs OpenAI trial gets underway in a couple of weeks? There's more in case you need it:

We knew going into this how huge the stakes of AI were, and that the personal disagreements between well-meaning people I cared about would be amplified greatly. But it's another thing to live through these bitter conflicts and often to have to arbitrate them, and the costs have been serious. I am sorry to people I've hurt and wish I had learned more faster.

So far, so self-aware/self-centered/self-pitying - delete as applicable according to your personal assessment. But then things get more interesting as Altman turns his attention to OpenAI's place in the emerging AI industry. Abandoning Tolkien for a moment, another authorial genius is called upon as Altman ruminates on "why there has been so much Shakespearean drama between the companies in our field" in such a short space of time. Now from where I sit, once you've seen one AGI press release pronouncement, you don't need to worry about unseeing it; there will be another along in ten minutes to take its place! But for Altman, this appears to be a 'see the face of God' moment - and we're back to tortuous Tolkien imagery to back it up:

It has a real 'ring of power' dynamic to it, and makes people do crazy things. I don't mean that AGI is the ring itself, but instead the totalizing philosophy of "being the one to control AGI". The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.

As noted above, throwing a Molotov cocktail at someone's house, anyone's house, is wrong. End of story, no arguments, go directly to jail! That said, Altman's essay is a fascinating, if flawed, insight into his thinking, as he calls for stabilization while simultaneously pushing out and pursuing the most societally de-stabilizing technology ever seen. The contradictions come thick and fast. The 'riches for all' angle might ring a bit truer if Altman hadn't trashed the non-profit origins of OpenAI in favor of chasing ever more enormous funding rounds, in the process seeding outrageous growth expectations among a Wall Street 'greed is good' brigade that is going to hit back in the nastiest way possible when those voracious appetites are not fed.

As for prosperity for all coming down the tracks, maybe that will come as some crumb of comfort to the millions of content creators around the world whose copyright concerns have been trampled over as AI models run wild on unauthorised training exercises? Or maybe it won't...

And there's a claim Altman makes at one point in his tome that "empowering all people, and advancing science and technology are moral obligations for me". For some commentators, claims of morality might sit uneasily at the moment, what with memories still fresh of OpenAI's unseemly rush to fill the Anthropic-shaped hole at the US Department of War, apparently ready to sign up to terms that its rival saw as ethical red lines that could not be crossed.
He does make one plea that surely we can all get behind:

While we have that debate [about who controls that AI Ring to rule them all], we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.

I will admit here that members of my own profession have to accept some responsibility for the fevered nature of the debate around AI's impact. Not a day passes without some mainstream media headline or another along the lines of, 'The bots are coming for your job!'. During the Industrial Revolution, the Spinning Jenny came for the jobs and the Luddites took to violent protest, and lost - and that was without the benefit of Uncle Elon's propaganda platform to boost discontent and misinformation - free speech, remember! - out to the widest possible audience.

But alongside such lurid media reaction, there are other factors in play that are not helpful. A US Government that overnight declares that a US AI success story is now to be considered an enemy of the state run by "left-wing nut jobs" - © D.J.Trump - because it won't toe the line when it comes to changing a contract, is not conducive to a culture of calm, rational, and utterly essential ethical debate at a time when that debate is desperately needed!

Now is the time when we do need to have ferocious disagreements about AI and its direction of travel, but in a productive, civilised, and societally/globally beneficial way. Instead we have people thinking it's OK to firebomb a house in which a toddler is sleeping in his bedroom, just because his dad is head of an AI firm. The Eye of Sauron really is watching over us, isn't it?
[9]
Anthropic's dispute with US government exposes deeper rifts over AI governance, risk and control - SiliconANGLE
The escalating dispute between Anthropic PBC and the U.S. Department of Defense is exposing a fundamental tension in the artificial intelligence market: who ultimately controls how powerful AI systems are used. What began as a contracting and policy disagreement has evolved into a broader debate over national security, corporate responsibility and the limits of self-governance in emerging technologies.

At the center of the conflict is the Pentagon's designation of Anthropic as a "supply chain risk," a move that effectively bars the company's models from use in defense-related systems. President Donald Trump later ordered all federal agencies to stop doing business with Anthropic. That decision has been challenged in court and is now under a preliminary injunction, but its implications are already reverberating across enterprise information technology and AI development practices. A Gartner Inc. report in late March said the episode underscores how deeply embedded AI models have become in software systems and the vulnerability to policy shocks that this creates. "Anthropic's exclusion underscores how quickly embedded model dependencies can convert into structural technical debt," the firm wrote, noting that even minor changes in model behavior can require "broad functional revalidation" and potentially disrupt production systems.

At the heart of the dispute is Anthropic's insistence on restricting how its models can be used, particularly in areas such as mass surveillance and autonomous weapons. That stance has triggered a wider debate over whether private companies should define ethical boundaries for technologies with societal and geopolitical implications. SiliconANGLE contacted numerous AI experts and industry executives. Though most declined to comment on the politically loaded issue, those who agreed to be quoted largely backed Anthropic's right to dictate restrictions on the use of its technology. Several argued that the Pentagon's framing of the issue as a supply chain risk is overstated. The conflict appears less about security vulnerabilities and more about disagreements over acceptable use, said David Linthicum, a cloud and AI subject matter expert. "If a company says it does not want its AI used for certain military or domestic surveillance purposes, that is a policy and governance issue," he said.

Carlos Montemayor, a philosophy professor at San Francisco State University, took a more critical view of the government's position, suggesting the designation may be punitive. "The government is punishing Anthropic for not following orders," he said, calling the move unjustified and potentially a signal to other AI providers to align with federal expectations. That divergence in interpretation reflects a broader ambiguity: Should AI systems be treated like interchangeable software components or as strategic assets subject to tighter alignment with state priorities? Linthicum supports giving companies the right and responsibility to set limits. "If a company builds powerful technology, it has every right to say what it will and will not support," he said. However, he emphasized that those decisions shouldn't occur in isolation. Governments, courts and customers all have roles in shaping acceptable use.
Valence Howden, an advisory fellow at Info-Tech Research Group Inc., echoed that view, arguing that organizations "have a responsibility to define the ethical boundaries and use cases of their technologies," particularly as AI systems take on more autonomous roles.

Others were less comfortable with corporate self-regulation, though. Montemayor argued that allowing companies to set their own ethical frameworks is "unacceptable and dangerous," given the scale and impact of AI systems. "From an ethical perspective, companies should not dictate from their narrow engineering and commercial point of view what is right or wrong for societies around the globe," he said. Montemayor called for international regulation grounded in human rights principles, warning that current approaches create "too much uncertainty about the future of this technology."

Gartner analysts suggest that these decisions often come down to business tradeoffs. Contractual restrictions on how technology can be used are common, but enforcing them is difficult. In Anthropic's case, limitations around autonomous weapons may reflect not only ethical concerns but also technical constraints. "Frontier AI systems are simply not reliable enough to power fully autonomous weapons," wrote Anthropic Chief Executive Dario Amodei.

At first glance, broad government restrictions on doing business with Anthropic may appear to be a devastating blow to the company. But despite the potential loss of lucrative government contracts, several experts believe Anthropic's stance could strengthen its position in the enterprise market. Marc Fernandez, chief strategy officer at Neurologyca Science & Marketing SL, framed the issue in terms of long-term trust. "Holding the line on restrictions is going to be expensive [for Anthropic] in the short term," he said, but clear boundaries can signal reliability in high-stakes environments. "Over time, that kind of reliability becomes a massive competitive advantage."

Linthicum agreed that consistency matters. "A lot of enterprise customers want to know that a vendor has clear values and will stick to them under pressure," he said. Anthropic's position could thus make it "more attractive to many customers, not less," provided its policies are clearly defined and consistently applied. Info-Tech's Howden also highlighted the trust factor, noting that maintaining restrictions "has likely benefited them [Anthropic] in an industry that hasn't always been built on trust and honesty."

Some observers said the dispute reflects a deeper misunderstanding of what AI systems are and how they should be governed. Anaconda Inc. Chief Executive David DeSanto noted in a LinkedIn post that the Pentagon appears to be treating AI like "the next version of Microsoft Excel -- a tool you buy, own and use however you want." "But that's not what this technology is," he said. Unlike spreadsheets, AI systems are capable of "judgment and autonomous action," requiring new governance frameworks that can't be retrofitted onto existing procurement and oversight models. That gap, DeSanto said, is evident not only in government but across enterprises, where leaders often assume they can "bolt AI onto existing infrastructure and figure out the hard stuff like governance responsibilities later."

Anaconda Field Chief Technology Officer Steve Croce warned against "normalization of deviance," or the tendency for organizations to lower their guard as long as systems continue to function without obvious failures.
"When companies like Anthropic start to pull back safety standards, it sets a precedent," he wrote. Enterprises need to prioritize "AI sovereignty," or the ability to define and enforce their own guardrails, rather than relying on external providers. Beyond the ethical and political dimensions, the Anthropic dispute is likely to force organizations to confront practical challenges in AI adoption, Gartner notes. Unlike productivity software, replacing a model is not simply a matter of switching back ends. It often requires requalifying entire workflows, retraining systems and recalibrating performance benchmarks. "A forced model swap is not just a verification task," the firm noted. "It is a requalification of the AI-dependent system." This creates a paradox: Organizations that invest heavily in optimizing AI-driven workflows may achieve higher productivity, but face greater disruption when policy changes force them to switch providers. As a result, Gartner recommends that engineering leaders treat "provider volatility as an immediate continuity risk" and design systems for portability, modularity and rapid substitution. It's clear that AI is no longer just a technical issue but a governance challenge that cuts across business strategy, national security and societal values. The outcome of this dispute will likely help shape how those often competing priorities are balanced in the years ahead.
OpenAI CEO Sam Altman faced two violent incidents at his home within 48 hours, including a Molotov cocktail attack by a 20-year-old concerned about AI-driven extinction. The attacks signal an escalation in AI backlash, fueled by job displacement concerns, data center opposition, and growing public distrust. Altman responded with a personal blog post calling for de-escalation of rhetoric while the industry grapples with mounting resistance.
The AI backlash transformed from academic debates to physical violence when a 20-year-old threw a Molotov cocktail at Sam Altman's $27 million San Francisco home early Friday morning. Daniel Moreno-Gama traveled from Spring, Texas, to Pacific Heights, igniting a fire on the exterior gate before being arrested an hour later outside OpenAI's headquarters, where he allegedly attempted to shatter glass doors with a chair while threatening to burn the facility down [4]. Authorities discovered writings warning of humanity's "extinction" at the hands of AI and expressing an urge to commit murder [1]. The attacks on Sam Altman's home continued Sunday morning when two more individuals, aged 23 and 25, were arrested after shooting a gun near his Russian Hill residence [2].

Source: The Verge

The OpenAI CEO broke from his typically polished public persona to share a deeply personal blog post. Altman posted a family photo featuring his husband and young child, writing: "Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me" [4]. He connected the violence to surrounding rhetoric, specifically referencing a New Yorker investigation published days earlier that compiled over 100 interviews revealing widespread distrust among those who had worked with him [1]. "There was an incendiary article about me a few days ago," Altman wrote. "Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives" [2]. He later walked back the rhetoric after criticism on X.

The violent incidents reflect deeper public anxiety, particularly among younger Americans facing economic uncertainty. Less than a fifth of Gen Z feels hopeful about AI technology, while about a third say it makes them angry and nearly half report feeling afraid, according to a recent Gallup poll [4]. Job displacement concerns drive much of this sentiment, as 43% of young graduates are currently underemployed, taking jobs requiring less education than they possess. Online reactions to the attacks skewed notably different across age demographics, with younger users on Instagram and TikTok posting comments like "He's not scared enough" and "FREE THAT MAN HE DID NOTHING WRONG" [4]. Gallup senior education researcher Zach Hrynowski attributed the anger to older Gen Zers being "acutely aware" of technology's ability to transform cultural norms without consent.

Source: SiliconANGLE

Beyond threats to executives, AI backlash manifests in local communities blocking infrastructure projects. At least $18 billion worth of data center projects have been blocked and another $46 billion delayed over the past two years due to local opposition, according to 10a Labs' Data Center Watch [4]. An Indianapolis councilman reported 13 shots fired at his door with a note reading "No Data Centers" after supporting a rezoning petition for a data center developer [1]. In Ypsilanti, Michigan, masked protesters visited a community utility authority board member's home to protest a "high performance computing facility," with one allegedly smashing a printer on the lawn [1]. At least 142 activist groups have formed to resist these projects.

As AI backlash intensifies, companies are launching aggressive efforts to reshape public perception. OpenAI released a 13-page policy paper titled "Industrial Policy for the Intelligence Age" calling for a reimagining of the social contract with "people-first ideas" including a four-day work week and a "public wealth fund" returning profits directly to citizens [3]. The company announced plans for a Washington DC office featuring an "OpenAI workshop" for non-profits and policymakers, while rival Anthropic launched the Anthropic Institute think tank to explore AI's societal impact [3]. Sam Altman acknowledged the perception problem at BlackRock's conference last month: "AI is not very popular in the US right now. Datacenters are getting blamed for electricity price hikes, almost every company that does layoffs is blaming AI whether or not it really is about AI."

Critics characterize OpenAI's policy initiatives as public relations maneuvers that shift responsibility away from the company. "What they've done very cannily here is sort of outline a set of social welfare goals while abdicating any responsibility or any meaningful commitment of resources toward those goals," said Sarah Myers West, co-executive director at AI Now Institute [3]. While publicly advocating for AI policy and regulation, OpenAI spent nearly $3 million on lobbying efforts in 2025, with president Greg Brockman co-founding a pro-AI Super PAC that raised more than $125 million [3]. A hardware leader recently left OpenAI citing concerns about ongoing contracts with the US government and the potential for "surveillance of Americans without judicial oversight and lethal autonomy without human authorization" [5].

Altman used his blog post to reflect on the philosophical tensions surrounding artificial general intelligence (AGI) and superintelligence development. "Once you see AGI you can't unsee it," he wrote, comparing the dynamics to The Lord of the Rings: "It has a real 'ring of power' dynamic to it, and makes people do crazy things. I don't mean that AGI is the ring itself, but instead the totalizing philosophy of 'being the one to control AGI'" [5]. He proposed sharing control through "individual empowerment" and democratic systems, though critics question why an unelected leader would remain a stakeholder if democratization is the goal. White House AI adviser Sriram Krishnan criticized "doomer" rhetoric, writing on X: "I think the doomers need to take a serious look at what they have helped incite... This is the logical outcome of 'If we build it everyone dies'" [1].

Source: diginomica

The incidents expose a fundamental disconnect between industry promises and lived reality. Altman has suggested AI will usher in an era of "universal basic compute" where people barely need to work, yet inflation remains high, consumer confidence is low, and Gen Z faces a "starter economy" without plentiful jobs or affordable homes [4]. AI researcher Alex Hanna identified "a real mismatch between consumer confidence and people's pocketbooks and budgets, and what the technologists and the AI companies say the future is supposed to look like" [4]. Groups advocating against accelerated AI development explicitly denounced the violence, and protests have largely remained nonviolent, including hunger strikes targeting AI companies [1]. Altman concluded his blog post acknowledging valid criticism: "A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology. While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally."

Summarized by Navi