2 Sources
[1]
Palantir CEO Uses Slur to Describe People Who Don’t Think the Government Will Take Their Company
The United States is at war with Iran, Anthropic is at war with the Department of Defense, and Palantir CEO Alex Karp is at war with linear thinking despite having lots to say about both of the other battles. During an appearance at a16z's American Dynamism Summit 2026, Karp offered some advice to his industry as the proliferation of artificial intelligence pushes us toward potential inflection points in the private and public sectors: either do what the Trump administration asks of you or be prepared to be nationalized.

"If Silicon Valley believes we are going to take away everyone's white-collar job ... and you're gonna screw the military, if you don't think that's gonna lead to nationalization of our technology, you're retarded," Karp said while speaking at the summit. "You might be particularly retarded, because you have a 160 IQ."

Though it requires some parsing (and kinda ignoring a sidebar in which Karp described white-collar workers as "primarily Democratic shaped people that you and I grew up with, highly educated people who went to elite schools or went to schools that are almost elite for one party"), Palantir's CEO seems to be making the case that if AI firms don't cooperate with the federal government, they risk simply being absorbed by it, because no government would let companies amass the type of power and control that the tech industry is on the precipice of obtaining without requiring reciprocity. Of course, if you view the Trump administration as authoritarian or fascistic, you might call that being a collaborator.

Karp seemed to be at least in part responding to the threats levied by the Department of Defense in its showdown with Anthropic. The Pentagon demanded that the company behind Claude provide unfettered access to its AI model, including in ways that would violate the company's safeguards to prevent participation in mass domestic surveillance and in developing fully autonomous weapons that would operate without human involvement.
In response to Anthropic's unwillingness to drop those red lines, Defense Secretary Pete Hegseth threatened to, among other things, invoke the Defense Production Act to force the company to build a model for the military's desired purposes. That would be a form of nationalization, in which Anthropic would no longer control the weights and levers of its own technology. It's an outcome that Karp and his fellow tech execs would seemingly like to avoid, or at least wanted to avoid in the past. But the way to avoid it, in Karp's configuration, is to go along with what the government asks of you.

"There is a lot of subtlety here behind the curtain, and I've been heavily involved in that subtlety. Where [AI] can be deployed, what can be deployed -- there is a difference between the US military and surveillance," Karp said. In other words, operate as if the government owns your company, because if you don't, it will.

During his rambling explanation of why tech companies should go along with what the Trump administration is asking of them, Karp added, "Despite what everyone thinks, Palantir is the anti-surveillance company." Which sure seems like a strange thing for the guy in charge of the company that built a database of protesters and is helping ICE track down immigrants, but maybe that tells you how seriously you should take him.

The suspicions that one might have of Karp's advice for full compliance with the government's carte blanche demands are likely not lessened by the voices who have joined him. Fellow Trump-aligned tech exec Palmer Luckey, the head of military tech darling Anduril, took to X in the wake of Anthropic's decision to reject the Department of Defense's terms to say that, actually, the government can make companies do whatever it wants, and that's a good thing. He cited President Harry S. Truman's executive order to nationalize the railroads, and in another post said "seemingly innocuous terms" like insisting the government can't use your tech to target civilians are "actually moral minefields that lever differences of cultural tradition into massive control."

Karp put it this way: "The danger for our industry is that you get a famous horseshoe effect, where there is only one thing people agree on, and that's that this is not paying the bills and our industry should be nationalized." While there is certainly something troubling about the Karps and Palmers of the world insisting that the prudent thing for Anthropic and other companies to do is abandon their red lines, it is also a more honest position than that of the more weaselly execs who suck up to whoever is in office. Anduril and Palantir are modern-day arms dealers. Moral concerns don't really fit with the bottom line in that business.
[2]
Palantir CEO Alex Karp Issues Explosive Warning To Silicon Valley - Palantir Technologies (NASDAQ:PLTR)
Palantir Technologies Inc. (NASDAQ:PLTR) CEO Alex Karp did not mince words at a recent industry summit. He told Silicon Valley that trying to gut white-collar employment while also cutting off the military is a fast track to having your technology seized by the government. The remarks were delivered at the a16z American Dynamism Summit.

Karp's Blunt Warning To The Tech Industry

"If Silicon Valley believes we're going to take everyone's white collar jobs AND screw the military...If you don't think that's going to lead to the nationalization of our technology -- you're retarded," Karp said.

Palantir CTO Shyam Sankar has also pushed back on the doom-and-gloom narrative around AI and jobs. Speaking on the All-In podcast in July 2025, Sankar argued AI gives workers "superpowers" rather than pink slips.

The comment lands at a charged moment. The AI industry is navigating mounting tension between its commercial ambitions, its workforce impact, and its complicated relationship with the U.S. defense establishment.

Anthropic CEO Had Already Sounded The Alarm On Jobs

Karp's remarks follow a warning issued in January by Anthropic CEO Dario Amodei. In a roughly 20,000-word essay, Amodei argued that the risks AI poses are not being taken seriously -- and that a labor market "shock" unlike anything seen before is coming.

"New technologies often bring labor market shocks, and in the past, humans have always recovered from them, but I am concerned that this is because these previous shocks affected only a small fraction of the full possible range of human abilities, leaving room for humans to expand to new tasks," Amodei wrote.

Silicon Valley's Pentagon Feud

The Pentagon quickly pivoted to OpenAI. CEO Sam Altman announced on Sunday that OpenAI had shifted to classified Pentagon projects, calling the move urgent.
Alex Karp delivered a stark warning to Silicon Valley at the a16z American Dynamism Summit, arguing that AI companies refusing to work with the military while eliminating white-collar jobs risk having their technology seized by the government. His comments come amid escalating tensions between the Pentagon and Anthropic over AI model access and ethical safeguards.
Alex Karp, CEO of Palantir, delivered a provocative message to the tech industry at the a16z American Dynamism Summit 2026, asserting that AI companies cannot simultaneously eliminate white-collar employment and refuse military cooperation without facing government nationalization of their technology [1]. Speaking bluntly at the event, Karp warned that tech executives who believe they can amass unprecedented power through AI while denying the military access are fundamentally misreading the political landscape [2].
Source: Gizmodo
The Palantir CEO's remarks reflect growing tensions between Silicon Valley and the Trump administration over government demands on AI firms. Karp argued that if the tech industry believes it can take away everyone's white-collar job while refusing to cooperate with defense agencies, companies should prepare for the federal government to simply absorb their technology [1]. His position suggests that compliance with government requests represents the only viable path forward for AI companies seeking to maintain control over their own innovations.

Karp's warning to Silicon Valley appears partly motivated by the ongoing confrontation between the Pentagon and Anthropic. The Department of Defense has demanded unfettered access to the company's Claude AI model, including uses that would violate Anthropic's ethical safeguards against mass domestic surveillance and fully autonomous weapons operating without human involvement [1]. When Anthropic refused to abandon these red lines, Defense Secretary Pete Hegseth threatened to invoke the Defense Production Act to force the company to build a model for military purposes.

This standoff illustrates the exact scenario Karp described: a form of government nationalization in which Anthropic would lose control over its own technology. The Pentagon quickly shifted its focus to OpenAI, whose CEO Sam Altman announced the company had moved to classified Pentagon projects, calling the transition urgent [2]. The contrast between Anthropic's resistance and OpenAI's cooperation highlights the divergent strategies emerging across the AI industry.

Karp elaborated on the nuances of working with government agencies, noting his heavy involvement in determining where and how AI can be deployed, while emphasizing distinctions between US military applications and surveillance [1]. He framed the choice as operating as if the government owns your company, because if you don't comply voluntarily, it will take ownership regardless. This perspective positions Palantir and similar firms as modern-day arms dealers, where moral concerns take a backseat to business realities.
Source: Benzinga
Palmer Luckey, head of military tech firm Anduril, echoed Karp's stance on social media, arguing that the government can compel companies to act and that this represents sound policy [1]. Luckey cited President Harry S. Truman's executive order nationalizing railroads and suggested that seemingly innocuous terms like preventing civilian targeting actually represent "moral minefields" that create massive control issues. Both executives present military cooperation not as an ethical choice but as a pragmatic necessity in an era where government power over transformative technologies appears inevitable.
The threat of gutting white-collar jobs through AI automation adds urgency to Karp's nationalization warning. Anthropic CEO Dario Amodei had already sounded the alarm on jobs in a roughly 20,000-word essay published in January, arguing that AI poses risks not being taken seriously and that a labor market shock unlike anything previously experienced is approaching [2]. "New technologies often bring labor market shocks, and in the past, humans have always recovered from them, but I am concerned that this is because these previous shocks affected only a small fraction of the full possible range of human abilities, leaving room for humans to expand to new tasks," Amodei wrote.

Palantir CTO Shyam Sankar has pushed back on this narrative, arguing on the All-In podcast in July 2025 that AI gives workers "superpowers" rather than eliminating their positions [2]. Yet Karp's political calculus suggests that regardless of whether AI actually destroys employment, the perception that it will, combined with a refusal to support national defense, creates a politically untenable position. He warned of a "horseshoe effect" where the only point of agreement across the political spectrum becomes that the tech industry isn't paying its bills and should face government nationalization [1].

The stakes extend beyond individual companies. As AI firms navigate mounting tension between commercial ambitions, workforce impact, and relationships with the defense establishment, they face fundamental questions about autonomy and accountability. Whether Karp's prediction proves accurate may depend on how aggressively the Trump administration pursues control over AI development and whether companies like Anthropic maintain their ethical safeguards or follow OpenAI's path toward deeper Pentagon integration. What remains clear is that the AI industry's relationship with government power is entering uncharted territory, where surveillance concerns, military applications, and economic disruption converge to reshape the boundaries between private innovation and public control.
Summarized by Navi