Curated by THEOUTPOST
On Sat, 9 Nov, 12:02 AM UTC
2 Sources
[1]
AI companies get comfortable offering their technology to the military
Social network giant Meta and leading artificial intelligence start-up Anthropic are making it easier for the U.S. military and intelligence agencies to tap their algorithms. Artificial intelligence companies that were previously reticent to allow military use of their technology are shifting policies and striking deals to offer it to spy agencies and the Pentagon.

On Thursday, Anthropic, a leading AI start-up that has raised billions of dollars in funding and competes with ChatGPT developer OpenAI, announced it would sell its AI to U.S. military and intelligence customers through a deal with Amazon's cloud business and government software maker Palantir. Earlier this week, Meta changed its policies to allow military use of Llama, its free, open-source AI technology that competes with offerings from OpenAI and Anthropic. And OpenAI has a deal to sell ChatGPT to the Air Force, after changing its policies earlier this year to allow some military uses of its software.

The deals and policy changes add to a broad shift that has seen tech companies work more closely with the Pentagon, despite protests from some employees over contributing to military applications. Anthropic changed its policies in June to allow some intelligence agency uses of its technology but still bans customers from using it for weapons or domestic surveillance. OpenAI also prohibits its technology from being used to develop weapons. Anthropic and OpenAI spokespeople did not comment beyond referring to the policies.

Arms control advocates have long called for an international ban on using AI in weapons. The U.S. military has a policy that humans must maintain meaningful control over weapons technology but has resisted an outright ban, saying one would allow potential enemies to gain a technological edge. Tech leaders and politicians from both parties have increasingly argued that U.S. tech companies must ramp up the development of military technology to maintain the nation's military and technological competitiveness with China.

In an October blog post, Anthropic CEO Dario Amodei argued that democratic nations should aim to develop the best AI technology to give themselves a military and commercial edge over authoritarian countries, which he said would probably use AI to abuse human rights. "If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies," Amodei wrote. Anthropic's backers include Google and Amazon, which has invested $4 billion in the start-up. Amazon founder Jeff Bezos owns The Post.

The U.S. military uses AI for a broad range of purposes, from predicting when to replace aircraft parts to recognizing potential targets on the battlefield. Palantir, Anthropic's partner in getting its technology to government customers, sells AI technology that can automatically detect potential targets in satellite and aerial imagery. The war in Ukraine has triggered new interest in adapting cheap, commercially available technology, such as small drones and satellite internet dishes, to military use, and a wave of Silicon Valley start-ups has sprung up to try to disrupt the U.S. defense industry and sell new tools to the military.

Military leaders in the United States and around the world expect future battlefield technology to operate with increasing independence from human oversight. Though humans still generally make the final decisions about choosing targets and firing weapons, arms control advocates and AI researchers worry that increased use of AI could lead to poor decision-making or lethal errors and violate international law.

Google, Microsoft and Amazon compete fiercely for military cloud computing contracts, but some tech employees have pushed back on such work. In 2018, after employee protests, Google said it would not renew a Pentagon contract to analyze drone imagery, though the company has since continued to expand its military contracts. This year, workers protested Amazon's and Google's Israeli government contracts, saying they could assist the country's military forces. OpenAI and Anthropic, part of a newer generation of AI developers, have embraced military and intelligence work relatively early in their corporate development. Other companies in the current AI boom, such as data provider Scale AI, have made willingness to work with the military a central focus of their business.
Leading AI companies like Anthropic, Meta, and OpenAI are changing their policies to allow military use of their technologies, marking a significant shift in the tech industry's relationship with defense and intelligence agencies.
In a significant shift within the artificial intelligence industry, leading companies are now opening their doors to military and intelligence collaborations. Anthropic, a major AI startup, has announced a partnership with Amazon's cloud business and Palantir to offer its AI technology to U.S. military and intelligence customers. This move follows similar policy changes by other tech giants, signaling a new era in the relationship between Silicon Valley and the defense sector.
Meta, the social media behemoth, recently modified its policies to permit military use of its open-source AI technology, Llama. OpenAI, known for developing ChatGPT, has also entered into an agreement with the U.S. Air Force, having adjusted its policies earlier this year to allow certain military applications.
These developments mark a departure from the previous reluctance of some tech companies to engage with military contracts. Anthropic, for instance, updated its policies in June to accommodate intelligence agency uses while maintaining restrictions on weapons development and domestic surveillance.
The shift towards military collaboration is driven by various factors:
National competitiveness: Tech leaders and politicians argue that U.S. companies must advance military tech to maintain an edge over countries like China.
Democratic values: Anthropic's CEO, Dario Amodei, contends that democracies should lead in AI development to counter potential abuses by authoritarian regimes.
Economic opportunities: The defense sector represents a significant market for AI technologies, with applications ranging from predictive maintenance to target recognition.
Despite the enthusiasm from some quarters, the trend has not been without controversy:
Arms control advocates continue to push for an international ban on AI in weapons systems.
Some tech employees have protested their companies' involvement in military projects, as seen with Google's image-analysis contract for the Pentagon in 2018.
Concerns persist about the potential for AI to lead to poor decision-making or lethal errors in military contexts.
This shift is reshaping the landscape of both the tech and defense industries:
A new wave of Silicon Valley startups is emerging, aiming to disrupt traditional defense contractors.
The war in Ukraine has sparked interest in adapting commercial technologies like drones and satellite internet for military use.
Major cloud providers like Google, Microsoft, and Amazon are competing intensely for military contracts, despite some internal resistance.
As AI continues to evolve, the debate over its role in military applications is likely to intensify, balancing national security interests against ethical considerations and the potential risks of increasingly autonomous battlefield technology.
OpenAI, the creator of ChatGPT, has entered into a partnership with defense technology company Anduril Industries to develop AI solutions for military applications, raising concerns among employees and industry observers about the ethical implications of AI in warfare.
29 Sources
Meta has announced a significant policy change, allowing US national security agencies and defense contractors to use its open-source AI model, Llama, for military purposes. This decision marks a departure from Meta's previous stance prohibiting such applications.
37 Sources
Google has quietly removed its commitment not to use AI for weapons or surveillance, signaling a shift towards potential military applications amidst growing competition and national security concerns.
40 Sources
Anthropic, Palantir, and AWS collaborate to integrate Claude AI models into US government intelligence and defense operations, raising questions about AI ethics and national security.
15 Sources
The U.S. Department of Defense has awarded a contract to Scale AI for "Thunderforge," a flagship program integrating AI agents into military planning and operations, marking a significant shift towards AI-powered warfare.
6 Sources
© 2025 TheOutpost.AI All rights reserved