Curated by THEOUTPOST
On Thu, 5 Dec, 12:05 AM UTC
29 Sources
[1]
OpenAI employees question the ethics of military deal with startup Anduril
Internal discussions showed some workers expressed discomfort with the company's artificial intelligence technology being used by a weapons maker.

SAN FRANCISCO -- Hours after ChatGPT-maker OpenAI announced a partnership with weapons developer Anduril on Wednesday, some employees raised ethical concerns about the prospect of artificial intelligence technology they helped develop being put to military use. On an internal company discussion forum, employees pushed back on the deal and asked for more transparency from leaders, messages viewed by The Washington Post show.

OpenAI has said its work with Anduril will be limited to using AI to enhance systems the defense company sells the Pentagon to defend U.S. soldiers from drone attacks. Employees at the AI developer asked in internal messages how OpenAI could ensure Anduril systems aided by its technology wouldn't also be directed against human-piloted aircraft, or stop the U.S. military from deploying them in other ways.

One OpenAI worker said the company appeared to be trying to downplay the clear implications of doing business with a weapons manufacturer, the messages showed. Another said they were concerned the deal would hurt OpenAI's reputation, according to the messages. A third said that defensive use cases still represented militarization of AI, and noted that the fictional AI system Skynet, which turns on humanity in the Terminator movies, was also originally designed to defend against aerial attacks on North America.

OpenAI executives quickly acknowledged the concerns, messages seen by The Post show, while also writing that the company's work with Anduril is limited to defensive systems intended to save American lives. Other OpenAI employees in the forum said they supported the deal and were thankful the company allowed internal discussion on the topic.

"We are proud to help keep safe the people who risk their lives to keep our families and our country safe," OpenAI CEO Sam Altman said in a statement.
Anduril CEO Brian Schimpf said in a statement that the companies were "addressing critical capability gaps to protect U.S. and allied forces from emerging aerial threats, ensuring service members have the tools they need to stay safe in an evolving threat landscape."

The debate inside OpenAI comes after the ChatGPT maker and other leading AI developers, including Anthropic and Meta, changed their policies to allow military use of their technology. Existing AI technology still lags far behind Hollywood depictions, but OpenAI's leaders have been vocal about the potential risks of its algorithms being used in unforeseen ways. A company report issued alongside an upgrade to ChatGPT this week warned that making AI more capable has the side effect of "increasing potential risks that stem from heightened intelligence."

The company has invested heavily in safety testing and said that the Anduril project was vetted by its policy team. OpenAI has held feedback sessions with employees on its national security work in the past few months and plans to hold more, said Liz Bourgeois, an OpenAI spokesperson.

In the internal discussions seen by The Post, the executives stated that it was important for OpenAI to provide the best technology available to militaries run by democratically elected governments, and that authoritarian governments would not hold back from using AI for military purposes. Some workers countered that the U.S. has sold weapons to authoritarian allies. By taking on military projects, OpenAI could help the U.S. government understand AI technology better and prepare to defend against its use by potential adversaries, executives also said.

Silicon Valley companies are becoming more comfortable selling to the military, a major shift from 2018, when Google declined to renew a contract to sell image-recognition tech to the Pentagon after employee protests.
Google, Amazon, Microsoft and Oracle are all part of a multibillion-dollar contract to provide cloud services and software to the Pentagon. Google fired a group of employees earlier this year who protested against its work with the Israeli government over concerns about how its military would use the company's technology.

Anduril is part of a wave of companies, including Palantir and start-ups like Shield AI, that has sprung up to arm the U.S. military with AI and other cutting-edge technology. They have challenged conventional defense contractors, selling directly to the military and framing themselves as patriotic supporters of U.S. military dominance. Analysts and investors predict defense tech upstarts may thrive under the incoming Trump administration because it appears willing to disrupt the way the Pentagon does business.

OpenAI and rival elite AI research labs have generally positioned their technology as having the potential to help all people, improving economic productivity and leading to breakthroughs in education and medicine. The dissent inside the company suggests that not all its employees are ready to see their work folded into military projects.

ChatGPT's developer was founded as a nonprofit dedicated to ensuring that AI benefits all of humanity before later starting a commercial division and taking on billions in funding from Microsoft and others. For years the company prohibited its technology from being used by the military. In January, OpenAI revised its policies, saying it would allow some military uses, such as helping veterans find information on health benefits. Use of its technology to develop weapons and harm people or property remains forbidden, the company says. In June, the ChatGPT developer added Paul M. Nakasone, a retired four-star Army general and former director of the National Security Agency, to the nonprofit board of directors that is still pledged to OpenAI's original mission.
The company has also hired staff to work on national security policy.
[2]
OpenAI partners with weapons start-up Anduril on military AI
The defense company will add artificial intelligence technology from the ChatGPT maker to its anti-drone products. SAN FRANCISCO -- ChatGPT creator OpenAI and high-tech military manufacturer Anduril Industries will codevelop new artificial-intelligence technology for the Pentagon, adding to a trend for leading tech companies to take on military projects. The partnership will bring together OpenAI's AI capabilities, among the most advanced in the industry, with Anduril's drones, detection units and military software, the two companies said in a joint statement Wednesday. They declined to share any financial details about the terms of their partnership. The deal aims to improve Anduril technology used to detect and shoot down drones that threaten American forces and those of allies, the statement said -- tools the Pentagon buys from the military start-up to help counter the proliferation of cheap drones on battlefields all over the world. After an Iranian-made drone killed three U.S. service members at a base in Jordan this year, an assessment by the military found that the drone probably had not been detected and that no weapon existed on the base to destroy it. Anduril sells sensor towers, electronic warfare communications-jammers and drones that are meant to shoot down enemy drones or missiles, and offers software called Lattice designed to help soldiers watch over the battlefield and control multiple drones and sensors at once. The OpenAI-Anduril deal is the latest in a string of recent announcements from tech companies about stepping up their work with the military. They come as the Pentagon looks to infuse more Silicon Valley innovation into weaponry to arm U.S. forces and allies with more potent, plentiful and affordable technology. In November, OpenAI competitor Anthropic, developer of the chatbot Claude, said it would partner with Amazon and government software provider Palantir to sell its AI algorithms to the military. 
The same month, Facebook owner Meta changed its policies to allow the military to use its open-source AI technology. OpenAI barred its own products from being used for any military application until earlier this year, when it changed its policies to allow some military uses. Despite the new partnership, the company says its technology may still not be used to develop weapons, or to harm people or property. Liz Bourgeois, a spokesperson for OpenAI, said the partnership complies with the company's rules because it is narrowly focused on systems that defend from unmanned aerial threats. The deal doesn't cover other use cases, Bourgeois said.

Just a few years ago, many Silicon Valley leaders were uninterested in dealing with the military. It was seen as a hidebound and unprofitable customer incompatible with the fast-moving industry, and some tech workers protested defense contracts. Google in 2018 was pressured into declining to renew a deal to sell image-recognition technology to the Pentagon.

A newer cohort of Silicon Valley leaders who take a pragmatic approach to the industry's role in society, along with continued Pentagon efforts to woo tech firms, has recently made military deals more common. The impact of technology such as image-recognition software and cheap drones on battlefields in Ukraine and Gaza, as well as China's rising technological prowess, has inspired some young start-up founders to build companies focused on weapons and defense rather than social media or e-commerce apps like the generation before them. Donald Trump's reelection last month has founders and investors at those companies anticipating a surge of new support from the U.S. government in the form of new contracts and loosened regulation. The new wave of military-friendly techies frame themselves as patriots trying to rejuvenate American manufacturing and help the country cement its superpower status.
"The accelerating race between the United States and China in advancing AI makes this a pivotal moment," OpenAI and Anduril said in a joint statement on their new partnership. "If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades." Not everyone in the tech industry is ready to embrace military work. A group of Google employees was fired this year after protesting the company's contract to sell software to the Israeli government. Prominent AI researchers have joined arms-control advocates to push for a preemptive ban on AI-enabled weapons, out of concern machines will eventually become able to independently decide to kill humans.
[3]
OpenAI partners with defense company Anduril
OpenAI and Anduril on Wednesday announced a partnership allowing the defense tech company to deploy advanced artificial intelligence systems for "national security missions." It's part of a broader, and controversial, trend of AI companies not only walking back bans on military use of their products, but also entering into partnerships with defense industry giants and the U.S. Department of Defense.

Last month, Anthropic, the Amazon-backed AI startup founded by ex-OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to "provide U.S. intelligence and defense agencies access to [Anthropic's] Claude 3 and 3.5 family of models on AWS." This fall, Palantir signed a new five-year, up to $100 million contract to expand U.S. military access to its Maven AI warfare program.

The OpenAI-Anduril partnership announced Wednesday will "focus on improving the nation's counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time," according to a release, which added that "Anduril and OpenAI will explore how leading edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness."

Anduril, co-founded by Palmer Luckey, did not answer a question about whether reducing the onus on human operators will translate to fewer humans in the loop on high-stakes warfare decisions. Luckey founded Oculus VR, which he sold to Facebook in 2014.

OpenAI said it was working with Anduril to help human operators make decisions "to protect U.S. military personnel on the ground from unmanned drone attacks." The company said it stands by the policy in its mission statement of prohibiting use of its AI systems to harm others.
The news comes after Microsoft-backed OpenAI in January quietly removed a ban on the military use of ChatGPT and its other AI tools, just as it had begun to work with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools. Until early January, OpenAI's policies page specified that the company did not allow the usage of its models for "activity that has high risk of physical harm" such as weapons development or military and warfare. In mid-January, OpenAI removed the specific reference to the military, although its policy still states that users should not "use our service to harm yourself or others," including to "develop or use weapons." The news comes after years of controversy about tech companies developing technology for military use, highlighted by the public concerns of tech workers -- especially those working on AI. Employees at virtually every tech giant involved with military contracts have voiced concerns after thousands of Google employees protested Project Maven, a Pentagon project that would use Google AI to analyze drone surveillance footage. Microsoft employees protested a $480 million army contract that would provide soldiers with augmented-reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.
[4]
OpenAI Strikes Deal With Military Contractor to Provide AI for Attack Drones
OpenAI has hopped into bed with a defense contractor that makes swarming killer drones. What could possibly go wrong?

In a statement announcing the partnership with that contractor, Anduril -- which was cofounded by Oculus VR's Palmer Luckey and takes its name from the glowing sword given to Aragorn by Elves in "The Lord of the Rings" -- OpenAI CEO Sam Altman waxed poetic about how drones are important for democracy.

"OpenAI builds AI to benefit as many people as possible, and supports US-led efforts to ensure the technology upholds democratic values," Altman said. "Our partnership with Anduril will help ensure OpenAI technology protects US military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free."

As Anduril cofounder and CEO Brian Schimpf said in the statement, the ChatGPT maker's AI models will help the firm improve its air defense systems, essentially making the Ukraine-proven battle drones smarter and faster. "Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations," Schimpf said.

In an interview with Wired, a former OpenAI employee who spoke anonymously to protect their identity said the company's AI models would help Anduril "assess drone threats more quickly and accurately, giving operators the information they need to make better decisions while staying out of harm's way."

Before this year, OpenAI prohibited any use of its models for "military or warfare" or "weapons development." After The Intercept reported in January that the policy had been lifted, however, the company announced at Davos that it would be providing the Pentagon with cybersecurity tools -- a mask-off moment that, according to Wired's insiders, turned off employees at the firm but never resulted in outright protest.
Though an OpenAI spokesperson insisted in a statement to the MIT Technology Review that the partnership "is consistent with our policies and does not involve leveraging our technology to develop systems designed to harm others," the firm will, once the technologies are integrated, be fully involved in the business of warfare. All told, it seems very much like OpenAI is helping a company that sells attack drones operate better -- and that looks like a glaring loophole in its policy against using its tech to "harm yourself or others."
[5]
OpenAI partners with Palmer Luckey's defense firm, paving the way for AI-driven military technologies
A hot potato: The proliferation of AI brings plenty of justifiable concerns, especially as the technology increasingly makes its way into the military. In what sounds worryingly like a cyberpunk dystopia, ChatGPT maker OpenAI has just partnered with a major defense contractor, a deal that could lead to anti-aerial defenses that use ChatGPT-like AI models to help decide if an enemy should be killed.

On Wednesday, Oculus founder Palmer Luckey's Anduril Industries defense technology company announced a "strategic partnership to develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions." The companies will initially be focused on developing anti-drone technologies. These defenses will mostly be used against unmanned drones and other aerial threats.

The partnership will focus on improving the United States' counter-unmanned aircraft systems (CUAS) and their ability to detect, assess, and respond to potentially lethal aerial threats in real-time. Using AI models to identify and destroy unmanned drones might not sound like a bad thing, but the statement also mentions the threats from legacy manned platforms, i.e., aircraft with human crews. The AI models will rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness, according to the companies.

OpenAI revealed it was collaborating with the United States Defense Department on cybersecurity projects in January, having modified its policies to allow for certain military applications of its technology. Sam Altman's firm continued to prohibit its technology from being used to develop weapons, though. But it appears that its strict stance against what is essentially ChatGPT-powered weaponry is wavering. Other AI companies are rushing into lucrative defense sector partnerships, including Anthropic, which has partnered with Palantir.
Google DeepMind also has contracts with military customers, something that 200 of its employees strongly opposed in a letter sent to Google's higher-ups earlier this year.

There have been calls to ban autonomous/AI weapons for years now. In 2015, Elon Musk, Stephen Hawking, and Steve Wozniak were just three of 1,000 high-profile artificial intelligence experts who signed an open letter calling for a ban on "offensive autonomous weapons." AI has made huge advancements since then, appearing in more weapons and military vehicles, including AI-piloted jet fighters. The technology still makes mistakes, of course, which is a concern when it's controlling lethal weapons.

The biggest fear has long been that AI could be used in nuclear missile systems. In May, the US said AI would never be given control of its nuclear weapons and called on Russia and China to make the same pledge. But the Pentagon said last month that it wants AI to enhance nuclear command and control decision-making capabilities.
[6]
OpenAI signs deal with Palmer Luckey's Anduril to develop military AI
The companies will work on tech that defends against drone attacks. OpenAI has partnered with defense startup Anduril Industries to develop AI for the Pentagon. The companies said on Wednesday that they'll combine OpenAI's models, including GPT-4o and OpenAI o1, with Anduril's systems and software to improve the US military's defenses against unpiloted aerial attacks.

The deal comes less than a year after OpenAI softened its stance on using its models for military purposes. Although the ChatGPT maker's policies still prohibit its models from developing or using weapons, it deleted a line in January that explicitly banned integrating its tech into "military and warfare" use. The company said at the time it was already working with DARPA on cybersecurity tools. In October, the company hired a former Palantir security officer and was reportedly pitching its products to the US military and national security establishment.

An OpenAI spokesperson told The Washington Post that the deal complies with the company's rules because it focuses on systems that defend against pilotless aerial threats. The company said the partnership doesn't cover other uses.

According to The Washington Post, the OpenAI-Anduril partnership will aim to improve the latter's tech for detecting and shooting down drones threatening the US military and its allies. The Pentagon already buys Anduril's Roadrunner drone interceptor to help counter the rise of smaller drones on the world's battlefields. The startup sells sentry towers, comms jammers, military drones and an autonomous submarine, among other products.

The companies framed the partnership as a way to defend US military personnel and counter China's advancing AI. "Our partnership with OpenAI will allow us to utilize their world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world," Anduril CEO Brian Schimpf wrote in a statement.
"Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations." Anduril was co-founded by Oculus Rift inventor (and Oculus VR co-founder) Palmer Luckey. That headset laid the foundation for the Meta Quest lineup, which today holds the lion's share of the VR and AR market. Luckey left Meta (then Facebook) in 2017, months after news broke that he donated $10,000 to a group aiming to post 4chan-style anti-Hillary Clinton memes on roadside billboards. "OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values," OpenAI CEO Sam Altman wrote in a statement. "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free."
[7]
OpenAI to partner with military defense tech company
OpenAI and military defense technology company Anduril Industries said Wednesday that they would work together to use artificial intelligence for "national security missions." The ChatGPT-maker and Anduril will focus on improving defenses against drone attacks, the companies said in a joint release. The partnership comes nearly a year after OpenAI did away with wording in its policies that banned use of its technology for military or warfare purposes. Founded in 2017, Anduril is a technology company that builds command and control systems and a variety of drones, counting the United States, Australia and the United Kingdom among its customers, according to its website. OpenAI said in October that it was collaborating with the US military's research arm DARPA on cyber defenses for critical networks. "AI is a transformational technology that can be used to strengthen democratic values or to undermine them," OpenAI said in a post at the time. "With the proper safeguards, AI can help protect people, deter adversaries, and even prevent future conflict." The companies said the deal would help the United States maintain an edge over China, a goal that OpenAI chief Sam Altman has spoken of in the past. "Our partnership with Anduril will help ensure OpenAI technology protects US military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free," Altman said in Wednesday's release. Anduril was co-founded by Palmer Luckey, after Facebook bought his previous company Oculus VR in a $2 billion deal. The new partnership will bring together OpenAI's advanced AI models with Anduril systems and software, according to the companies. "Our partnership with OpenAI will allow us to utilize their world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world," Anduril co-founder and chief executive Brian Schimpf said in the release. 
Schimpf said the collaboration would allow "military and intelligence operators to make faster, more accurate decisions in high-pressure situations."
[8]
OpenAI Is Working With Anduril to Supply the US Military With AI
The ChatGPT maker is the latest AI giant to reveal it's working with the defense industry, following similar announcements by Meta and Anthropic. OpenAI, maker of ChatGPT and one of the most prominent artificial intelligence companies in the world, said today that it has entered a partnership with Anduril, a defense startup that makes missiles, drones, and software for the United States military. It marks the latest in a series of similar announcements made recently by major tech companies in Silicon Valley, which has warmed to forming closer ties with the defense industry. "OpenAI builds AI to benefit as many people as possible, and supports US-led efforts to ensure the technology upholds democratic values," Sam Altman, OpenAI's CEO, said in a statement Wednesday. OpenAI's AI models will be used to improve systems used for air defense, Brian Schimpf, co-founder and CEO of Anduril, said in the statement. "Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations," he said. OpenAI's technology will be used to "assess drone threats more quickly and accurately, giving operators the information they need to make better decisions while staying out of harm's way," says a former OpenAI employee who left the company earlier this year and spoke on the condition of anonymity to protect their professional relationships. OpenAI altered its policy on the use of its AI for military applications earlier this year. A source who worked at the company at the time says some staff were unhappy with the change, but there were no open protests. The US military already uses some OpenAI technology, according to reporting by The Intercept. Anduril is developing an advanced air defense system featuring a swarm of small, autonomous aircraft that work together on missions. 
These aircraft are controlled through an interface powered by a large language model, which interprets natural language commands and translates them into instructions that both human pilots and the drones can understand and execute. Until now, Anduril has been using open-source language models for testing purposes. Anduril is not currently known to be using advanced AI to control its autonomous systems or to allow them to make their own decisions. Such a move would be more risky, particularly given the unpredictability of today's models. A few years ago, many AI researchers in Silicon Valley were firmly opposed to working with the military. In 2018, thousands of Google employees staged protests over the company supplying AI to the US Department of Defense through what was then known within the Pentagon as Project Maven. Google later backed out of the project.
[10]
OpenAI announces deal with defense startup to create anti-drone tech
OpenAI takes the next step in an industry-wide military pivot.

OpenAI has entered into its first major defense partnership, a deal that could see the AI giant making its way into the Pentagon. The joint venture was recently announced by billion-dollar Anduril Industries, a defense startup co-founded by Oculus VR's Palmer Luckey that sells sentry towers, communications jammers, military drones, and autonomous submarines. The "strategic partnership" will incorporate OpenAI's AI models into Anduril systems to "rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness."

Anduril already supplies anti-drone tech to the U.S. government. It was recently chosen to develop and test unmanned fighter jets and awarded a $100 million contract with the Pentagon's Chief Digital and AI Office. OpenAI clarified to the Washington Post that the partnership will only cover systems that "defend against pilotless aerial threats" (read: detect and shoot down drones), notably avoiding the explicit association of its technology with human-casualty military applications.

Both OpenAI and Anduril say the partnership will keep the U.S. on par with China's AI advancements -- a repeated goal that's echoed in the U.S. government's "Manhattan Project"-style investments in AI and "government efficiency."

"OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values," wrote OpenAI CEO Sam Altman. "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free."

In January, OpenAI quietly removed policy language that banned applications of its technologies that pose high risk of physical harm, including "military and warfare."
An OpenAI spokesperson told Mashable at the time: "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies." Over the last year, the company has reportedly been pitching its services in various capacities to the U.S. military and national security offices, backed by a former security officer at software company and government contractor Palantir. And OpenAI isn't the only AI innovator pivoting to military applications. Tech companies Anthropic, makers of Claude, and Palantir recently announced a partnership with Amazon Web Services to sell Anthropic's AI models to defense and intelligence agencies, advertised as "decision advantage" tools for "classified environments." Recent rumors suggest President-elect Donald Trump is eyeing Palantir chief technology officer Shyam Sankar to take over the lead engineering and research spot in the Pentagon. Sankar has previously been critical of the Department of Defense's technology acquisition process, arguing that the government should rely less on major defense contractors and purchase more "commercially available technology."
[11]
OpenAI and Anduril team up to build AI-powered drone defense systems
As the AI industry grows in size and influence, the companies involved have begun making stark choices about where they land on issues of life and death. For example, can their AI models be used to guide weapons or make targeting decisions? Different companies have answered this question in different ways, but for ChatGPT maker OpenAI, what started as a hard line against weapons development and military applications has slipped away over time. On Wednesday, defense-tech company Anduril Industries -- started by Oculus founder Palmer Luckey in 2017 -- announced a partnership with OpenAI to develop AI models (similar to the GPT-4o and o1 models that power ChatGPT) to help US and allied forces identify and defend against aerial attacks. The companies say their AI models will process data to reduce the workload on humans. "As part of the new initiative, Anduril and OpenAI will explore how leading edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness," said Anduril in a statement. The partnership comes at a time when AI-powered systems have become a defining feature of modern warfare, particularly in Ukraine. According to their announcement, OpenAI and Anduril will develop defenses primarily against unmanned drones using counter-unmanned aircraft systems (CUAS), but the statement also mentions threats from "legacy manned platforms" -- in other words, crewed aircraft. Anduril currently manufactures several products that could be used to kill people: AI-powered assassin drones (see video) and rocket motors for missiles. Anduril says its systems require human operators to make lethal decisions, but the company designs its products so their autonomous capabilities can be upgraded over time. For now, OpenAI's models may help operators make sense of large amounts of incoming data to support faster human decision-making in high-pressure situations.
[12]
OpenAI to partner with military defense tech company
SAN FRANCISCO (AFP) - OpenAI and military defense technology company Anduril Industries said Wednesday that they would work together to use artificial intelligence for "national security missions." The ChatGPT-maker and Anduril will focus on improving defenses against drone attacks, the companies said in a joint release. The partnership comes nearly a year after OpenAI did away with wording in its policies that banned use of its technology for military or warfare purposes. Founded in 2017, Anduril is a technology company that builds command and control systems and a variety of drones, counting the United States, Australia and the United Kingdom among its customers, according to its website. OpenAI said in October that it was collaborating with the US military's research arm DARPA on cyber defenses for critical networks. "AI is a transformational technology that can be used to strengthen democratic values or to undermine them," OpenAI said in a post at the time. "With the proper safeguards, AI can help protect people, deter adversaries, and even prevent future conflict." The companies said the deal would help the United States maintain an edge over China, a goal that OpenAI chief Sam Altman has spoken of in the past. "Our partnership with Anduril will help ensure OpenAI technology protects US military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free," Altman said in Wednesday's release. Anduril was co-founded by Palmer Luckey, after Facebook bought his previous company Oculus VR in a $2 billion deal. The new partnership will bring together OpenAI's advanced AI models with Anduril systems and software, according to the companies. "Our partnership with OpenAI will allow us to utilize their world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world," Anduril co-founder and chief executive Brian Schimpf said in the release. 
Schimpf said the collaboration would allow "military and intelligence operators to make faster, more accurate decisions in high-pressure situations."
[13]
OpenAI is deepening its ties to the defense industry
The push for artificial intelligence in defense weapons is getting a boost from OpenAI's new partnership with Anduril Industries. The defense technology company announced a strategic partnership with the ChatGPT maker on Wednesday to improve the ability of its counter-unmanned aircraft systems (CUAS) "to detect, assess and respond to potentially lethal aerial threats in real-time." Both companies will look into how OpenAI's leading-edge AI models, such as GPT-4o and OpenAI o1, can be used "to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness." Through the partnership, the defense company said OpenAI's models will be trained on Anduril's anti-drone systems' threats and operations data. "OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values," OpenAI chief executive Sam Altman said in a statement. "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free." Until January, OpenAI's usage policies said it did not allow use of its models for "[a]ctivity that has high risk of physical harm" such as "military and warfare." The updated usage policies do not mention military or warfare, but still say users should not use its "service to harm yourself or others," including by developing or using weapons. In November, FedScoop reported that OpenAI and the Air Force Research Laboratory were partnering to provide limited ChatGPT Enterprise for its research and development work. The same month, OpenAI rival Anthropic and data analytics software platform Palantir (PLTR) announced a partnership with Amazon Web Services (AMZN) to offer the AI startup's Claude AI models to U.S. intelligence and defense agencies.
[14]
OpenAI's surprising move into defense tech: Here's what it means
OpenAI has announced a partnership with defense contractor Anduril to enhance the military's counter-unmanned aircraft systems. This collaboration signals a significant shift from OpenAI's previous stance on military involvement, amid growing concerns over the use of AI in warfare. The alliance aims to leverage advanced AI to improve real-time threat detection and response capabilities for U.S. national security missions. The partnership's focus lies in developing AI models that can quickly synthesize time-sensitive data to assist human operators in assessing aerial threats. According to OpenAI, this initiative is designed to protect U.S. military personnel from drone attacks while maintaining a commitment to its mission against causing harm. OpenAI CEO Sam Altman stated, "OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values." This partnership marks OpenAI's first collaboration with a defense contractor, following a decision earlier this year to lift its ban on military use of its tools. Previously, OpenAI's policies explicitly prohibited using its technology for military applications, including weapons development. The company has since revised its guidelines, removing terminology tied to military use, though it continues to assert that its AI systems must not be used to cause harm. Anduril, co-founded by Palmer Luckey, is reportedly valued at approximately $14 billion and has secured a $200 million contract with the Marine Corps specifically for counter-drone systems. This move by OpenAI aligns with a larger trend in the tech industry, as several AI firms have sought partnerships with defense contractors. Notably, Amazon-backed Anthropic formed a similar alliance with Palantir to support U.S. intelligence and defense agencies.
Critics of this tech-military collaboration express concern regarding ethical implications, as numerous tech employees have protested against military contracts in the past. Workers from companies like Google and Microsoft have voiced strong objections to projects that utilize technology for military purposes, sparking public debate on the role of technology in warfare. Despite the ethical concerns, the partnership between OpenAI and Anduril aims to bolster the military's capabilities, focusing on improving situational awareness during potential threats. The collaboration seeks to reduce the burden on human operators, allowing them to make informed decisions more swiftly. The specifics of how this partnership will unfold are still being developed, and it remains to be seen how both parties plan to navigate the inherent challenges. OpenAI's shift towards collaboration with defense contractors raises important questions regarding accountability in the use of AI technologies. As noted, many employees within the tech industry have pushed back against engagements with military contracts, emphasizing the need for transparency with regard to how AI technologies might be applied on the battlefield. In January, OpenAI quietly amended its usage policies, which had previously prohibited military applications of its AI models. This decision coincided with OpenAI's growing involvement in projects with the U.S. Department of Defense, aiming to deploy AI systems for purposes including cybersecurity. The ongoing evolution of these partnerships reflects a broader trend among tech companies re-assessing their positions on military contracts. Recognizing the importance of responsible AI usage, OpenAI has reiterated its commitment to ensuring that its technologies are used ethically, specifically stating that the partnership is designed to protect military personnel and enhance national security measures.
However, skepticism persists regarding the extent to which AI-powered systems could lead to reduced human oversight in critical decision-making processes.
[15]
OpenAI's new defense contract completes its military pivot
OpenAI once prohibited anyone from using its models for "weapons development" or "military and warfare." That changed on January 10, when The Intercept reported that OpenAI had softened those restrictions, forbidding anyone from using the technology to "harm yourself or others" by developing or using weapons, injuring others, or destroying property. OpenAI said soon after that it would work with the Pentagon on cybersecurity software, but not on weapons. Then, in a blog post published in October, the company shared that it is working in the national security space, arguing that in the right hands, AI could "help protect people, deter adversaries, and even prevent future conflict." Today, OpenAI is announcing that its technology will be deployed directly on the battlefield. The company says it will partner with the defense-tech company Anduril, a maker of AI-powered drones, radar systems, and missiles, to help US and allied forces defend against drone attacks. OpenAI will help build AI models that "rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness" to take down enemy drones, according to the announcement. Specifics have not been released, but the program will be narrowly focused on defending US personnel and facilities from unmanned aerial threats, according to Liz Bourgeois, an OpenAI spokesperson. "This partnership is consistent with our policies and does not involve leveraging our technology to develop systems designed to harm others," she said. An Anduril spokesperson did not provide specifics on the bases around the world where the models will be deployed but said the technology will help spot and track drones and reduce the time service members spend on dull tasks. OpenAI's policies banning military use of its technology unraveled in less than a year.
When the company softened its once-clear rule earlier this year, it was to allow for working with the military in limited contexts, like cybersecurity, suicide prevention, and disaster relief, according to an OpenAI spokesperson. Now, OpenAI is openly embracing its work on national security. If working with militaries or defense-tech companies can help ensure that democratic countries dominate the AI race, the company has written, then doing so will not contradict OpenAI's mission of ensuring that AI's benefits are widely shared. In fact, it argues, it will help serve that mission. But make no mistake: This is a big shift from its position just a year ago. In understanding how rapidly this pivot unfolded, it's worth noting that while the company wavered in its approach to the national security space, others in tech were racing toward it. Venture capital firms more than doubled their investment in defense tech in 2021, to $40 billion, after firms like Anduril and Palantir proved that with some persuasion (and litigation), the Pentagon would pay handsomely for new technologies. Employee opposition to working in warfare (most palpable during walkouts at Google in 2018) softened for some when Russia invaded Ukraine in 2022 (several executives in defense tech told me that the "unambiguity" of that war has helped them attract both investment and talent).
[16]
OpenAI Partners With Anduril, the Defense Company Behind AI Towers on the US-Mexico Border
Anduril Industries owns a fleet of controversial sentry towers along the U.S.-Mexico border. ChatGPT-maker OpenAI has partnered with defense startup Anduril Industries to develop AI-powered technology for military applications, the companies announced on Wednesday, Dec. 4. Anduril Industries is behind the controversial sentry towers along the U.S.-Mexico border, raising possible ethical questions about OpenAI's involvement in potentially contentious policies like immigration enforcement.
OpenAI and Anduril
The two companies have announced an alliance to help the military counter a "rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure, and take lives." OpenAI and Anduril will focus on improving the country's counter-unmanned aircraft systems (CUAS) and their ability to "detect, assess and respond to potentially lethal aerial threats in real-time." The companies highlighted the partnership as a "pivotal moment" as the "accelerating race between the U.S. and China to lead the world in advancing AI" continues. Brian Schimpf, co-founder and CEO of Anduril Industries, said: "Our partnership with OpenAI will allow us to utilize their world-class expertise in AI to address urgent Air Defense capability gaps across the world."
AI and the Military
The partnership signals a U-turn for OpenAI, which once prohibited its technology from being used in any military applications. However, the ChatGPT-maker removed this from its company guidelines in January, opening the stage for military collaborations. This partnership comes as more and more AI startups consider working with national defense despite it remaining a controversial switch for many in the tech world. In April, Google reportedly fired 50 employees for protesting the company's involvement in Project Nimbus, a $1.2 billion Israeli government cloud contract working with Amazon.
Meanwhile, AI startup Anthropic also announced it was working with Amazon and Palantir to provide the Pentagon with AI algorithms.
Anduril Towers on the US-Mexico Border
Anduril Industries, co-founded by Donald Trump advocate Palmer Luckey, is responsible for the controversial fleet of sentry towers along the U.S.-Mexico border. These towers use cameras and AI to detect and differentiate movement from humans and animals up to 2.8km away. However, controversy has arisen around these systems over potential privacy violations, data collection, and AI bias. Although OpenAI has not stated its technology will be used in these towers, the partnership could face future backlash for enabling what some see as ethically questionable uses of AI. The ChatGPT-maker previously told The Wall Street Journal that technology developed alongside Anduril will only be used in defensive applications. CEO Sam Altman also said the company wants to "ensure the technology upholds democratic values."
[17]
OpenAI Teams with Anduril, Signals Shift to Military AI Use for US
OpenAI has partnered with Anduril, a defence technology company, to develop Artificial Intelligence (AI) solutions for US national security missions, specifically focusing on counter-unmanned aircraft systems (CUAS). CUAS refers to technology that can detect and destroy unmanned aerial vehicles like drones. The collaboration plans to combine OpenAI's advanced AI models with Anduril's defence systems and Lattice software platform, improving the US' ability to respond to aerial threats in real-time and reducing the burden on human operators. The press release by Anduril specifically mentioned an ongoing race between the USA and China to advance AI. "If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades," said the release. OpenAI's CEO Sam Altman also stated, "OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values. Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel and will help the national security community understand and responsibly use this technology to keep our citizens safe and free." This news perhaps marks the highest point of OpenAI's growing closeness with the United States' military establishment. In January this year, the tech firm quietly changed its policy to allow for military usage of its AI models. The policy, which used to prohibit "activity that has high risk of physical harm" including "military and warfare," was changed to "don't use our service to harm yourself or others." While OpenAI still prohibited the development and use of weapons, as well as surveillance applications and spyware, dropping the prohibition on military use was a telling gesture. The company stated to TechCrunch that it had changed the policy to allow for its collaborations on cybersecurity with DARPA (Defense Advanced Research Projects Agency).
Since then, the closeness between the Microsoft-backed AI startup and the Pentagon has only increased. In March, news broke that the US Army was experimenting with OpenAI's GPT-4 Turbo and GPT-4 Vision models to simulate war games. Microsoft also reportedly provided the US government with a special AI model to analyse classified and top-secret information. Earlier in June, OpenAI brought on retired U.S. Army General Paul M. Nakasone to serve on the Safety and Security Committee of the company's board of directors. Nakasone served as the former head of the National Security Agency (NSA) and took charge of setting up the US Cyber Command. At the same time, OpenAI had shut down attempts from other countries to use the company's products for their geopolitical ends. In August this year, it claimed to have shut down a covert Iranian Influence Operation (IO) that posted AI-generated political commentary on the US Presidential election. It had also previously suspended the accounts of five state-affiliated threat actors in China, Russia, Iran and North Korea who attempted to use AI for malicious activities. OpenAI also blocked Chinese users from accessing its API, without offering a clear reason. This occurred as the US was mulling a bill calling for imposing export control on US AI systems to prevent access to "foreign adversaries." The most recent partnership with Anduril is perhaps the most overt example of OpenAI aiding the US military. Previous collaborations were either never explicitly announced to the public or limited to cybersecurity applications. CEO Sam Altman also openly expressed his support for "U.S.-led efforts" and their intention to keep "US military personnel" safe.
[18]
OpenAI Partners With Anduril to Build AI for Anti-Drone Systems
OpenAI is partnering with Anduril Industries Inc. to incorporate its artificial intelligence technology into the weapons maker's anti-drone systems, marking the AI developer's most significant push yet into the defense sector. Anduril will lean on OpenAI's technology to better detect and respond to unmanned "aerial threats," largely drones, which have become a central part of modern warfare, the two companies said Wednesday. OpenAI will also use Anduril data to train its software for these defense systems. In recent months, OpenAI has been seeking to expand its partnerships with the US government around national security, saying that it wants to support the public sector in adopting AI that upholds democratic values. OpenAI partnered with the US Air Force Research Laboratory to adopt its ChatGPT enterprise tools for administrative uses. The company also hired a former top Pentagon official to lead its national security policy team and added the former head of the National Security Agency to its board. "Our partnership with Anduril will help ensure OpenAI technology protects US military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free," Sam Altman, OpenAI's chief executive officer, said in a statement. The arrangement comes at what the two companies describe as a "pivotal moment" in the accelerating race between the US and China to dominate AI for military purposes. "If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades," the companies said in a joint statement. Anduril and OpenAI said the partnership would focus on developing and "responsibly" deploying AI for national security missions. In the statement, Anduril co-founder and CEO Brian Schimpf said the partnership would help address "urgent" gaps in air defense capabilities around the world. 
Defense contracts have historically been controversial with employees at consumer tech companies, including sparking significant protests inside Google in 2018. But the AI industry has recently shown more openness to such deals. In November, OpenAI rival Anthropic announced a partnership with Palantir Technologies Inc. and Amazon.com Inc. to provide US intelligence and defense agencies access to its technologies. Meta Platforms Inc. also opened up its AI models to US defense agencies and contractors last month. OpenAI's partnership with Anduril is specifically for using its technology in a defensive capacity against unmanned drones, a spokesperson said. With Anduril, OpenAI is betting on a leader in Silicon Valley's defense industry. Last valued at $14 billion, the startup makes reusable rockets, drones and submarines and has multiple deals with the Defense Department in the US and allied countries. In September, Anduril announced it would expand its efforts into space and last month won a $99.7 million contract with the US Space Command.
[19]
OpenAI Continues Its Mission of 'Ethical' AI by Partnering With a Killer Robot Company
It's a weird look for a company that has claimed that it wants to make AI "safe for everyone." OpenAI has claimed that it's "leading the way" when it comes to the safe, ethical deployment of artificial intelligence. Weirdly, it has also decided to partner with a company that is actively working to develop killer robots for the U.S. military. This week, OpenAI announced a new partnership with Anduril Industries, a defense contractor co-founded by Oculus founder Palmer Luckey. Luckey's little company has managed, in the space of seven years, to build itself into a pivotal player in the defense community. It has done that by churning out drones for the U.S. military, some of which are designed to kill people. The new partnership between the drone builder and Silicon Valley's hottest AI vendor will see the two companies come together to "develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions," a press release associated with the deal states. What that means, practically speaking, is the integration of OpenAI's software into Anduril's platform, Lattice. Lattice is a flexible, AI-fueled software program, designed to serve a variety of defense needs. It appears that OpenAI's high-powered algorithms will now be used to turbocharge Anduril's product. “OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values," said OpenAI's CEO, Sam Altman, in a statement shared Wednesday. "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free." In a statement, Anduril's CEO and co-founder, Brian Schimpf, said the partnership would allow his company to utilize OpenAI's "world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world." 
Although most of Anduril's products represent defensive technologies designed to protect U.S. service members and vehicles, it also sells what has been dubbed a "Kamikaze" drone. That drone, the Bolt-M, is powered by the company's artificial intelligence software and comes equipped with "lethal precision firepower," which can deliver "devastating effects against static or moving ground-based targets," the company's website brags. LiveScience notes that the Bolt-M is designed to fly into structures and explode. Anduril is also said to be developing "drone swarms" that can augment U.S. Navy missions. This is a weird, if not predictable, development for OpenAI, which has claimed it wants to steward AI's development in a healthy direction but has, since its ascent to the heights of the tech industry, increasingly dispensed with the ethical guardrails that defined its early development.
[20]
Defense firm Anduril partners with OpenAI to use AI in national security missions
(Reuters) - Defense technology company Anduril Industries and ChatGPT-maker OpenAI on Wednesday announced a partnership to develop and deploy advanced artificial intelligence solutions for national security missions. The companies said the partnership will focus on improving the United States' counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real time. The CUAS is designed to help defend against drone strikes by detecting and intercepting them while they are airborne. The AI models will be trained on Anduril's library of data on CUAS threats and operations. "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel," said Sam Altman, OpenAI's CEO. The move comes amid a race between the U.S., its allies and China to develop AI-controlled weapons that will operate autonomously, including warships and fighter jets. Anduril, founded in 2017, develops and fields integrated autonomous solutions across a wide variety of sensors, including drones, effectors and assets, and has experience automating the operations of robotic systems deployed in tactical environments. (Reporting by Aishwarya Jain in Bengaluru; Editing by Krishna Chandra Eluri)
[21]
OpenAI is partnering with defense tech company Anduril
OpenAI, the AI model maker that used to describe its mission as saving the world, is partnering with Anduril, a military contractor, the two companies announced Wednesday. As part of the partnership, OpenAI will integrate its software into Anduril's counterdrone systems, which detect and take down drones. It's OpenAI's first partnership with a defense contractor -- and a significant reversal of its earlier stance towards the military. OpenAI's terms of service once banned "military and warfare" use of its technology, but it softened its position on military use earlier this year, changing its terms of service in January to remove the proscription.
[22]
OpenAI inks deal to upgrade Anduril's anti-drone tech | TechCrunch
OpenAI plans to team up with Anduril, the defense startup, to supply its AI tech to systems the U.S. military uses to counter drone attacks. The Wall Street Journal reports that Anduril will incorporate OpenAI tech into software that assesses and tracks unmanned aircraft. Anduril tells the publication that OpenAI's models could improve the accuracy and speed of responding to drones, reducing collateral damage. OpenAI's technology won't be used with Anduril's other weapons systems as a part of the deal, the companies said. As The WSJ notes, the OpenAI-Anduril tie-up is just the latest example of a major tech company embracing rather than shunning the defense sector. OpenAI previously barred its AI from being used in warfare, but revised that policy in January, and shortly thereafter inked deals with the Pentagon for cybersecurity work and other projects. OpenAI has also sought to bring defense leaders into its executive ranks, including former Defense Department official Sasha Baker and former NSA chief Paul Nakasone, who sits on OpenAI's board.
[24]
OpenAI and Anduril Partner to Develop AI Solutions for US National Security
The partnership aims to maintain the US military's technological edge amid global competition. Anduril Industries, a defense technology firm, has announced a strategic partnership with OpenAI, the maker of ChatGPT and AI models such as GPT-4o and OpenAI o1, to develop and deploy advanced artificial intelligence (AI) solutions for national security missions. By combining OpenAI's advanced AI models with Anduril's defense systems and Lattice software platform, the collaboration aims to improve the nation's defense systems that protect US and allied military personnel from attacks by unmanned drones and other aerial devices. The focus of the partnership will be on improving counter-unmanned aircraft systems (CUAS), enhancing real-time detection, assessment, and response to aerial threats. Leveraging OpenAI's AI models, the initiative will reduce human operator workload, improve situational awareness, and enable faster decision-making in high-pressure defense scenarios. "As part of the new initiative, Anduril and OpenAI will explore how leading-edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness. These models, which will be trained on Anduril's industry-leading library of data on CUAS threats and operations, will help protect US and allied military personnel and ensure mission success," the companies said in a joint statement on December 4. "The accelerating race between the United States and China to lead the world in advancing AI makes this a pivotal moment. If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades.
The decisions made now will determine whether the United States remains a leader in the 21st century or risks being outpaced by adversaries who don't share our commitment to freedom and democracy and would use AI to threaten other countries," the official release said. Both companies are committed to AI safety and ethics, ensuring the responsible deployment of these technologies to protect military personnel and uphold democratic values. "Anduril builds defense solutions that meet urgent operational needs for the US and allied militaries," said Brian Schimpf, co-founder and CEO of Anduril Industries. "Our partnership with OpenAI will allow us to utilise their world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world. Together, we are committed to developing responsible solutions that enable military and intelligence operators to make faster, more accurate decisions in high-pressure situations." Also Read: Accenture Federal Services and Google Public Sector Launch Federal AI Solution Factory "OpenAI builds AI to benefit as many people as possible, and supports US-led efforts to ensure the technology upholds democratic values," said Sam Altman, OpenAI's CEO. "Our partnership with Anduril will help ensure OpenAI technology protects US military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free." The partnership underscores the importance of advanced AI in national security and aims to provide the Department of Defense with cutting-edge, effective solutions for modern defense challenges. According to the official release, this collaboration will be guided by technically-informed protocols emphasising trust and accountability in the development and employment of advanced AI for national security missions.
[25]
OpenAI, Anduril partner on AI drone-defense plan
State of play: The companies said they will combine OpenAI's most advanced models and Anduril's military hardware and software to protect the U.S. from unmanned aircraft. What they're saying: Anduril CEO and co-founder Brian Schimpf emphasized the effort's commitment to "responsible solutions" that help "military and intelligence operators to make faster, more accurate decisions in high-pressure situations." Between the lines: OpenAI started out as a nonprofit specifically intended to prioritize safeguards over speed in developing and deploying AI. Flashback: Silicon Valley first emerged 50 years ago as a center of defense contracting, but more recently working with the Pentagon has become a source of controversy in parts of the industry, notably at Google. The bottom line: AI-defense partnerships are spreading. Last month OpenAI rival Anthropic partnered with Palantir to make Anthropic's Claude models available to U.S. intelligence and defense agencies.
[26]
Relax, OpenAI Has the Drones Under Control
OpenAI today announced its strategic partnership with Anduril Industries to supply artificial intelligence solutions for 'national security missions'. Anduril Industries is a defence tech company based out of the United States, founded in 2017 by Palmer Luckey. The partnership is set to enhance the defence systems of the United States and its allies by protecting against attacks by 'unmanned drones and other aerial devices'. "[The partnership] will focus on improving the nation's counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time," read the official statement. Anduril, along with OpenAI, will explore the possibility of using AI models to 'reduce the burden on human operators'. The models will be trained on Anduril's data on CUAS threats and operations. Sam Altman, CEO of OpenAI, said, "OpenAI builds AI to benefit as many people as possible and supports US-led efforts to ensure the technology upholds democratic values." Now that he's back from his hiatus, Greg Brockman, the president and co-founder of OpenAI, took to a post on X to relay the announcement. AI companies are investing heavily to protect the United States' national security interests. Only a few weeks ago, Anthropic, Palantir, and AWS announced a partnership to provide U.S. intelligence and defence agencies with Claude 3 and 3.5 models. There were also reports that China was using Meta's open-source models for military applications, which was then followed by Meta's announcement that it is making Llama available for US government agencies, defence projects and other private sectors working on national security. In Meta's announcement, Nick Clegg, President of Global Affairs at Meta, said, "Widespread adoption of American open-source AI models serves both economic and security interests.
Other nations -- including China and other competitors of the United States -- understand this as well and are racing to develop their own open-source models, investing heavily to leap ahead of the U.S." OpenAI seems to have a similar sentiment towards China. "The accelerating race between the United States and China to lead the world in advancing AI makes this a pivotal moment. If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades," read the announcement. They also added that the decisions they are taking now are crucial in determining whether the United States will continue to assert its dominance in the 21st century - 'or risk being outpaced by adversaries who don't share our commitment to freedom and democracy and would use AI to threaten other countries.' We're surely set to see more such announcements, given that Donald Trump is set to take charge as the new president of the United States. Trump's allies at the America First Policy Institute have drafted an order and aim to create a 'Manhattan Project'-esque effort to propel AI technology, especially in the defense sector.
[27]
OpenAI Joins Forces with Anduril to Boost US Drone Defense
OpenAI Technology Boosts US Anti-Drone Capabilities with Anduril Partnership OpenAI has stated that it is collaborating with Anduril Industries to incorporate artificial intelligence technology into the defense company's anti-drone platforms. The cooperation is expected to improve the detection and interception of unmanned aerial threats, including drones. This effort reflects technology's increasing role in addressing the dynamics of modern warfare. Anduril, a widely used defense technology firm, intends to use OpenAI's models to enhance the efficiency of counter-drone measures and avoid collateral damage. However, the firms stated that OpenAI's technology will not be used in Anduril's other weapon systems. Moreover, OpenAI will use the information collected from Anduril to improve its AI models for national security applications.
[28]
OpenAI-Anduril to build super AI systems for drone defense against China
The partnership aims to bring together OpenAI's advanced models with Anduril's defense systems and Lattice software platform to help U.S. defense systems that protect the military and assets from unmanned drone attacks and other aerial devices. "OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values," said Sam Altman, OpenAI's CEO. A press release by Anduril describes this as a key moment in the contest between China and the U.S. to lead the world in advancing AI. "If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades," the press release reads. It also states that the decisions and advancements made now will decide whether the U.S. can remain a leader in the 21st century or gets outpaced by adversaries (like China) - "who don't share our commitment to freedom and democracy and would use AI to threaten other countries."
[29]
OpenAI, Anduril to Develop Anti-Drone AI for U.S. Defense
OpenAI and defense technology startup Anduril said on Wednesday they would jointly develop artificial intelligence for U.S. anti-drone systems to improve their ability to detect and respond to aerial threats. Anduril said two weeks ago it won a U.S. military contract to develop such systems. The AI models will be developed using Anduril's data about the threats anti-drone systems face.
OpenAI, the creator of ChatGPT, has entered into a partnership with defense technology company Anduril Industries to develop AI solutions for military applications, raising concerns among employees and industry observers about the ethical implications of AI in warfare.
OpenAI, the company behind ChatGPT, has announced a strategic partnership with defense technology firm Anduril Industries to develop artificial intelligence solutions for national security missions [1]. This collaboration marks a significant shift in OpenAI's stance on military applications of its technology and has sparked internal debates about the ethical implications of AI in warfare.
The initial focus of the partnership will be on improving counter-unmanned aircraft systems (CUAS) [2]. OpenAI's advanced AI models will be integrated into Anduril's defense systems to enhance their ability to detect, assess, and respond to aerial threats in real time. The companies claim this will help protect U.S. military personnel from drone attacks and improve situational awareness for operators [3].
The announcement has raised concerns among some OpenAI employees, who have questioned the ethics of their technology being used for military purposes [3]. Internal discussions revealed discomfort with the potential for AI to be used in weapons systems, even if intended for defensive purposes. Some employees expressed worry about the deal's impact on OpenAI's reputation and the broader implications of AI militarization.
OpenAI's partnership with Anduril follows a recent trend of AI companies engaging with the defense sector. Earlier this year, OpenAI quietly removed its ban on military use of its technology, allowing for certain applications while still prohibiting the development of weapons [4]. This shift aligns with similar moves by other tech giants and AI firms, such as Anthropic and Meta, who have also revised their policies to allow military collaborations [2].
The partnership reflects a growing acceptance of AI in military applications among tech companies, reversing the previous reluctance seen in incidents like Google's withdrawal from Project Maven in 2018 [3]. Proponents argue that AI can enhance defensive capabilities and save lives, while critics warn of the potential risks associated with autonomous weapons systems and the broader militarization of AI [5].
OpenAI CEO Sam Altman defended the partnership, stating that it aligns with the company's mission to benefit humanity and support U.S.-led efforts to uphold democratic values [4]. Anduril CEO Brian Schimpf emphasized the partnership's focus on developing responsible solutions for military and intelligence operators [1].
As AI continues to advance, its role in military and defense applications is likely to expand. This partnership between OpenAI and Anduril may set a precedent for future collaborations between AI developers and defense contractors, potentially reshaping the landscape of military technology and raising important questions about the ethical use of AI in warfare.