4 Sources
[1]
Palantir UK boss says it's up to militaries to decide how AI targeting is used in war
Tech giant Palantir has pushed back against concerns that military use of its AI platforms could lead to unforeseen risks, in an exclusive interview with the BBC, insisting that the way the technology is used is the responsibility of its military customers. It comes as experts have expressed concern over the use of Palantir's AI-powered defence platform - Maven Smart System - during wartime and its reported use in US attacks on Iran. Analysts have warned that the military's use of the platform, which helps personnel plan attacks, leaves little time for "meaningful verification" of its output and could lead to incorrect targets being hit. But the company's UK and Europe head, Louis Mosley, told the BBC in a wide-ranging interview that while AI platforms like Maven have been "instrumental" to the US management of the Iran war, responsibility for how their output is used must always remain "with the military organisation". "There's always a human in the loop, so there is always a human that makes the ultimate decision. That's the current set-up." The Maven Smart System was launched by the Pentagon in 2017 and is designed to speed up military targeting decisions by bringing together masses of data, including a range of intelligence, satellite and drone images. The system analyses this data and can then provide recommendations for targeting. It can also suggest the level of force to use based on the availability of personnel and military hardware, such as aircraft. But scrutiny has grown over the use of such tools in warfare. In February, the Pentagon announced that it would be phasing out Anthropic's Claude AI system - which helps to power Maven - after the company refused to allow use of its AI in autonomous weapons and surveillance. Palantir says alternatives can replace it. Since the war with Iran began in February, the US has reportedly used Maven to plan strikes across the country. 
Pushed by the BBC on the risk that Maven might suggest incorrect targets - which could include civilians - Mosley insisted that the platform is only meant to serve as a guide to speed up the decision-making process for military personnel and that it should not be seen as an automated targeting system. "You could think of it as a support tool," Mosley said. "It's allowing them to synthesise vast amounts of information that previously they would have had to do manually one by one." However, Mosley deferred to individual militaries when challenged by the BBC on the risk of time-pressured commanders ordering their officers to treat Maven's output as effectively rubber-stamped. "That's really a question for our military customers. They're the ones that decide the policy framework that determines who gets to make what decision," he said. "That's not our role." Since 28 February, the US has launched more than 11,000 strikes against Iran, many reportedly identified by Maven. Adm Brad Cooper, head of the US military in the Middle East, has hailed AI systems for helping officers "sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react". But some worry AI's involvement in mission planning creates significant risks. "This prioritisation of speed and scale and the use of force then leaves very little time for meaningful verification of targets to make sure that they don't include civilian targets accidentally," Prof Elke Schwarz of Queen Mary University of London said. "If there's a risk of killing and you co-opt a lot of your critical thinking to software that will take care of these things for you, then you just become reliant on the software," she added. "It's a race to the bottom." In recent weeks, Pentagon officials have faced questions as to whether AI tools such as Maven were used to identify targets in the deadly strike on a school in the Iranian town of Minab.
Iranian officials said the strike killed 168 people, including around 110 children, on the opening day of the war. In Congress, a number of senior Democrats have called for increased scrutiny of AI platforms like Maven. Rep Sara Jacobs - a member of the House Armed Services Committee - called for clearly enforced rules and regulations about how and when AI systems are used. "AI tools aren't 100% reliable -- they can fail in subtle ways and yet operators continue to over-trust them," she told NBC News last month. "We have a responsibility to enforce strict guardrails on the military's use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions." But Mosley pushed back against suggestions that the speed of his company's platform is rushing decision making at the Pentagon and potentially creating dangerous situations. He instead argued that the speed at which commanders are now taking action is a "consequence of the increased efficiency" that Maven has enabled. Citing "operational security", the Pentagon declined to comment when approached by the BBC on how AI systems like Maven will be used in future or who would be held responsible should something go wrong. But officials in the US appear to be moving forward with plans to further integrate Maven into its systems. Last week, the Reuters news agency reported that the Pentagon had designated Maven as "an official program of record" - establishing it as a technology to be integrated long-term across the US military. In a letter obtained by Reuters, deputy Defence Secretary Steve Feinberg said the platform would provide commanders "with the latest tools necessary to detect, deter, and dominate our adversaries in all domains". Additional reporting by Jemimah Herd
[2]
AI at war: Five things to know about Project Maven
Washington (United States) (AFP) - A Pentagon AI program called Project Maven is at the center of the US strikes against Iran and potentially one of the most consequential transformations of modern warfare. Project Maven is the Pentagon's flagship artificial intelligence program, launched in 2017 as a narrow experiment to help military analysts make sense of the torrent of drone footage pouring in from conflict zones. Operators were drowning in imagery, searching frame by frame for objects of interest that might appear for only a moment before vanishing. Maven was built to find the needle in the haystack. Eight years later, the program has evolved into something far more expansive: an AI-assisted targeting and battlefield management system that has vastly accelerated what is known in war-making as the kill chain -- the process from initial detection to destruction.

How does it work?

Maven functions like both the air traffic control of battle and its cockpit. Aalok Mehta, director of the CSIS Wadhwani AI Center, described the system as "essentially an overlay" that fuses sensor data, enemy troop intelligence, satellite imagery, and information on troop deployment. In practice, that means rapidly scanning satellite feeds to detect troop movements or identify targets, while also "taking a snapshot of the operational theater" to determine the best course of action for striking a specific target. In a recent demonstration posted online, a Pentagon official described how Maven "magically" turns an observed threat into a targeting workflow, weighing available assets and presenting a commander with options. The emergence of ChatGPT was another leap forward, broadening the use of the technology to a far greater range of users who can interact with Maven in natural language. For now, this capability is supplied by Anthropic's Claude -- though that arrangement is coming to a bitter end after the Pentagon bristled at the AI lab's demand that its model not be used for fully automated strikes or the tracking of US citizens.

Why did Google say no?

The ethical question was a factor in Maven's early years, when Google was the program's original AI contractor. In 2018, more than 3,000 employees signed an open letter protesting the company's involvement, arguing that the contract crossed a line. Several engineers resigned. Google declined to renew when the contract expired, and subsequently published AI principles explicitly ruling out participation in weapons systems. The episode exposed a fault line in Silicon Valley between engineers who viewed autonomous targeting as an ethical red line and defense officials who saw it as essential. More recently, Google removed its AI policy restrictions and said it is leaning further into national security work. The Pentagon has said that Google, xAI and OpenAI are all in the mix to replace Claude in Maven.

What is Palantir's role?

In 2024, Palantir -- founded in part with CIA seed funding and built from the start around government intelligence work -- stepped into the space Google vacated. The company has reportedly become Maven's primary technology contractor, and its AI now forms the operational backbone of the program. Palantir CEO Alex Karp frames the stakes explicitly. "This is a have, have-not world," he said at a recent Palantir event, arguing that it was important for the West to achieve capabilities the rest of the world lacked. A system that compresses a kill chain from hours to seconds makes an adversary obsolete, he said.

How has it fared?

The Pentagon and Palantir declined to comment on Maven's performance in the current war with Iran.
US strikes have been carried out at a sustained pace, and it can be assumed that Maven's ability to speed up the targeting and firing process has played a central role. According to the Center for Strategic and International Studies, after three weeks the US strike campaign settled into a pace of between 300 and 500 targets per day. In the first 24 hours of Operation Epic Fury, US forces struck over 1,000 targets, including a school housed in a building previously used as a military complex, according to various media reports. Iran has said the attack killed 168 children aged seven to 12 and wounded many other people.
[3]
Not our role: Palantir shifts blame to military for AI-driven targeting deaths
The U.S. Department of Defense plans to designate Palantir Technologies' Maven Smart System as a formal "program of record," securing multi-year funding for the AI-driven targeting platform utilized across U.S. military branches. The announcement coincided with comments from Louis Mosley, Palantir's UK chief, who stated that accountability for the use of the technology in combat lies with military clients, not the company itself. This move indicates a significant commitment to the Maven system, as formalizing its status will provide stable funding and resources for its development and operational use. Deputy Defense Secretary Steve Feinberg highlighted in a March 9 memo, first reported by Reuters on March 20, that embedding Maven into military processes will equip warfighters with advanced tools necessary to detect and dominate adversaries. Oversight of the Maven system will transition from the National Geospatial-Intelligence Agency to the Pentagon's Chief Digital and Artificial Intelligence Office within 30 days. The U.S. Army will manage all future contracts related to Maven, with the program designation expected to take effect before the close of fiscal year 2026 in September. The initial contract for Maven was awarded in May 2024 for $480 million, which increased to a ceiling of $1.3 billion by May 2025, along with a separate $10 billion Army enterprise agreement. Currently, the Maven system has over 20,000 active users and processes data from more than 150 sources, including satellite imagery and drone video. During Operation Epic Fury against Iran in late February, Maven reportedly helped process 1,000 targets within the first 24 hours. According to the Trump administration, the U.S. has conducted strikes on 11,000 targets in Iran since the conflict began on February 28, many of those targets identified with Maven's aid. However, Maven's operational history has faced scrutiny. 
A Tomahawk missile strike on an elementary school in Minab on the first day of conflict resulted in at least 168 fatalities. Critics have raised concerns about the rapid decision-making involved in AI-targeting operations, particularly regarding reliability and verification. Analysts noted that the speed of Maven's output leaves little time for thorough verification. According to the BBC, Mosley emphasized Palantir's stance on responsibility during military operations, stating, "That's not our role." He acknowledged Maven's utility in managing the conflict but reiterated that accountability remains with the military organizations utilizing the system.
[4]
The US is waging AI-assisted war on Iran. Here's how
While AI tools used by the military are very advanced, they're not yet at the level where human judgement is unnecessary.

* Experts and former officials say the military's artificial intelligence systems are central to "Operation Epic Fury"
* As the war drags on, AI could play an increasing role
* More than a hundred lawmakers in the House and Senate signed letters sent to Pentagon chief Pete Hegseth in mid-March asking whether the Maven Smart System was involved in the strike on the school

Hundreds of Iranian civilian deaths in the war have put the U.S. military's new AI systems in the spotlight and raised concerns from lawmakers over whether these systems are making deadly mistakes. Experts and former officials say the military's artificial intelligence systems are central to "Operation Epic Fury," which is seeing AI deployed on the battlefield to a new degree. "For somebody who spent years talking about how we're moving too slow, I'm now concerned about how fast we're moving," said Jack Shanahan, a retired lieutenant general who led efforts to develop and integrate AI into the military. "At some point it may become increasingly difficult to define what an advanced AI system must not do, as opposed to humans defining what they want it to do." At a closed door House Armed Services Committee briefing on March 25, Pentagon officials told lawmakers AI was used in data management, but not final target selection, according to a person with knowledge of the briefing. U.S. soldiers are "leveraging a variety of advanced AI tools," Adm. Brad Cooper, the commander of U.S. Central Command, said in a March 11 video update on the war. "Humans will always make final decisions on what to shoot and what not to shoot and when to shoot but advanced AI tools can turn processes that used to take hours, and sometimes even days, into seconds."
The military has hit more than 12,000 targets in the monthlong Iran war, including more than 1,000 in the first 24 hours after the war launched on Feb. 28. One of the sites bombed that day was an Iranian school, leading to at least 175 deaths, most of them children. In the early days of the war, the U.S. military fired more long-range, expensive missiles to hit Iran from far away, but has since shifted to using more short-range gravity bombs that can be dropped from aircraft, now that Iran's air defenses are degraded, according to Chairman of the Joint Chiefs of Staff Gen. Dan Caine and other officials. The first targets struck likely came from longstanding Pentagon plans for an Iran attack, said Emelia Probasco, a senior fellow at Georgetown University's Center for Security and Emerging Technology who studies military uses of AI. But as the war drags on, AI could play an increasing role, Probasco said, including in "prioritization" of targets - telling soldiers where to hit first. "We are now entering the phase where those targets have been attacked and now you could potentially start to see an even greater impact of AI," she said. "You're looking for time critical targets, targets that move, targets that we didn't know about before."

20 soldiers with AI match the work of 2,000

For nearly a decade, the military has been integrating an AI tool known as the Maven Smart System into its computer systems. The system, often shortened to "Maven," fuses the military's many, disparate channels of data, intelligence, satellite imagery and asset movements into a single software platform. Military leaders say the system lets them make decisions in the heat of battle faster and more effectively. The system has already drastically increased the number of targets that a given number of operators can hit.
According to Probasco's 2024 study of Army exercises using the system, roughly 20 people using it could match the work of more than 2,000 soldiers in Iraq war-era targeting cells then considered the most efficient in U.S. military history. And its development in the two years since her study has been "dramatic," she added. In a demo of the Maven Smart System at a March 12 conference, Cameron Stanley, the Pentagon's chief digital and artificial intelligence officer, showed the ease with which a user could turn a structure into a ball of flame with a "left click, right click, left click." On the screen behind Stanley, a cursor hovered over an overhead image of lined up cars, showing numbers representing their measurements, locational coordinates and other data. With a few clicks, the "detection" of an object could be moved into a "targeting workflow," Stanley said. The system offered a choice of "which metrics AI should prioritize," including "time to target," "distance," or "munitions." A sleek graphic appeared to show on a map the circular blast radius that the strike would create, and the arc that the weapons would travel. After a couple of clicks on a blue "approve" bar and green "task executed" bar, the dark cloud of an explosion filled the screen. "When we started this, it literally took hours to do what you just saw there," Stanley said.

Iran school strike raises AI questions

In spite of officials' claims that AI improves the military's accuracy, the civilian death toll in Iran has raised concerns over whether it has contributed to faulty targeting. Lawmakers have asked whether AI played a role in the school strike. Investigations by the New York Times and other outlets found that the United States was likely behind the strike, which used a U.S.-made Tomahawk missile. The school may have been on an outdated list of targets that the military failed to recheck, according to those reports. The Pentagon has said its own investigation into the strike is ongoing.
More than a hundred lawmakers in the House and Senate signed letters sent to Pentagon chief Pete Hegseth in mid-March asking whether the Maven Smart System was involved in the strike on the school, and for more details on how the military is checking the work of AI. Shanahan said he saw "no indications" that AI was involved in the strike, "but we need to acknowledge that while future AI will be capable of finding more targets than ever before, humans must remain responsible and accountable for the decisions to hit those targets." In past military exercises, AI has demonstrated far lower accuracy than humans. In the Army exercises that Probasco studied, the Maven Smart System could correctly identify a tank around 60% of the time, as compared to a human soldier's 84% accuracy, and that number dropped to just 30% in snowy weather. An AI targeting system tested by the Air Force in 2021 hit just 25% accuracy when it was tested under imperfect conditions. The Pentagon in 2023 issued a directive that soldiers and commanders using AI systems must be able to "exercise appropriate levels of human judgment over the use of force." "Our military operates in full compliance with all U.S. laws and established policies, such as ensuring a human is always in the loop for critical operational decisions," the Pentagon said in a statement to USA TODAY. "The responsibility for the lawful use of any AI tool rests with the human operator and the chain of command, not within the software itself."

Pentagon goes after company behind its AI chatbot

The Trump administration as a whole has moved to remove regulations around AI in the name of innovation and cutting bureaucracy, and the Pentagon has followed suit. In a Jan. 9 memo laying out the military's AI strategy, Hegseth directed the Pentagon to work towards "unleashing experimentation" with AI models and "aggressively identifying and eliminating bureaucratic barriers to deeper integration" of AI.
"We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment," the memo read. In recent months, that approach has put the Pentagon at odds with Anthropic, the Silicon Valley company behind Claude, the only AI chatbot that is currently configured to operate on the Maven Smart System. Anthropic sought an agreement with the Pentagon that its technology would not be used for mass surveillance, or to hit targets without human signoff. The Pentagon refused to accept those terms, saying Claude must be available to the military for "all lawful uses," as its officials publicly blasted the company on social media. The Pentagon moved to declare the company a "supply chain risk" - a designation meant to restrict companies vulnerable to sabotage or subversion by U.S. adversaries - but was blocked from the move by a federal judge's ruling on March 26. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability," the Pentagon said in a statement. "It is the military's sole responsibility to ensure our warfighters have the tools they need to win in a crisis, without interference from corporate policies." Anthropic has said in statements that it does not believe the Pentagon has yet used Claude in a way that broke its conditions. But the dispute reportedly arose after Anthropic learned that the military used Claude in its operation to capture Venezuelan President Nicolas Maduro. "Anthropic currently does not have confidence," the company maintained in court documents, "that Claude would function reliably or safely if used to support lethal autonomous warfare." AI built for military purposes "already has a lot of accuracy issues," but large language models like Claude "are actually even more inaccurate," said Heidy Khlaaf, chief AI scientist at the AI Now Institute.
"They're not very good at solving for tasks outside of what they've been trained on, and that's ok if you're using it in a non critical environment, like writing an email, but that's very different when you're dealing with novel scenarios like a fog of war." The dispute with Claude is not the first time that the growing business partnerships between Silicon Valley and the Pentagon to create high-tech weapons and military tools have drawn criticism from workers at the companies building them. Google was originally contracted to work on the Maven Smart System in its early developmental stages, but dropped the contract in 2018 in response to a protest movement from its workers. Google and Amazon workers have also in recent years protested the companies' AI contract with the Israeli military and Google's work with immigration and border enforcement. "If any tech company caves to the Pentagon's demands," Hegseth "will have the power to build and deploy A.I.-powered drones that kill people without the approval of any human," a group of organizations representing Amazon, Google, and Microsoft workers wrote in a statement on the Anthropic dispute. Shanahan said human control of AI for military uses is a "nonnegotiable starting point," but it could eventually be confined to the design and development of systems that increasingly operate on their own. "You're going to be operating under the assumption that at some point an autonomous weapon is released, and no human will have the ability to bring it back."
Palantir's UK head Louis Mosley says responsibility for AI-powered Maven Smart System targeting decisions rests with military customers, not the company. The Pentagon is designating Maven as a formal program of record with multi-year funding as experts raise concerns about verification time and civilian casualties in Operation Epic Fury.
Palantir has firmly positioned itself outside the accountability loop for how its AI-powered Maven Smart System is deployed in combat, with UK and Europe head Louis Mosley telling the BBC that responsibility for the platform's output "must always remain with the military organisation."
1
The statement comes amid mounting scrutiny over AI use in warfare, particularly as the US military has conducted more than 11,000 strikes against Iran since February 28, many reportedly identified using Project Maven.1

Source: BBC
Mosley emphasized that Maven functions as a "support tool" designed to help military personnel synthesize vast amounts of information more quickly than manual processes would allow. When pressed by the BBC on risks that time-pressured commanders might rubber-stamp Maven's recommendations, potentially leading to incorrect targets including civilian casualties, Mosley deferred to individual militaries. "That's really a question for our military customers," he said. "They're the ones that decide the policy framework that determines who gets to make what decision. That's not our role."
1
3
The US Department of Defense plans to designate the Maven Smart System as a formal program of record, securing stable multi-year funding for the AI-driven targeting platform.
3
Deputy Defense Secretary Steve Feinberg highlighted in a March 9 memo that embedding Maven into military processes will equip warfighters with the advanced AI tools necessary to detect and dominate adversaries.3

Source: France 24
Oversight of Maven will transition from the National Geospatial-Intelligence Agency to the Pentagon's Chief Digital and Artificial Intelligence Office within 30 days, with the US Army managing all future contracts. The initial contract awarded in May 2024 was valued at $480 million, which increased to a ceiling of $1.3 billion by May 2025, alongside a separate $10 billion Army enterprise agreement.
3
Currently, the system has over 20,000 active users and processes data from more than 150 sources, including satellite imagery and drone footage.3
Launched by the Pentagon in 2017 as a narrow experiment to help military analysts process torrents of drone footage, Project Maven has evolved into an AI-assisted targeting and battlefield management system that vastly accelerates what's known as the kill chain—the process from initial detection to destruction.
2
Maven functions like both air traffic control and cockpit for battle, fusing sensor data, enemy troop intelligence, satellite imagery, and troop deployment information into a single operational picture.2
In practice, this means rapidly scanning satellite feeds to detect troop movements while "taking a snapshot of the operational theater" to determine the best course of action for striking specific targets. The emergence of ChatGPT broadened Maven's usability, allowing far more users to interact with the system in natural language—a capability currently supplied by Anthropic's Claude, though that arrangement is ending after the Pentagon bristled at the AI lab's demand that its model not be used for fully automated strikes.
2
During Operation Epic Fury against Iran in late February, Maven reportedly helped process 1,000 targets within the first 24 hours.
3
According to the Center for Strategic and International Studies, after three weeks the US strike campaign settled into a sustained pace of between 300 and 500 targets per day.2
Adm Brad Cooper, head of the US military in the Middle East, has praised AI-assisted war capabilities for helping officers "sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react."1
Source: USA Today
The system's efficiency represents a dramatic shift in military capability. According to a 2024 study, roughly 20 people using Maven could match the work of more than 2,000 soldiers in Iraq war-era targeting cells then considered the most efficient in US military history.
4
At a March 12 conference demonstration, Pentagon chief digital and artificial intelligence officer Cameron Stanley showed how a user could turn a structure into a target with just "left click, right click, left click," with the system offering choices of which metrics AI should prioritize including "time to target," "distance," or "munitions."4
The risks of AI-driven targeting have drawn sharp criticism from experts and lawmakers. Prof Elke Schwarz of Queen Mary University of London warned that "this prioritisation of speed and scale and the use of force then leaves very little time for meaningful verification of targets to make sure that they don't include civilian targets accidentally."
1
She added that relying on software for critical thinking in life-or-death decisions creates a dangerous dependency: "It's a race to the bottom."1
A Tomahawk missile strike on an elementary school in the Iranian town of Minab on the first day of conflict resulted in at least 168 fatalities, with Iranian officials saying around 110 were children.
1
2
More than a hundred lawmakers in the House and Senate signed letters sent to Pentagon chief Pete Hegseth in mid-March asking whether Maven was involved in the strike.4
At a closed-door House Armed Services Committee briefing on March 25, Pentagon officials told lawmakers AI was used in data management but not final target selection.4
Rep Sara Jacobs, a member of the House Armed Services Committee, has called for clearly enforced rules about how and when AI systems are used. "AI tools aren't 100% reliable—they can fail in subtle ways and yet operators continue to over-trust them," she told NBC News. "We have a responsibility to enforce strict guardrails on the military's use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions."
1
Retired Lieutenant General Jack Shanahan, who led efforts to develop and integrate AI into the military, expressed growing concern about the pace of deployment. "For somebody who spent years talking about how we're moving too slow, I'm now concerned about how fast we're moving," he said. "At some point it may become increasingly difficult to define what an advanced AI system must not do, as opposed to humans defining what they want it to do."
4
As the war drags on and longstanding target lists are exhausted, AI could play an increasing role in identifying time-critical targets and prioritization—targets that move or weren't previously known—raising further questions about accountability and the decision-making process in an era of AI autonomy in warfare.4
Summarized by Navi