Curated by THEOUTPOST
On Fri, 8 Nov, 12:03 AM UTC
15 Sources
[1]
In Anthropic We Trust
CEO Dario Amodei has repeatedly revealed his ambition to use Claude to support the US government and its interests in protecting national security. Over the last few days, we have seen a series of announcements highlighting generative AI firms forming partnerships with the US government to provide AI technology for military and defence. Anthropic is at the top of that list. Not only is it big tech's favourite child, but it has also secured its place in the public sector and government organisations.

Recently, the company partnered with Palantir to provide its advanced AI model Claude to the US government for data analysis and complex coding activities in projects of national security interest. The partnership involves an IL6 accreditation, just one level below the top-secret tier.

It didn't take long for the partnership to spark a debate around the company's commitment to building AI responsibly, especially as its CEO, Dario Amodei, is well known for his views on building AI that prioritises safety. Recently, Anthropic released a statement urging governments to take action and bring in regulations to enforce the safe and ethical use of AI. "Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast," said Anthropic. Moreover, Anthropic hired a full-time AI welfare expert to explore the moral and ethical implications of AI.

People were quick to question whether Amodei and Anthropic's views on AI were mere virtue signalling and were disappointed by the partnership with the US government. The announcement also came a day before the election results in the US, where Donald Trump is set to take charge as the 47th President. These concerns stem from Trump's desire to loosen AI regulations. His allies have drafted an order to rapidly maximise AI usage for defence. The move has raised concerns about whether it could set AI on a path towards aiding questionable wartime activities, especially as Palantir founder Peter Thiel, who owns 7% of the company's shares, has been vocal in his support for Trump.

It is premature to defend or criticise Anthropic. The company has played the game fair and square throughout, at least in terms of transparency. Amodei has, on multiple occasions, revealed his ambition to use Claude to support the government and its interests in protecting national security. "We are making Claude available for applications like combating human trafficking, rooting out international corruption, identifying covert influence campaigns, and issuing warnings of potential military activities," said Amodei at the AWS Summit 2024 in Washington, DC.

In his recent essay 'Machines of Loving Grace', Amodei wrote, "On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created." "AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries," he added.

Anthropic has also been transparent about its intent to provide its technology for government use. Earlier, in June, it revealed plans to expand Claude's access for government use and made its Claude models available on the AWS Marketplace for the US Intelligence Community.
"Claude offers a wide range of potential applications for government agencies, both in the present and looking towards the future. Government agencies can use Claude to provide improved citizen services, streamline document review and preparation, enhance policymaking with data-driven insights, and create realistic training scenarios," said Anthropic in a statement. At the same time, Anthropic proposed amendments to California's Senate Bill 1047 (SB 1047). Notably, the proposed amendments include exempting US military and intelligence operations from liability for "critical harms". Anthropic also intends to strike a balance between its two ambitions. This year, Anthropic partnered with the US Artificial Intelligence Safety Institute (AISI US) and has also been working with AISI UK to test its models for safety. Last year, Anthropic developed a 'Constitutional AI', to align its LLMs to "abide by high-level normative principles written into a constitution". In September 2023, Anthropic published a responsible scaling policy, a series of protocols, and security levels. "Our RSP defines a framework called AI Safety Levels (ASL) for addressing catastrophic risks, modelled loosely after the US government's biosafety level (BSL) standards for handling of dangerous biological materials," read the report. With its commitment to ethics and morals, Anthropic wants to be the first to foster a strong relationship with the government. Its updated usage policy introduced an exception that will allow governments to use their model, while also stating that it will continue to prevent any activities that are morally questionable. "With carefully selected government entities, we may allow foreign intelligence analysis in accordance with applicable law. All other use restrictions in our usage policy, including those prohibiting use for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations, remain," Anthropic wrote in the statement. In comparison, OpenAI hasn't been actively partnering with the government. However, some reports surfaced claiming it was 'quietly' pitching its tech to the government. Several employees of OpenAI, including many from the safety team, have also left the company. One of Anthropic's major investors is Amazon, and they are also set to raise another round of funds. As mentioned, Anthropic recently made Claude available on the AWS market. Most public sector technology is hosted on AWS, and Amazon, one of the biggest companies in the US, certainly benefits from close ties with the government. "We're convinced that responsibility drives trust, and trust drives adoption, and adoption drives innovation," said Dave Levy, VP of worldwide public sector at AWS, in conversation with AIM. This principle is reflected in their strategic collaboration with Anthopic. Walking the talk pays off. Anthropic has consistently championed safety and security, earning trust and partnerships with public sector companies. In contrast, OpenAI introduced these priorities later, making building trust harder.
[2]
Anthropic, Palantir partner with AWS for Claude's US Defence ops
AI company Anthropic and software company Palantir Technologies have announced a partnership with Amazon Web Services (AWS) to provide U.S. intelligence and defence agencies access to the Claude 3 and 3.5 family of models on AWS.

The partnership will enable the US government to use Claude within the Palantir AI Platform for tasks such as complex data processing, gaining data-driven insights, identifying patterns and trends, streamlining document review and preparation, and helping U.S. officials make informed decisions in time-sensitive situations. Palantir and AWS have received the Defense Information Systems Agency (DISA) IL6 accreditation, which requires strict security protocols.

Anthropic restricts the use of its models for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations; however, it allows certain government agencies to use its products. According to Anthropic's usage policy, it provides access to government agencies based on certain criteria, and it may allow foreign intelligence analysis in accordance with applicable law for selected government entities.

Increasingly, AI companies are granting governments access to their technology for defence purposes. Recently, Meta announced that it will provide its open-source Llama AI models to U.S. defence and national security agencies, integrating its models into military settings through partnerships with top defence and technology companies. OpenAI also changed its usage policies to remove an explicit prohibition on using its models for military and warfare purposes. Following this, the United States military has collaborated with AI developers like OpenAI to create military wargame simulation videos and cybersecurity tools. OpenAI also roped in former National Security Agency (NSA) head Paul Nakasone onto its board of directors. Similarly, Microsoft provided US intelligence agencies a generative AI model disconnected from the internet to enable secure information sharing. At the Munich Security Conference, Google CEO Sundar Pichai also defended the company's partnerships with the military, saying that tech companies have an important role to play because of their advances in fields like cybersecurity and AI.

States like China are also using open-source AI models to create military applications. For example, three institutions, including two associated with the Chinese People's Liberation Army's research body, created an AI bot called "ChatBIT" using the Llama 13B large language model (LLM), an older version of Meta's LLM. The model is capable of gathering and processing intelligence and offering information for operational decision-making, and is 90% as capable as OpenAI's powerful ChatGPT-4, according to the papers.
[3]
"Safe AI" champ Anthropic teams up with defense giant Palantir in new deal
Anthropic has announced a partnership with Palantir and Amazon Web Services to bring its Claude AI models to unspecified US intelligence and defense agencies. Claude, a family of AI language models similar to those that power ChatGPT, will work within Palantir's platform using AWS hosting to process and analyze data. But some critics have called out the deal as contradictory to Anthropic's widely publicized "AI safety" aims. On X, former Google co-head of AI ethics Timnit Gebru wrote of Anthropic's new deal with Palantir, "Look at how they care so much about 'existential risks to humanity.'"

The partnership makes Claude available within Palantir's Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the "secret" classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.

In a press release, the companies outlined three main tasks for Claude in defense and intelligence settings: performing operations on large volumes of complex data at high speeds, identifying patterns and trends within that data, and streamlining document review and preparation. While the partnership announcement suggests broad potential for AI-powered intelligence analysis, it states that human officials will retain their decision-making authority in these operations. As a reference point for the technology's capabilities, Palantir reported that one (unnamed) American insurance company used 78 AI agents powered by their platform and Claude to reduce an underwriting process from two weeks to three hours.

The new collaboration builds on Anthropic's earlier integration of Claude into AWS GovCloud, a service built for government cloud computing. Anthropic, which recently began operations in Europe, has been seeking funding at a valuation of up to $40 billion. The company has raised $7.6 billion, with Amazon as its primary investor.
[4]
Anthropic and Palantir's partnership brings Claude AI to U.S. defense and intelligence on AWS
Anthropic, Palantir, and Amazon Web Services (AWS) have joined forces to integrate Anthropic's Claude AI models into U.S. government intelligence and defense operations. By leveraging Claude 3 and 3.5 within Palantir's AI Platform (AIP) hosted on AWS, this partnership aims to transform data processing and analysis capabilities for government agencies, empowering them to gain insights faster and make informed decisions in critical scenarios.

The Claude models are now available in Palantir's highly secure, Impact Level 6 (IL6) cloud environment, which meets strict Defense Information Systems Agency (DISA) standards for national security-related data. Within Palantir AIP on AWS, these models are intended to enhance U.S. intelligence and defense capabilities by rapidly processing large volumes of complex data, identifying patterns, and facilitating high-level analysis. These AI-driven tools can streamline resource-intensive tasks such as document review and predictive analysis, ultimately supporting decision-making in sensitive situations.

Claude stands out among AI offerings for its focus on responsible deployment and safety, a point Anthropic frequently emphasizes. While competitors like OpenAI are also exploring governmental applications, Anthropic differentiates its technology by emphasizing ethical safeguards. For example, the use of Claude models is limited to specific government-authorized tasks, such as intelligence analysis and advance warnings of potential military events, while actively avoiding applications that could be seen as destabilizing, like disinformation campaigns or unauthorized surveillance. This approach aligns with general public-sector demand for "safety-first" AI models that respect both operational efficacy and regulatory standards.

The AWS integration offers the Claude models both security and flexibility, allowing AI-powered applications to be deployed on a reliable, scalable platform with multiple levels of data protection. Hosted on AWS GovCloud and accredited under IL6, Palantir AIP ensures that Claude can perform critical functions without compromising data security. AWS Vice President Dave Levy underscored this as a significant step for public sector AI, enhancing productivity and safeguarding sensitive information.

The collaboration reflects a broader trend of AI adoption within the U.S. government. The Brookings Institution reported in March a roughly 1,200% increase in the potential value of AI-related federal contracts between August 2022 and August 2023, underscoring growing government interest in AI. This move from Anthropic and Palantir positions Claude as a key player in public-sector AI, with a reputation for ethical standards and rigorous security measures that may influence other tech companies in the field.

The U.S. defense and intelligence community's interest in AI tools like Claude mirrors a broader industry shift towards embedding AI into mission-critical workflows. As Anthropic, Palantir, and AWS further operationalize Claude for government use, they are paving the way for new levels of digital agility and analysis, potentially reshaping U.S. intelligence practices for the future. This partnership, set to benefit from continued innovations in cloud-based AI, illustrates how AI can responsibly elevate government capabilities while upholding high standards for security and ethical use.
[5]
AWS, Anthropic and Palantir Join Forces to Bring Generative AI to US Defense and Intelligence
Meanwhile, OpenAI has yet to make similar announcements, in line with its cautious strategy of withholding major releases such as GPT-5.

Anthropic, Palantir Technologies, and AWS recently announced a partnership to provide US intelligence and defense agencies with the Claude 3 and 3.5 models, integrated within Palantir's AI Platform (AIP) and supported by AWS. With this, the trio looks to enable rapid data analysis, improved pattern recognition, and enhanced document review to support critical government functions.

Dave Levy, VP of worldwide public sector at AWS, said that they are excited to partner with Anthropic and Palantir and offer new generative AI capabilities that will drive innovation across the public sector. A few months ago, Levy, in conversation with AIM, said, "We're convinced that responsibility drives trust, and trust drives adoption, and adoption drives innovation," underscoring AWS's deep-rooted commitment to security and responsible innovation. This principle is reflected in their strategic collaboration aimed at enhancing generative AI capabilities for US defense and intelligence operations.

"Our partnership with Anthropic and AWS provides US defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions," said Shyam Sankar, chief technology officer at Palantir. This collaboration leverages Palantir's IL6-accredited AIP and AWS's SageMaker for highly secure, agile, and efficient AI deployment. Both Palantir and AWS have received the Defense Information Systems Agency (DISA) IL6 accreditation, which demands some of the highest information security standards.

"Access to Claude 3 and Claude 3.5 within the Palantir AI platform on AWS will equip US defense and intelligence organisations with powerful AI tools that can rapidly process and analyse vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes," said Kate Earle Jensen, head of sales and partnerships at Anthropic. This partnership may prove beneficial, as Claude is widely regarded as one of the best AI models for programming and coding.

The announcement comes just days after Meta announced the availability of its open-source Llama models to the US government and private projects working towards US national security interests. Moreover, President-elect Trump's allies have drafted an order that advocates for unprecedented use of artificial intelligence technologies for military purposes and applications. With Trump insisting on loosening regulations and guardrails for AI, it will be interesting to see how the ecosystem evolves, and we are likely to come across many such partnerships in the future.

Ahead of Trump's victory, Anthropic had announced its intent to expand access to Claude's AI capabilities to support government initiatives. The blog post also mentions that Anthropic will allow 'carefully selected government agencies' to use Claude legally for intelligence operations. "We are making Claude available for applications like combating human trafficking, rooting out international corruption, identifying covert influence campaigns, and issuing warnings of potential military activities," said Anthropic chief Dario Amodei on the sidelines of the AWS Summit 2024 in Washington, DC.
At the time, Amodei had emphasised the importance of responsible AI deployment, stating, "It makes democracy as a whole more effective, and if we provide them poorly, it undermines the notion of democracy." Meanwhile, OpenAI has yet to make similar announcements, in line with its cautious strategy of withholding major releases such as GPT-5 during election periods to avoid influencing outcomes. Now that the US election has concluded with Trump's victory, the question remains: will OpenAI finally release GPT-5? Only time will tell. "It is critically important that the US maintains its lead in developing AI with democratic values," said OpenAI CEO Sam Altman in a post on X congratulating Trump.
[6]
Anthropic partners with AWS and Palantir to provide AI models to defense agencies - SiliconANGLE
Generative artificial intelligence startup Anthropic PBC said today it has partnered with big data analytics company Palantir Technologies Inc. and Amazon Web Services Inc. to provide its Claude AI model family to U.S. intelligence and defense agencies.

The company said the partnership would use Palantir's data products to support government operations by processing vast amounts of data rapidly to produce data-driven insights and identify patterns and trends quickly. It would also help review documents and prepare for operations in time-sensitive and critical situations.

The Claude AI models became accessible through the Palantir Artificial Intelligence Platform via AWS earlier this month. Using Palantir's AIP, customers can access Claude through SageMaker, a fully managed service provided by Amazon, hosted through Palantir's secure infrastructure. According to the companies, Palantir and Amazon are among the limited number of companies to receive the Defense Information Systems Agency (DISA) Impact Level 6 accreditation. Impact Level 6, or IL6, exists for high-level security classified data and information systems within the U.S. Department of Defense. It is reserved for systems that contain data critical to national security and covers material up to one level below "top secret."

"Our partnership with Anthropic and AWS provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions," said Shyam Sankar, chief technology officer at Palantir.

Anthropic also stressed that the partnership will enable the responsible application of AI. The company recently launched its most powerful model, Claude 3.5 Sonnet, which runs at twice the speed of Claude 3 Opus, its biggest model. Anthropic is known for creating AI models designed to produce less harmful results using a concept it calls "Constitutional AI." This is a learning system that imbues the model with a set of values it should follow. The vision of this constitutional system is to make AI outputs less toxic and less likely to become harmful by having another AI supervise the model's responses and having the model revise its own outputs based on those values.

This news comes as other AI firms have also begun to open their models to government entities. Meta Platforms Inc. recently announced that it would allow U.S. intelligence and defense contractors to use its open-source Llama AI models, and OpenAI is reportedly seeking deals with U.S. defense firms.

According to Palantir, the newest Claude models have already seen broad adoption across multiple industries and have had a significant impact. "For example, one leading American insurer automated a significant portion of their underwriting process with 78 AI agents powered by AIP and Claude, transforming a process that once took two weeks into one that could be done in three hours," said Sankar. "We are now providing this same asymmetric AI advantage to the U.S. government and its allies."
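The critique-and-revise loop described above can be sketched in a few lines of code. The Python below is a purely illustrative toy: the generate() function is a hypothetical placeholder standing in for a real language model call, and this is not Anthropic's implementation; in the company's published work, such critiques and revisions are used to produce training data rather than as a run-time wrapper.

```python
# Toy sketch of a Constitutional-AI-style self-critique loop.
# `generate` is a hypothetical stand-in for a language model call;
# Anthropic's actual pipeline uses critiques/revisions to build
# training data, not as an inference-time filter.

CONSTITUTION = [
    "Choose the response that is least likely to cause harm.",
    "Choose the response that is most honest and transparent.",
]

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an API request)."""
    return f"[model output for: {prompt[:48]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it once per principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Critique this response against the principle "
            f"'{principle}':\n{response}"
        )
        # ...then revise the draft in light of that critique.
        response = generate(
            f"Revise the response to address this critique:\n"
            f"{critique}\nOriginal response:\n{response}"
        )
    return response

print(constitutional_revision("Explain how to secure a home network."))
```

The design point the description gestures at is that the supervising signal comes from a written list of principles rather than solely from human raters, which is what makes the intended behaviour inspectable.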
[7]
Anthropic Joins A.I. Giants to Provide Models to US Defense Agencies
Tech leaders like Meta and Microsoft have also forayed into providing A.I. capabilities to defense agencies.

Anthropic, a competitor to OpenAI that has positioned itself as a more safety-conscious alternative, announced today (Nov. 7) that it will provide U.S. defense and intelligence agencies with access to its Claude A.I. models. The deal will see Anthropic team up with Amazon Web Services (AWS) and Palantir to incorporate Claude in government efforts like data processing and document preparation.

Anthropic joins a growing list of A.I. companies supplying their technologies to the U.S. government. Between August 2022 and August 2023, the value of A.I.-related federal contracts skyrocketed 150 percent to $675 million, according to a March report from the Washington, D.C.-based think tank the Brookings Institution. The U.S. Department of Defense (DoD) is emerging as one of the most dominant players in this new space, seeing the value of its A.I.-related contracts jump from $190 million to $557 million during this time.

Under the deal, Anthropic's Claude family of models will be available to government customers through Palantir's platform on AWS. This access "will equip U.S. defense and intelligence organizations with powerful A.I. tools that can rapidly process and analyze vast amounts of complex data," said Kate Earle Jensen, head of sales and partnerships at Anthropic, in a statement. "This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments," she added.

According to Anthropic's usage policy, the company is allowed to enter into contracts with "carefully selected government entities" pursuing "foreign intelligence analysis in accordance with applicable law." Use of its A.I. systems for disinformation campaigns, weapon design, censorship, domestic surveillance and "malicious" cyber operations, however, remains restricted.

Big Tech strengthens ties with DoD on A.I. applications

Anthropic's deal follows a similar announcement from Meta (META), which earlier this week unveiled plans to allow U.S. government agencies and contractors like Palantir, Lockheed Martin and Booz Allen to use its open-source Llama A.I. models for defense and national security applications. Meta's models, which will be used to synthesize documents, accelerate code generation and strengthen cyber defense, will also be made available to agencies across the Five Eyes intelligence alliance, which consists of the U.S., Canada, Britain, Australia and New Zealand.

Microsoft, meanwhile, teamed up with Palantir in August to offer A.I. software and capabilities to defense-focused U.S. federal agencies. OpenAI, which earlier this year inked a deal with the government contractor Carahsoft, is also reportedly interested in working with the DoD and the Department of Homeland Security.

These growing ties between Big Tech's A.I. products and military applications have not been without scrutiny. In 2018, thousands of Google employees protested against the company's efforts in a Pentagon program known as Project Maven that used A.I. to improve analysis of drone strike videos, leading Google to end its contract. However, the tech company remains engaged in various contracts with defense agencies, a fact that became a point of contention earlier this year, when nearly 200 Google DeepMind employees reportedly signed a letter against such partnerships over fears that the A.I. lab's technologies were being used to manufacture weapons.
[8]
Anthropic teams up with Palantir and AWS to sell its AI to defense customers | TechCrunch
Anthropic today announced that it's teaming up with Palantir, the data-mining company, and Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to Anthropic's Claude 3 and 3.5 family of AI models.

The news comes as a growing number of AI vendors, for strategic and revenue-related reasons, look to ink deals with U.S. defense customers. Meta recently revealed that it's making its Llama family of models available to defense partners, while OpenAI, through the government contractor Carahsoft, is seeking to establish a closer relationship with the U.S. Department of Defense.

Anthropic head of sales Kate Earle Jensen says that the company's partnership with Palantir and AWS will allow for an "integrated suite of technology" to "operationalize the use of Claude" within Palantir's platform while leveraging AWS' flexibility. "We're proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations," Jensen said. "Access to Claude 3 and Claude 3.5 within Palantir on AWS will equip U.S. defense and intelligence organizations with powerful AI tools that can rapidly process and analyze vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments."

This summer, Anthropic brought select Claude models to AWS' GovCloud, signaling its ambition to expand its public sector customer base. (GovCloud is AWS' service designed to allow U.S. government agencies and customers to move sensitive workloads into the cloud.) The company has positioned itself as a more safety-conscious vendor than OpenAI, but its terms of service allow Claude to be used for tasks like "legally authorized foreign intelligence analysis," "identifying covert influence or sabotage campaigns," and "providing warning in advance of potential military activities."
[9]
Anthropic, Palantir, and AWS Partner to Bring Claude AI Models to US Defense Operations
Partnership enables rapid data processing, enhanced insights, and streamlined document handling.

AI start-up Anthropic on Thursday announced its partnership with data analytics firm Palantir Technologies and Amazon Web Services (AWS) to provide US intelligence and defense agencies with access to its Claude family of AI models (Claude 3 and 3.5). This collaboration enables the operational integration of Claude within Palantir's AI Platform (AIP), leveraging AWS's capabilities.

According to the official release, the partnership promotes the responsible application of AI within Palantir's products, supporting government operations like rapidly processing complex data, enhancing data-driven insights, identifying patterns and trends, streamlining document review and preparation, and assisting US officials in making more informed decisions in time-sensitive situations, all while preserving their decision-making authority.

Claude was made available on Palantir's platform earlier this month and is now accessible within Palantir's defense-accredited environment, Palantir Impact Level 6 (IL6), supported by AWS. Palantir and AWS are among a select group of companies to receive the Defense Information Systems Agency (DISA) IL6 accreditation, the official release said.

"Our partnership with Anthropic and AWS provides US defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions," said Shyam Sankar, Chief Technology Officer of Palantir.

"Palantir is proud to be the first industry partner to bring Claude models to classified environments. We've already seen firsthand the impact of these models with AIP in the commercial sector: for example, one leading American insurer automated a significant portion of their underwriting process with 78 AI agents powered by AIP and Claude, transforming a process that once took two weeks into one that could be done in three hours. We are now providing this same asymmetric AI advantage to the U.S. government and its allies," Sankar added.

"Access to Claude 3 and Claude 3.5 within Palantir AIP on AWS will equip US defense and intelligence organizations with powerful AI tools that can rapidly process and analyze vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource-intensive tasks and boost operational efficiency across departments," said Kate Earle Jensen, Head of Sales and Partnerships at Anthropic.

"We are excited to partner with Anthropic and Palantir and offer new generative AI capabilities that will drive innovation across the public sector," said Dave Levy, VP of Worldwide Public Sector at AWS.

Recently, Meta also announced the availability of its open-source Llama models for defense partners and government use, as reported by TelecomTalk.
[10]
Anthropic, Palantir follow Meta's lead taking AI to war
AI firm Anthropic has become the latest company to give the United States government access to its AI models for national security purposes, following a similar move by Meta earlier this week.

US defense departments will be granted access to Anthropic's Claude 3 and 3.5 AI models, which will be integrated into Palantir's AI Platform and secured on Amazon Web Services, Palantir said in a Nov. 7 statement. "Our partnership with Anthropic and AWS provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions," Palantir's chief technology officer Shyam Sankar explained.

It will allow the US government to process vast amounts of data more rapidly, generate data-driven intelligence insights and allow officials to make more informed decisions in time-sensitive situations, Anthropic head of sales and partnerships Kate Earle Jensen said.

Claude became available on Palantir's AI platform earlier this month and can now be used in Palantir's defense-accredited environment, dubbed Palantir Impact Level 6 (IL6). IL6 is reserved for data systems containing "secrets" deemed critical to national security interests. It requires "maximum protection" against unauthorized access and tampering.

Anthropic and Palantir's partnership follows a similar announcement by Meta on Nov. 4, which opened up its Llama AI models to the US military and defense contractors. Meta said Llama would aim to streamline the US military's complicated logistics and planning, track terrorist financing and strengthen America's cyber defenses. Palantir, a software firm that largely provides data services for defense purposes, is also involved in Meta's plan. Amazon, Microsoft, IBM, Oracle, Lockheed Martin, Accenture and Deloitte are among the several firms supporting Meta's Llama offering to the US military.

Meanwhile, ChatGPT creator OpenAI is also reportedly seeking to establish a closer relationship with US defense departments. National security is one of the main issues US President-elect Donald Trump has promised to improve when he returns to office in January 2025.
[11]
The AI Startup Anthropic, Which Is Always Talking About How Ethical It Is, Just Partnered With Palantir
Anthropic, the AI company that touts itself as the safety-prioritizing alternative to other AI firms like OpenAI, from which it has poached many executives, has partnered with shadowy defense contractor Palantir. The AI company is also teaming up with Amazon Web Services to bring its AI chatbot Claude to US intelligence and defense agencies, an alliance that feels at odds with Anthropic's claim of putting "safety at the frontier."

According to a press release, the partnership supports the US military-industrial complex by "processing vast amounts of complex data rapidly, elevating data-driven insights, identifying patterns and trends more effectively, streamlining document review and preparation, and helping US officials to make more informed decisions in time-sensitive situations." The situation is especially peculiar considering AI chatbots have long garnered a reputation for their tendency to leak sensitive information and "hallucinate" facts.

"Palantir is proud to be the first industry partner to bring Claude models to classified environments," said Palantir CTO Shyam Sankar in a statement. "This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource-intensive tasks and boost operational efficiency across departments," Anthropic head of sales Kate Earle Jensen added.

Anthropic does technically allow its AI tools to be used for "identifying covert influence or sabotage campaigns" or "providing warning in advance of potential military activities," according to its recently expanded terms of service. Since June, the terms of service conveniently carve out contractual exceptions for military and intelligence use, as TechCrunch points out.

The latest partnership gives Claude access to information that falls under the "secret" Palantir Impact Level 6 (IL6) designation, one step below "top secret" in the Defense Department, per TechCrunch. Anything deemed IL6 can contain data critical to national security. In other words, Anthropic and Palantir may not have handed the AI chatbot the nuclear codes, but it will now have access to some spicy intel.

It also lands Anthropic in ethically murky company. Case in point: Palantir scored a $480 million contract from the US Army earlier this year to build out an AI-powered target identification system called Maven Smart System. The overarching Project Maven has previously proven incredibly controversial in the tech sector. How a hallucinating AI chatbot fits into all of this remains to be seen.

Is Anthropic simply following the money as it prepares to raise enough funds to secure a rumored $40 billion valuation? It's a disconcerting partnership that deepens the AI industry's growing ties with the US military-industrial complex, a worrying trend that should raise all kinds of alarm bells given the tech's many inherent flaws, and even more so when lives could be at stake.
[12]
Anthropic, Amazon, and Palantir Team Up to Bring AI to the Defense Department
Artificial intelligence is getting a security clearance. On Thursday, industry giants Amazon, Palantir, and Anthropic announced they would partner to allow the feds to harness the power of AI. Specifically, the partnership lets employees at intelligence agencies and the Department of Defense use San Francisco-based Anthropic's generative AI models Claude 3 and 3.5 within Palantir's AI Platform (AIP). The systems will run on Amazon Web Services and will incorporate information classified up to the "secret" level.

According to a press release, using Claude in Palantir's AIP will help government workers with "processing vast amounts of complex data rapidly, elevating data-driven insights, identifying patterns and trends more effectively, streamlining document review and preparation, and helping U.S. officials to make more informed decisions in time-sensitive situations while preserving their decision-making authorities."

The U.S. government is often criticized for being slow to adopt new technologies. And in the case of AI, not everyone is comfortable harnessing the technology for military ends. In 2018, many Google employees protested the company's partnership with the Department of Defense on an initiative known as Project Maven to use AI to analyze drone footage.
[13]
Claude enlists to help US defense, intelligence AI efforts
An emotionally manipulable AI in the hands of the Pentagon and CIA? This'll surely end well.

Palantir has announced a partnership with Anthropic and Amazon Web Services to build a cloudy Claude platform suitable for the most secure of the US government's defense and intelligence use cases. In an announcement today, the three firms said the partnership would integrate Claude 3 and 3.5 with Palantir's Artificial Intelligence Platform, hosted on AWS. Both Palantir and AWS have been awarded Impact Level 6 (IL6) certification by the Department of Defense, which allows the processing and storage of classified data up to the Secret level. Claude was first made available to the defense and intelligence communities in early October, an Anthropic spokesperson told The Register.

The US government will be using Claude to reduce data processing times, identify patterns and trends, streamline document reviews, and help officials "make more informed decisions in time-sensitive situations while preserving their decision-making authorities," the press release noted.

"Palantir is proud to be the first industry partner to bring Claude models to classified environments," said Palantir's CTO, Shyam Sankar. "Our partnership with Anthropic and AWS provides US defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions."

Unlike Meta, which announced yesterday that it was opening Llama to the US government for defense and national security applications, Anthropic doesn't even need to make an exception to its acceptable use policy (AUP) to allow for potentially dangerous applications of Claude in the hands of the DoD, CIA, or any other defense or intelligence branch using it. Meta's policy specifically prohibits the use of Llama for military, warfare, espionage, and other critical applications, for which Meta has granted some exceptions for the Feds. No such restrictions are included in Anthropic's AUP. Even Anthropic's high-risk use cases, which it defines as uses of Claude that "pose an elevated risk of harm," leave defense and intelligence applications out, mentioning only legal, healthcare, insurance, finance, employment, housing, academia, and media usage of Claude as "domains that are vital to public welfare and social equity."

When asked about its AUP and how it might pertain to government applications, particularly the defense and intelligence uses indicated in today's announcement, Anthropic only referred us to a blog post from June about the company's plans to expand government access to Claude. "Anthropic's mission is to build reliable, interpretable, steerable AI systems," the blog stated. "We're eager to make these tools available through expanded offerings to government users." Anthropic's post mentions that it has already established a method of granting acceptable use policy exceptions for government users, noting that those allowances "are carefully calibrated to enable beneficial use by carefully selected government agencies." What those exceptions are is unclear; Anthropic didn't directly answer questions to that end, and the AUP leaves a lot of unanswered questions around the defense and intelligence use of Claude.

The existing carve-out structure, Anthropic noted, "allow[s] Claude to be used for legally authorized foreign intelligence analysis ...
and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them," Anthropic said. "All other restrictions in our general Usage Policy, including those concerning disinformation campaigns, the design or use of weapons, censorship, and malicious cyber operations, remain." We'll just have to hope no one decides to emotionally blackmail Claude into violating whichever of Anthropic's rules the US government still has to follow. ®
[14]
Palantir Adds an AI Company to Its Arsenal for Military and Spy Work
Palantir and Anthropic have signed a deal to bring Claude to the Pentagon.

Further entrenching its position as spooks' and soldiers' go-to supplier for artificial intelligence, Palantir on Thursday announced that it will be adding Anthropic's Claude models to the suite of tools it provides to U.S. intelligence and military agencies.

Palantir, the Peter Thiel-founded tech company named after a troublesome crystal ball, has been busy scooping up contracts with the Pentagon and striking deals with other AI developers to host their products on Palantir cloud environments that are certified to handle classified information. Its dominance in the military and intelligence AI space, and its association with President-elect Donald Trump, has caused the company's value to soar over the past year. In January, Palantir's stock was trading at around $16 a share. The value had risen to more than $40 a share by the end of October and then received a major bump to around $55 after Trump won the presidential election this week.

In May, the company landed a $480 million deal to work on an AI-powered enemy identification and targeting system prototype called Maven Smart System for the U.S. Army. In August, it announced it would be providing Microsoft's large language models on the Palantir AI Platform to military and intelligence customers. Now Anthropic has joined the party.

"Our partnership with Anthropic and [Amazon Web Services] provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions," Palantir chief technology officer Shyam Sankar said in a statement.

Palantir said that Pentagon agencies will be able to use the Claude 3 and 3.5 models for "processing vast amounts of complex data rapidly," "streamlining document review and preparation," and making "informed decisions in time-sensitive situations while preserving their decision-making authorities." What sorts of time-sensitive decisions those will be, and how closely they will be connected to killing people, is unclear. While all other federal agencies are required to publicly disclose how they use their various AI systems, the Department of Defense and intelligence agencies are exempt from those rules, which President-elect Trump's administration may scrap anyway.

In June, Anthropic announced that it was expanding government agencies' access to its products and would be open to granting some of those agencies exemptions from its general usage policies. Those exemptions would "allow Claude to be used for legally authorized foreign intelligence analysis, such as combating human trafficking, identifying covert influence or sabotage campaigns, and providing warning in advance of potential military activities." However, Anthropic said it wasn't willing to waive rules prohibiting the use of its tools for disinformation campaigns, the design or use of weapons, censorship, or malicious cyber operations.
[15]
Palantir to Provide Anthropic's Claude to U.S. Military
Palantir, which sells software to governments and enterprise firms, announced on Thursday that Anthropic's Claude 3 and 3.5 models became available earlier this month to U.S. defense agencies through its AI platform, which allows customers to automate tasks such as responding to emails. Previously, the Claude models were only an option for Palantir's commercial customers.
Anthropic, Palantir, and AWS collaborate to integrate Claude AI models into US government intelligence and defense operations, raising questions about AI ethics and national security.
Anthropic, Palantir Technologies, and Amazon Web Services (AWS) have announced a significant partnership to provide US intelligence and defense agencies access to Anthropic's Claude 3 and 3.5 AI models [1][2]. This collaboration aims to enhance data processing, analysis, and decision-making capabilities for critical government functions.
The partnership integrates Claude AI models into Palantir's AI Platform (AIP), hosted on AWS's secure cloud infrastructure [3]. Key features of the collaboration include rapid processing and analysis of large volumes of complex data, identification of patterns and trends, streamlined document review and preparation, and support for officials' decision-making in time-sensitive situations while preserving their decision-making authority.
Both Palantir and AWS have received the Defense Information Systems Agency (DISA) IL6 accreditation, ensuring adherence to stringent security protocols [2]. This accreditation allows for the handling of sensitive national security-related data.
Anthropic, known for its emphasis on "AI safety," has faced scrutiny over this partnership [3]. The company has previously urged governments to regulate AI urgently, hired a full-time AI welfare researcher, published a Responsible Scaling Policy modelled on biosafety levels, and developed "Constitutional AI" to align its models with high-level normative principles.
Dario Amodei, Anthropic's CEO, has been transparent about the company's ambitions to support US government interests, saying Claude would be made available "for applications like combating human trafficking, rooting out international corruption, identifying covert influence campaigns, and issuing warnings of potential military activities," and arguing in his essay 'Machines of Loving Grace' that democracies must "have the upper hand on the world stage when powerful AI is created."
This partnership reflects a growing trend of AI companies engaging with defense and intelligence agencies: Meta has opened its open-source Llama models to US defense and national security agencies, OpenAI is reportedly pursuing closer ties with the Department of Defense, and Microsoft has partnered with Palantir to offer AI capabilities to defense-focused federal agencies.
The collaboration has sparked debates about the ethical use of AI in defense and intelligence. Critics such as former Google AI ethics co-lead Timnit Gebru have questioned whether the deal contradicts Anthropic's stated safety commitments, while Anthropic points out that its usage policy continues to prohibit disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations.
As AI continues to integrate into government operations, several key points emerge: the value of AI-related federal contracts is rising sharply, security accreditations such as DISA IL6 are becoming a gating factor for vendors, and open questions remain about oversight, hallucination risks, and how usage-policy exceptions will be applied in classified settings.