Curated by THEOUTPOST
On Fri, 1 Nov, 8:03 AM UTC
36 Sources
[1]
The Chinese Military Is Weaponizing Facebook's Open Source AI
Here's an unintended consequence for you: Reuters reports that Facebook owner Meta's open source Llama model is already being used by the Chinese military. According to the report, the military-focused AI tool dubbed "ChatBIT" is being developed to gather intelligence and provide information for operational decision-making, as laid out in an academic paper obtained by Reuters. Unsurprisingly, allowing a foreign adversary's military to make use of your large language model isn't exactly a good look. In a thinly-veiled attempt to own the narrative, Meta's president of global affairs Nick Clegg published a blog post just three days after Reuters' report, announcing that Meta is working to make Llama "available to US government agencies and contractors working on national security applications." The blog post desperately attempts to tug at the heartstrings of American tech leaders, with Clegg arguing that AI models like Llama will "not only support the prosperity and security of the United States, they will also help establish US open source standards in the global race for AI leadership." But its timing is certainly suspicious, as Gizmodo notes. What else could explain the saccharine chest-thumping appeal to Americans now, when China's People's Liberation Army was making use of Meta's AI before the US government even considered doing the same? As Reuters points out, Meta's blog post also flies in the face of the company's acceptable use policy, which forbids "military, warfare, nuclear industries or applications, espionage." But since the AI is completely open source, these provisions are utterly ineffective and unenforceable, serving largely as a way for Meta to cover its tracks. Clegg argued that by open-sourcing AI models, the US could better compete with other nations "including China," which are "racing to develop their own open source models" and "investing heavily to leap ahead of the US."
"We believe it is in both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere," the former deputy prime minister for the UK wrote. But whether that kind of reasoning will satisfy officials at the Pentagon remains to be seen. Meta's flailing is symptomatic of a massive national security blindspot. Now that the cat is out of the bag, the United States' adversaries are enjoying the exact same leaps in tech as the US and its allies. Last month, the Biden administration announced that it was finalizing rules to limit US investment in Chinese AI that could threaten US national security. But given Meta's fast-and-loose approach, these rules will likely be far too little, far too late. Meta, for its part, thinks its AI is far too puny to make any difference for China anyway. "In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI," a spokesperson told Reuters.
[2]
Meta changes its tune on defense use of its Llama AI
Change of mind follows discovery China was playing with it uninvited? Meta has historically restricted its LLMs from uses that could cause harm - but that has apparently changed. The Facebook giant has announced it will allow the US government to use its Llama model family for, among other things, defense and national security applications. Nick Clegg, Meta's president of global affairs, wrote yesterday that Llama, already available to the public under various conditions, was now available to US government agencies - as well as a number of commercial partners including Anduril, Lockheed Martin, and Palantir. Meta told The Register all of its Llama models have been made available to the US government and its contractors. Llama - which is described by Meta as open source though it really isn't - is already being used by Uncle Sam's partners such as Oracle to improve aircraft maintenance, and by Scale AI "to support specific national security team missions." IBM, through watsonx, is bringing Llama to national security agencies' self-managed datacenters and clouds, according to Clegg. "These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish US open source standards in the global race for AI leadership," Clegg asserted. The new permission for the federal government and its contractors to use Llama for national security purposes conflicts with the model's general-public acceptable use policy, which specifically prohibits use in "military, warfare, nuclear industries or applications, espionage" or "operation of critical infrastructure, transportation technologies, or heavy machinery." Even so, we're told nothing's changing - outside of the deal Clegg announced. "Our Acceptable Use Policy remains in place," a Meta spokesperson told us. 
"However, we are allowing the [US government] and companies that support its work to use Llama, including for national security and other related efforts in compliance with relevant provisions of international humanitarian law." Clegg waxed philosophical throughout his blog post about how the success of Llama's ostensibly open design was fundamental to American economic and national security needs. "In a world where national security is inextricably linked with economic output, innovation and job growth, widespread adoption of American open source AI models serves both economic and security interests," Clegg wrote. "We believe it is in both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere." Clegg went on to argue that open standards for AI will increase transparency and accountability - which is why the US has to get serious about making sure its vision for the future of the tech becomes the world standard. "The goal should be to create a virtuous circle, helping the United States retain its technological edge while spreading access to AI globally and ensuring the resulting innovations are responsible and ethical, and support the strategic and geopolitical interests of the United States and its closest allies," Clegg explained. To that end, Meta told Bloomberg, similar offers for the use of Llama by government entities were extended to the US's "Five Eyes" intelligence partners: Canada, the UK, Australia, and New Zealand. But let's not forget the self-serving aspect of this deal. It was just days ago, during Meta's Q3 earnings call, that Mark Zuckerberg asserted that opening up Llama would benefit his company, too - by ensuring its AI designs become a sort of de facto standard. 
"As Llama gets adopted more, you're seeing folks like Nvidia and AMD optimize their chips more to run Llama specifically well, which clearly benefits us," Zuckerberg told investors listening to the earnings call. "So it benefits everyone who's using Llama, but it makes our products better rather than if we were just on an island building a model that no one was kind of standardizing around in the industry." The announcement is perfectly timed to give Llama a patriotic paint job after news broke last week that researchers in China reportedly had built Llama-based AI models for military applications. Meta maintained that China's use of Llama was unauthorized and contrary to its acceptable use policy. And that's inviolable - except for the US government and its allies, apparently. ®
[3]
Meta is letting the US military use its Llama AI model for 'national security applications'
Researchers find evidence China has already used Llama for defense Meta has announced it is offering the use of its Llama generative AI model to government organizations for 'national security applications', and that it is working with US agencies and contractors to support their work. Amongst those Meta has partnered with are Lockheed Martin, AWS, and Oracle. An example the company has given is its work with Oracle to 'synthesize aircraft maintenance documents' to enable technicians to diagnose problems 'more quickly and accurately'. Lockheed Martin is also said to have incorporated Llama into its AI factory, which Meta says has accelerated code generation, data analysis, and enhanced business processes. This is a significant change from Llama's acceptable use policy, which prohibits the use of models for "military, warfare, nuclear industries or applications, espionage", and it specifically prohibits weapon development and promoting violence. The use of AI in defense is challenged by some, who cite security concerns like potentially compromisable data. Other vulnerabilities, like bias and hallucinations, are intrinsic to AI and cannot be avoided, experts have warned. The catalyst for this drastic change in policy could be the recent reports that China has used the model in its own military applications. Llama was reportedly used by the state to gather and process intelligence, creating 'ChatBIT' for military dialogue and question answering. This was, of course, against Llama's terms of use, but since the model is public and open source, the policy is difficult to enforce. "In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI," Meta said in a statement. 
Meta has confirmed it will also be making exceptions for government agencies in the other Five Eyes countries of Canada, Australia, the UK, and New Zealand.
[4]
Meta to let US national security agencies and defense contractors use Llama AI
Company typically prohibits its use for 'military, warfare, nuclear industries or applications, [and] espionage' Meta announced Monday that it would allow US national security agencies and defense contractors to use its open-source artificial intelligence model, Llama. The announcement came days after Reuters reported an older version of Llama had been used by researchers to develop defense applications for the military wing of the Chinese government. Meta's policies typically prohibit the use of its open-source large language model for "military, warfare, nuclear industries or applications, [and] espionage". The company is making an exception for US agencies and contractors as well as similar national security agencies in the UK, Canada, Australia and New Zealand, according to Bloomberg. "These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish US open-source standards in the global race for AI leadership," Nick Clegg, Meta's president of global affairs, wrote in a blog post. Among the government contractors Meta is opening up Llama to are Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake. The company emphasized the need to develop AI that is more advanced than that of China - a key talking point that many members of the US Congress bring up when discussing whether and how to regulate AI. "In a world where national security is inextricably linked with economic output, innovation and job growth, widespread adoption of American open-source AI models serves both economic and security interests," Clegg wrote. "Other nations - including China and other competitors of the United States - understand this as well, and are racing to develop their own open-source models, investing heavily to leap ahead of the US." 
Two Chinese researchers associated with the People's Liberation Army (PLA) were able to access and use an older version of Llama to develop a chatbot that helped to gather and process military intelligence, according to Reuters. The researchers' use of Llama was "unauthorized", according to a statement Meta provided Reuters. US regulators have repeatedly expressed their desire to beat other countries, namely China, to developing the most advanced AI for national security reasons. Last week, the White House published its first memo on how the federal government plans to address AI national security policy. Among the priorities the White House listed was the need to "harness AI to achieve national security objectives" and accelerate the procurement of AI capabilities from the private sector. "Advances at the frontier of AI will have significant implications for national security and foreign policy in the near future," the memo reads. The tech industry has long supplied AI technology to US and international defense and national security agencies. In 2018, Google workers successfully opposed the company's participation in a Pentagon project, called Project Maven, that used AI to better decipher drone videos. Tech workers have protested these defense contracts with more fervor in the last year, particularly as many questioned their employers' work with the Israeli government. However, with government demand for AI models sky-rocketing, tech firms are likely to be more motivated than ever to bid for these national security contracts.
[5]
Meta gives US government its powerful AI after China took it and weaponized it
TL;DR: Meta has responded to reports of Chinese institutions using its Llama AI model for military purposes by granting access to US government agencies for defense applications. Despite Llama being open-source, Meta prohibits its use for military activities. Meta has seemingly responded to recent reports that top Chinese institutions linked to China's government have used Meta's publicly available Llama model for military purposes by granting US government agencies access for defense applications. The announcement from Meta came after a report from Reuters claimed six researchers from three Chinese institutions, including two under the People's Liberation Army's (PLA) leading research body, used an early version of Meta's powerful AI model called Llama. The report claimed Meta's AI model was used by the researchers as a base for what is called "ChatBIT," and that this AI model was "optimised for dialogue and question-answering tasks in the military field," according to a paper reviewed by Reuters. Notably, Meta's Llama model is open-source, meaning it is publicly available. However, Meta prohibits the use of any of its Llama models for military purposes, and, under its own guidelines, lists the following prohibited use cases for its AI models: "military, warfare, nuclear industries or applications, espionage". These guidelines fall in line with the push from the US government not to fall behind in the race to develop the most sophisticated AI model, as providing adversarial countries with the tools to develop more sophisticated systems would jeopardize the US's substantial lead in the space. Now, Meta has announced that it is giving US government agencies and contractors working on national security applications access to its Llama models.
Meta wrote in its announcement that it is partnering with companies such as Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama models to government agencies.
[6]
After China, Meta Just Hands Llama to the US Government to 'Strengthen' Security
Meta's move to help government agencies leverage its open-source AI models comes after China's rumoured adoption of Llama for military use. Meta is now making Llama available for US government agencies, defence projects and private-sector partners working on national security. It is also extending its partnerships with companies like Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to help government agencies adopt Llama. "These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish U.S. open source standards in the global race for AI leadership," said Nick Clegg, Meta's president of global affairs, in a blog post published by Meta. The blog post also highlighted how Meta's partners are aiding the adoption of Llama. For example, AWS and Azure are supporting governments by hosting Llama models on their secure cloud services. Lockheed Martin has already integrated Llama into its AI Factory, leveraging its capabilities for code generation and data analysis. "Large language models can support many aspects of America's safety and national security. They can help to streamline complicated logistics and planning, track terrorist financing or strengthen our cyber defences," added Clegg. The announcement comes after reports that China was rumoured to be using Llama for its military applications. Researchers linked to the People's Liberation Army are said to have built ChatBIT, an AI conversation tool fine-tuned to answer military-related questions. The news quickly stirred fears in the AI community, with torchbearers like Vinod Khosla going so far as to criticise Meta's open-source approach.
While Khosla's vested interest in OpenAI gives him every reason to call out Meta, Yann LeCun, Meta's chief AI scientist, did not hold back in response. He said, "There is a lot of very good published AI research coming out of China. In fact, Chinese scientists and engineers are very much on top of things (particularly in computer vision, but also in LLMs). They don't really need our open-source LLMs." It's fair to agree with what LeCun is saying. China has already made notable progress in generative AI, and it has even been claimed that Kai-Fu Lee's foundational model ranks better than GPT-4o on certain benchmarks. China's indigenous approach to technological advancement is unlikely to change in the AI sector either. Clegg also added that "Widespread adoption of American open-source AI models serves both economic and security interests. Other nations -- including China and other competitors of the United States -- understand this as well and are racing to develop their own open-source models, investing heavily to leap ahead of the U.S." Earlier this year, the US Army announced that it is investing $50 million in 'small and nontraditional businesses' to develop AI and ML solutions. Recently, the US Army also launched a generative AI platform called Ask Sage, which assists personnel in several aspects of software development.
[7]
Meta permits its AI models to be used for US military purposes
Meta will allow US government agencies and contractors working on national security to use its artificial intelligence models for military purposes, the company said Monday, in a shift from its policy that prohibited the use of its technology for such efforts. Meta said that it would make its AI models, called Llama, available to federal agencies and that it was working with defense contractors such as Lockheed Martin and Booz Allen as well as defense-focused tech companies including Palantir and Anduril. The Llama models are "open source," which means the technology can be freely copied and distributed by other developers, companies and governments. Meta's move is an exception to its "acceptable use policy," which forbade the use of the company's AI software for "military, warfare, nuclear industries," among other purposes. In a blog post Monday, Nick Clegg, Meta's president of global affairs, said the company now backed "responsible and ethical uses" of the technology that supported the United States and "democratic values" in a global race for AI supremacy. "Meta wants to play its part to support the safety, security and economic prosperity of America -- and of its closest allies, too," Clegg wrote. He added that "widespread adoption of American open source AI models serves both economic and security interests." A Meta spokesperson said the company would share its technology with members of the Five Eyes intelligence alliance: Canada, Britain, Australia and New Zealand, in addition to the United States. Bloomberg earlier reported that Meta's technology would be shared with the Five Eyes countries.
Meta, which owns Facebook, Instagram and WhatsApp, has been working to spread its AI software to as many third-party developers as possible, as rivals like OpenAI, Microsoft, Google and Anthropic vie to lead the AI race. Meta, which had lagged some of those companies in AI, decided to open-source its code to catch up. As of August, the company's software has been downloaded more than 350 million times. Meta is likely to face scrutiny for its move. Military applications of Silicon Valley tech products have proved contentious in recent years, with employees at Microsoft, Google and Amazon vocally protesting some of the deals that their companies reached with military contractors and defense agencies. In addition, Meta has come under scrutiny for its open-source approach to AI. While OpenAI and Google argue that the tech behind their AI software is too powerful and susceptible to misuse to release into the wild, Meta has said AI can be improved and made safer only by allowing millions of people to look at the code and examine it. Meta's executives have been concerned that the U.S. government and others may harshly regulate open-source AI, two people with knowledge of the company said. Those fears were heightened last week after Reuters reported that research institutions with ties to the Chinese government had used Llama to build software applications for the People's Liberation Army. Meta executives took issue with the report, and told Reuters that the Chinese government was not authorized to use Llama for military purposes. In his blog post Monday, Clegg said the U.S. government could use the technology to track terrorist activities and improve cybersecurity across American institutions. He also repeatedly said that using Meta's AI models would help the United States remain a technological step ahead of other nations. 
"The goal should be to create a virtuous circle, helping the United States retain its technological edge while spreading access to AI globally and ensuring the resulting innovations are responsible and ethical, and support the strategic and geopolitical interests of the United States and its closest allies," he said.
[8]
Open Source Bites Back as China’s Military Makes Full Use of Meta AI
China's People's Liberation Army is using Llama 13B for military applications. That's against the acceptable use policy, but there's no way to put the AI back in the bottle. Chinese research institutions with connections to the Chinese military have developed AI systems using Meta's open-source Llama model. Papers discussing the AI model are unequivocal: the bots have military applications. According to a report from Reuters, six Chinese researchers from three different institutions with connections to the People's Liberation Army (PLA) released a paper about the AI in June. The researchers scooped up Llama 13B, an early version of Meta's open-source large language model, and trained it on military data with the goal of making a tool that could gather and process intelligence and help make decisions. They called it ChatBIT. "In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also ... strategic planning, simulation training, and command decision-making will be explored," the paper said, according to a Reuters translation. ChatBIT was trained using 100,000 military dialogue records. Another paper from the same period described how a Llama-based LLM has already been deployed for domestic policing. Like ChatBIT, the domestic version helps police gather and process large amounts of data to aid decision-making. In a third paper that Reuters uncovered, two researchers at an aviation firm connected to the PLA are using Llama 2 for war. The bot is for "the training of airborne electronic warfare interference strategies," Reuters said. We're living through an AI gold rush. Companies like OpenAI and Microsoft are trying to make millions of dollars from proprietary AI systems that are promising the moon. Many of those systems are closed, a black box in which the inputs and training data are poorly understood by the end-user. Mark Zuckerberg took Meta a different way.
In a July essay that invoked open-source gold standard systems like Unix and Linux, Zuckerberg decreed that "open-source AI is the path forward." "There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives," the essay said. "I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer." At the time, Zuckerberg also waved off fears that China would get its hands on Llama. He argued the benefits outweighed the risks. "Some people argue that we must close our models to prevent China from gaining access to them, but my view is that this will not work and will only disadvantage the U.S. and its allies," he said. "Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geo-political adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities." Meta does outline acceptable practices for using its open-source LLMs. The list includes prohibitions against "military, warfare, nuclear industries or applications, espionage, use for materials or activities" and "any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual." Those are, of course, the kinds of things a military does. Its whole job is inflicting bodily harm on individuals. "Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy," Meta's director of public policy, Molly Montgomery, told Reuters. But there is no recourse here for Meta. Llama is out there. China is using it. Zuckerberg's company has no way to stop it. The open-source revolution continues.
[9]
Meta Provides U.S. Defence Agencies Access to Llama AI Models
Meta announced that it will provide its open-source Llama AI models to U.S. defence and national security agencies. This move integrates Meta's AI models into military settings through partnerships with top defence and technology companies. Meta states that competitors like China recognize the importance of AI in military strategy and global influence. "Other nations - including China and other competitors of the United States - understand this as well, and are racing to develop their own open source models, investing heavily to leap ahead of the U.S. We believe it is in both America and the wider democratic world's interest for American open-source models to excel and succeed over models from China and elsewhere," the company stated. Meta essentially views this as a step essential for establishing U.S. dominance in open-source AI standards, drawing parallels with how Linux and Android became global benchmarks. This further highlights the role of artificial intelligence in defence strategies and military modernisation. Meta has formed partnerships with defence contractors like Lockheed Martin, Palantir, and Anduril, as well as consulting and tech firms like Deloitte, Accenture Federal Services, Amazon Web Services, Microsoft, IBM, Oracle, Booz Allen, and Databricks. These collaborations will facilitate Llama's deployment in tasks that range from operational planning and aircraft maintenance to public services. Current implementations include Oracle's development of systems to streamline aircraft maintenance, which aims to reduce military repair times. Scale AI, another partner, is fine-tuning Llama for national security missions such as operational planning and threat analysis. Lockheed Martin has integrated Llama into its AI Factory, where it is applied across tasks like data analysis and code generation for defence purposes.
Earlier this week, research institutions linked to China's People's Liberation Army (PLA) were reported to have used Meta's Llama AI model to develop "ChatBIT," an AI tool intended for military applications. Meta's usage policies forbid using Llama for military or espionage purposes, in line with the U.S. International Traffic in Arms Regulations (ITAR), but Meta has limited control over its open-source models. Meta's public policy director, Molly Montgomery, called the PLA's use unauthorised and argued that, despite this incident, China's broader trillion-dollar AI investments pose a greater challenge to U.S. leadership in AI, Reuters reported. The U.S. has taken steps to limit China's access to American AI. President Joe Biden's recent executive order restricts access to U.S. personal data from "countries of concern". Moreover, the U.S. military has collaborated with AI developers like OpenAI and Microsoft to develop military-specific AI applications such as wargame simulation videos and cybersecurity tools.
[10]
Chinese Researchers Make Military AI Using Meta's Llama
China has used a publicly available Llama AI model to make its own AI tool that may be used by China's military, according to three research papers reviewed by Reuters. Two institutions tied to the Chinese military were involved in the research to develop "ChatBIT," which is based on Llama 13B. They used the AI to gather and process military intelligence data. In the future, it may be used by the military for training or analysis purposes. ChatBIT may have only been trained on about 100,000 military records, however, which is a very small dataset for an AI model. This means it may not be as capable as the research suggests, Meta VP of AI Research and McGill University Professor Joelle Pineau told Reuters. Meta's rules bar the use of Llama models for "military, warfare, nuclear industries or applications" or espionage, but these restrictions are difficult to enforce once the model is re-shared and used outside of the US. Meta has claimed that it has taken steps to prevent misuse of its models, and said in a statement that "any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy." Meta released its 13B model back in February last year, but said at the time that it would only be available to researchers. It's unclear whether the Chinese researchers were granted access to the model directly, or whether they obtained it through other means. Meta suggested that the Llama model in question is irrelevant and "outdated" given the state of China's AI research, because the country is developing much more advanced models that could even surpass current US-developed ones. Unfortunately, AI tools and models have already been misused for a range of purposes. Political deepfakes are currently the top malicious use of AI. AI image and video generators have been used for political misinformation campaigns. AI-powered audio has been used to try to dissuade US voters from going to the polls.
And AI-powered bots have been deployed by Russia, Israel, and Iran across social media to influence elections or sway public opinion on global policy. China and the US have an ongoing tech rivalry, as well. The countries have sanctioned each other's tech, from chips to drones, and China is developing its own advanced AI chips and even a Neuralink competitor. The US has poured billions into its domestic semiconductor manufacturing industries since 2022 and is trying to stop China from getting access to the world's most advanced chips and AI tech. But Chinese firms and institutions have found plenty of loopholes to get advanced AI chips for years. Some policy experts have argued that open-source AI is important for open innovation and equality, and that it's not less safe than closed models. But open-source or accessible AI means that anyone can ultimately use it -- including China.
[11]
Meta opens Llama AI model up to US military
Social media and tech firm Meta has just opened up its artificial intelligence model Llama to the United States military and defense contractors for national security purposes. Llama will be used to streamline complicated logistics and planning, track terrorist financing, and strengthen America's cyber defenses, Meta's president of global affairs Nick Clegg wrote in a Nov. 4 statement. The firm will be partnering with Microsoft, Amazon, IBM, Oracle, Palantir and other tech heavyweights to offer full-scale services to the US government. Mark Zuckerberg's firm stressed the importance of the US and its allies continuing to champion open-source technologies to maintain its "technological edge" over China and other competitors. "Open source systems have been critical to helping the United States build the most technologically advanced military in the world and, in partnership with its allies, develop global standards for new technology," Clegg wrote. He noted that open-source systems have helped accelerate defense research and high-end computing, identify security vulnerabilities and improve communication. "[It] benefits the public sector by enabling discoveries and breakthroughs, driving efficiency and improving delivery of public services." The US private sector would benefit massively, too, as national security is "inextricably linked" with economic output, Clegg said. "Other nations - including China and other competitors of the United States - understand this as well, and are racing to develop their own open source models, investing heavily to leap ahead of the US." The announcement came just days after Reuters reported that Chinese research institutions linked to the People's Liberation Army had used an early version of Meta's Llama to build AI military tools to gather and process intelligence, citing a report it obtained. In response, a Meta executive said the People's Liberation Army's apparent use of Llama is "unauthorized" and runs contrary to Meta's acceptable use policy.
Under the new multi-company partnership, Oracle will build on Llama to synthesize aircraft maintenance documents so technicians can more efficiently diagnose problems -- speeding up repair time to put aircraft back in service. Amazon Web Services and Microsoft Azure will host Llama on their cloud solutions to secure sensitive data. Aerospace firm Lockheed Martin has incorporated Llama into its "AI Factory" to process and conduct data analyses, while Scale AI is "fine-tuning" Llama to support specific national defense missions, such as planning operations and identifying adversary vulnerabilities. Accenture, Anduril, Booz Allen, Databricks, Deloitte, Leidos and Snowflake are also involved.
[12]
Exclusive-Chinese Researchers Develop AI Model for Military Use on Back of Meta's Llama
(Reuters) - Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts. In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what they call "ChatBIT". The researchers used the Llama 2 13B large language model (LLM) that Meta released in February 2023, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making. ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4. The researchers didn't elaborate on how they defined performance or specify whether the AI model had been put into service. "It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," said Sunny Cheung, associate fellow at the Jamestown Foundation who specialises in China's emerging and dual use technologies including AI. Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company. Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to U.S.
defence export controls, as well as for the development of weapons and content intended to "incite and promote violence". However, because Meta's models are public, the company has limited ways of enforcing those provisions. In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse. "Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy," Molly Montgomery, Meta's director of public policy, told Reuters in a phone interview. The Chinese researchers include Geng Guotong and Li Weiwei with the AMS's Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from the Beijing Institute of Technology and Minzu University. "In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also ... strategic planning, simulation training and command decision-making will be explored," the paper said. China's Defence Ministry didn't reply to a request for comment, nor did any of the institutions or researchers. Reuters could not confirm ChatBIT's capabilities and computing power, though the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs. "That's a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so ... it really makes me question what do they actually achieve here in terms of different capabilities," said Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada. The research comes amid a heated debate in U.S. national security and technology circles about whether firms such as Meta should make their models publicly available. U.S. 
President Joe Biden in October 2023 signed an executive order seeking to manage AI developments, noting that although there can be "substantial benefits to innovation," there were also "substantial security risks, such as the removal of safeguards within the model". This week, Washington said it was finalising rules to curb U.S. investment in artificial intelligence and other technology sectors in China that could threaten national security. Pentagon spokesman John Supple said the Department of Defense recognised that open-source models had both benefits and drawbacks, and that "we will continue to closely monitor and assess competitors' capabilities". 'COOKIE JAR' Some observers say China's strides in developing indigenous AI, including setting up scores of research labs, have already made it difficult to keep the country from narrowing the technology gap with the United States. In a separate academic paper reviewed by Reuters, two researchers with the Aviation Industry Corporation of China (AVIC) - which the United States has designated a firm with ties to the PLA - described using Llama 2 for "the training of airborne electronic warfare interference strategies". China's use of Western-developed AI has also extended into domestic security. A June paper described how Llama had been used for "intelligence policing" to process large amounts of data and enhance police decision-making. The state-run PLA Daily published commentary in April on how AI could help "accelerate the research and development of weapons and equipment", help develop combat simulation and improve military training efficiency. "Can you keep them (China) out of the cookie jar? No, I don't see how you can," William Hannas, lead analyst at Georgetown University's Center for Security and Emerging Technology (CSET), told Reuters.
A 2023 paper by CSET found 370 Chinese institutions whose researchers had published papers related to General Artificial Intelligence - helping drive China's national strategy to lead the world in AI by 2030. "There is too much collaboration going on between China's best scientists and the U.S.' best AI scientists for them to be excluded from developments," Hannas added. (Additional reporting by Katie Paul in New York; Phil Stewart in Washington, Eduardo Baptista in Beijing and Greg Torode in Hong Kong; Editing by Gerry Doyle)
[13]
Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes
Meta will allow U.S. government agencies and contractors working on national security to use its artificial intelligence models for military purposes, the company said on Monday, in a shift from its policy that prohibited the use of its technology for such efforts. Meta said that it would make its A.I. models, called Llama, available to federal agencies and that it was working with defense contractors such as Lockheed Martin and Booz Allen as well as defense-focused tech companies including Palantir and Anduril. The Llama models are "open source," which means the technology can be freely copied and distributed by other developers, companies and governments. Meta's move is an exception to its "acceptable use policy," which forbade the use of the company's A.I. software for "military, warfare, nuclear industries," among other purposes. In a blog post on Monday, Nick Clegg, Meta's president of global affairs, said the company now backed "responsible and ethical uses" of the technology that supported the United States and "democratic values" in a global race for A.I. supremacy. "Meta wants to play its part to support the safety, security and economic prosperity of America -- and of its closest allies too," Mr. Clegg wrote. He added that "widespread adoption of American open source A.I. models serves both economic and security interests." A Meta spokesman said the company would share its technology with members of the Five Eyes intelligence alliance: Canada, Britain, Australia and New Zealand in addition to the United States. Bloomberg earlier reported that Meta's technology would be shared with the Five Eyes countries. Meta, which owns Facebook, Instagram and WhatsApp, has been working to spread its A.I. software to as many third-party developers as possible, as rivals like OpenAI, Microsoft, Google and Anthropic vie to lead the A.I. race. Meta, which had lagged some of those companies in A.I., decided to open source its code to catch up. 
As of August, the company's software has been downloaded more than 350 million times. Meta is likely to face scrutiny for its move. Military applications of Silicon Valley tech products have proved contentious in recent years, with employees at Microsoft, Google and Amazon vocally protesting some of the deals that their companies reached with military contractors and defense agencies. In addition, Meta has come under scrutiny for its open-source approach to A.I. While OpenAI and Google argue that the tech behind their A.I. software is too powerful and susceptible to misuse to release into the wild, Meta has said A.I. can be improved and made safer only by allowing millions of people to look at the code and examine it. Meta's executives have been concerned that the U.S. government and others may harshly regulate open-source A.I., two people with knowledge of the company said. Those fears were heightened last week after Reuters reported that research institutions with ties to the Chinese government had used Llama to build software applications for the People's Liberation Army. Meta executives took issue with the report, and told Reuters that the Chinese government was not authorized to use Llama for military purposes. In his blog post on Monday, Mr. Clegg said the U.S. government could use the technology to track terrorist activities and improve cybersecurity across American institutions. He also repeatedly said that using Meta's A.I. models would help the United States remain a technological step ahead of other nations. "The goal should be to create a virtuous circle, helping the United States retain its technological edge while spreading access to A.I. globally and ensuring the resulting innovations are responsible and ethical, and support the strategic and geopolitical interests of the United States and its closest allies," he said.
[14]
Meta Expands AI Access to US Defense Agencies to Strengthen National Security - Decrypt
Social media and technology giant Meta (née Facebook) announced it is providing its open-source Llama AI models to U.S. defense agencies and contractors, to support national security and strengthen America's position in the AI race. Llama will be available to U.S. government agencies focused on defense and national security applications, and private partners supporting their missions, wrote Nick Clegg, Meta's president of global affairs, in a blog post. By opening Llama to the public sector, Meta aims to contribute to U.S. technological leadership while ensuring ethical standards in artificial intelligence. "This is about supporting the safety, security, and economic prosperity of America -- and of its closest allies too," said Clegg on Monday. Launched by Meta AI in February 2023, Llama is a series of large language models (LLMs) designed to understand and generate human-like text. These models can process vast amounts of information, making them valuable for tasks such as data analysis, language translation, and content generation. Meta's decision to provide Llama to the public sector arrives amid growing competition with countries like China, where researchers linked to the People's Liberation Army (PLA) were recently reported by Reuters to have adapted Meta's previous Llama 2 model for defense purposes. Clegg pointed out Meta's alignment with U.S. interests, noting: "We believe it is in both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere." As part of Meta's mission to collaborate with U.S. defense agencies, companies such as Oracle, Lockheed Martin, and Amazon Web Services are working with Llama to enhance processes across logistics, cybersecurity, and operational planning. Oracle is using the AI model to streamline aircraft maintenance, enabling technicians to diagnose and address issues more quickly, as per the blog post.
Lockheed Martin has reportedly integrated Llama into its AI Factory, boosting data analysis and code generation capabilities for defense applications. Similarly, Scale AI is fine-tuning the model for national security missions focused on adversarial assessment. Meta's initiative extends Llama's capabilities to public sector applications, with partners like Deloitte deploying the model to assist government agencies and nonprofits across areas such as education, energy, and small business. The tech giant believes these responsible and ethical uses of open-source AI models like Llama will not only "support the prosperity and security of the United States but will also help establish U.S. open-source standards in the global race for AI leadership." This rollout follows Meta's recent launch of Llama 3.2, an upgraded model featuring text and image processing capabilities, with smaller versions optimized for mobile devices. Meta partnered with Qualcomm and MediaTek to ensure Llama 3.2's compatibility with mobile chips, expanding the model's accessibility.
[15]
Meta says it's making its Llama models available for US national security applications | TechCrunch
In an effort to combat the perception that its "open" AI is aiding foreign adversaries, Meta today said that it's making its Llama series of AI models available to U.S. government agencies and contractors working on national security applications. "We are pleased to confirm that we're making Llama available to U.S. government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," Meta wrote in a blog post. "We're partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies." Last week, Reuters reported that Chinese research scientists linked to the People's Liberation Army (PLA), the military wing of China's ruling party, used an older Llama model, Llama 2, to develop a tool for defense applications. Chinese researchers including two affiliated with a PLA R&D group created a military-focused chatbot designed to gather and process intelligence as well as offer information for operational decision-making. Meta told Reuters in a statement that the use of the "single, and outdated" Llama model was "unauthorized" and contrary to its acceptable use policy. But the report added much fuel to the debate over the merits and risks of open AI.
[16]
Chinese researchers develop AI model for military use on the back of Meta's Llama, Reuters reports
Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts. In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what they call "ChatBIT". The researchers used the Llama 2 13B large language model (LLM) that Meta released in February 2023, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making. ChatBIT was fine-tuned and "optimized for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4. The researchers didn't elaborate on how they defined performance or specify whether the AI model had been put into service. "It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," said Sunny Cheung, associate fellow at the Jamestown Foundation who specializes in China's emerging and dual use technologies including AI. Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.
Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to U.S. defense export controls, as well as for the development of weapons and content intended to "incite and promote violence". However, because Meta's models are public, the company has limited ways of enforcing those provisions.
[17]
Meta Makes Llama AI Models Available to US Government Agencies
Meta added that Llama models can help the US improve national security.

Meta announced on Monday that its Llama artificial intelligence (AI) models will be available to US government agencies and contractors. The announcement came just days after reports claimed that the company's open-source AI models were being used by researchers in China for military use. The social media giant highlighted that it will also make its Llama models available to those entities in the US that are working on defence and national security applications, as well as private sector partners supporting their work. In a newsroom post, Meta confirmed that it has made Llama available directly to US government agencies as well as any allied entities working with the country's government. The company is also partnering with private enterprises such as Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies. Highlighting how Meta's AI models are helping the US government via private enterprises, the company listed several examples. It said that Oracle is building on Llama to synthesise aircraft maintenance documents. This is said to help the enterprise quickly and accurately diagnose problems, speed up repair time, and more. Similarly, Amazon Web Services (AWS) and Microsoft Azure are hosting Meta's AI models on their cloud servers to build solutions for sensitive data, the company stated. IBM's WatsonX is also said to be bringing Llama to national security agencies via their self-managed data centres. Meta said that large language models (LLMs) can support many aspects of the US' safety and national security due to their capability to process large volumes of data and generate insights.
Citing more use cases, the social media giant said LLMs can also help streamline logistics and planning, track terrorist financing, and strengthen cyber defences. "Open source systems have helped to accelerate defence research and high-end computing, identify security vulnerabilities and improve communication between disparate systems," it added. Notably, the announcement comes after Reuters reported that Chinese research institutions associated with the People's Liberation Army were using the open-source Llama AI models to develop a tool that can potentially be used for military usage.
[19]
Meta Wants You to Know It Really Loves America After China Militarized Its AI Model
Meta believes in the American spirit and is ready to beat China on the battlefield of AI advancement. Mark Zuckerberg and Meta would like you to know that they love America. Meta announced today that it would make its Llama models available to U.S. government agencies and contractors working on issues of national security. "We are pleased to confirm that we are also making Llama available to U.S. government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," Nick Clegg, Meta's President of Global Affairs, said in a blog post. Meta's Llama models are open source, meaning that anyone who gets hold of them can essentially do whatever they want. But the announcement today marks a shift away from Meta's own acceptable use policy for the models, which had a provision against "military, warfare, nuclear industries or applications, espionage." According to the blog post, Meta is partnering with companies that include "Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies." Meta said that Oracle was using Llama to synthesize aircraft maintenance documents to aid in maintenance. It also said weapons manufacturers would use Llama for a bunch of different things, including "code generation, data analysis, and enhancing business processes." Why this sudden pivot to American defense contractors? It might have something to do with a Reuters report from last week that discovered various researchers connected to the Chinese military had availed themselves of Meta's Llama 2 AI model. There's absolutely no evidence or even any indication that Meta had any direct hand in the People's Liberation Army's use of Llama 2. But critics have pointed out that Zuckerberg is weirdly close to China. The Meta CEO met with Chinese President Xi Jinping in 2017.
Three years before that, he told a Chinese newspaper that he'd bought copies of Xi's book, The Governance of China, for his employees. Why? "I want them to understand socialism with Chinese characteristics," he said at the time. But Zuckerberg is going through a rebrand that's all-in on Americana. He's grown his hair out, dresses like a normal human being, and talks about the U.S. every time he gets the chance. On July 4 of this year, he posted a video of himself on a boogie board in a tuxedo, waving an American flag and drinking a Twisted Tea. Clegg's announcement is full of treacly invocations of the American spirit. "As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America -- and of its closest allies too," the post said. "For decades, open source systems have been critical to helping the United States build the most technologically advanced military in the world and, in partnership with its allies, develop global standards for new technology," it went on. "Open source systems have helped to accelerate defense research and high-end computing, identify security vulnerabilities and improve communication between disparate systems." In the end, it did, of course, mention the competition. "We believe it is in both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere," it said.
[20]
Chinese researchers develop AI model for military use on back of Meta's Llama
Chinese research institutions connected to the People's Liberation Army have utilized Meta's Llama model to create an AI tool for military uses. Academic papers reveal that the tool, called ChatBIT, enhances intelligence gathering and decision-making. ChatBIT was optimized for military tasks and outperformed certain other AI models. Meta stated that the PLA's use of its model is unauthorized.

Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts. In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what it calls "ChatBIT". The researchers used the Llama 2 13B large language model (LLM) that Meta released in February 2023, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making. ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4. The researchers didn't elaborate on how they defined performance or specify whether the AI model had been put into service. "It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," said Sunny Cheung, associate fellow at the Jamestown Foundation who specialises in China's emerging and dual use technologies including AI.
Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company. Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to U.S. defence export controls, as well as for the development of weapons and content intended to "incite and promote violence". However, because Meta's models are public, the company has limited ways of enforcing those provisions. In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse. "Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy," Molly Montgomery, Meta's director of public policy, told Reuters in a phone interview. The Chinese researchers include Geng Guotong and Li Weiwei with the AMS's Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from the Beijing Institute of Technology and Minzu University. "In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also ... strategic planning, simulation training and command decision-making will be explored," the paper said. China's Defence Ministry didn't reply to a request for comment, nor did any of the institutions or researchers. Reuters could not confirm ChatBIT's capabilities and computing power, though the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs. "That's a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so ... 
it really makes me question what do they actually achieve here in terms of different capabilities," said Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada. The research comes amid a heated debate in U.S. national security and technology circles about whether firms such as Meta should make their models publicly available. U.S. President Joe Biden in October 2023 signed an executive order seeking to manage AI developments, noting that although there can be "substantial benefits to innovation," there were also "substantial security risks, such as the removal of safeguards within the model". This week, Washington said it was finalising rules to curb U.S. investment in artificial intelligence and other technology sectors in China that could threaten national security. Pentagon spokesman John Supple said the Department of Defense recognised that open-source models had both benefits and drawbacks, and that "we will continue to closely monitor and assess competitors' capabilities".

'COOKIE JAR'

Some observers say China's strides in developing indigenous AI, including setting up scores of research labs, have already made it difficult to keep the country from narrowing the technology gap with the United States. In a separate academic paper reviewed by Reuters, two researchers with the Aviation Industry Corporation of China (AVIC) - which the United States has designated a firm with ties to the PLA - described using Llama 2 for "the training of airborne electronic warfare interference strategies". China's use of Western-developed AI has also extended into domestic security. A June paper described how Llama had been used for "intelligence policing" to process large amounts of data and enhance police decision-making.
The state-run PLA Daily published commentary in April on how AI could help "accelerate the research and development of weapons and equipment", help develop combat simulation and improve military training efficiency. "Can you keep them (China) out of the cookie jar? No, I don't see how you can," William Hannas, lead analyst at Georgetown University's Center for Security and Emerging Technology (CSET), told Reuters. A 2023 paper by CSET found 370 Chinese institutions whose researchers had published papers related to General Artificial Intelligence - helping drive China's national strategy to lead the world in AI by 2030. "There is too much collaboration going on between China's best scientists and the U.S.' best AI scientists for them to be excluded from developments," Hannas added.
[21]
Chinese researchers build military AI using Meta's open-source Llama model -- ChatBIT allegedly performs at around 90% of the performance of OpenAI GPT-4 LLM
Chinese researchers with ties to China's People's Liberation Army (PLA) have built an AI model called ChatBIT, designed for military applications using Meta's open-source Llama model. According to Reuters, some researchers are associated with the Academy of Military Science (AMS), the PLA's top research group. Three academic papers and several analysts have confirmed the information, with ChatBIT using Meta's Llama 13B large language model (LLM). This LLM has been modified for intelligence gathering and processing, allowing military planners to use it for operational decision-making. According to one of the papers that Reuters cited, the military AI is "optimized for dialogue and question-answering tasks in the military field." It also claimed that ChatBIT performs at around 90% of the performance of OpenAI's GPT-4 LLM, although the paper did not reveal how they tested its performance or say if the AI model has been used in the field. Nevertheless, China's use of open-source AI models could potentially allow it to match the latest models released by American tech giants in benchmark tests. "It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," says Sunny Cheung, associate fellow at the Jamestown Foundation, a Washington, D.C.-based think tank that looks at China's emerging and dual-use technologies, including artificial intelligence. Meta's license explicitly bans Llama's use for military applications, but its open-source nature makes it nearly impossible to enforce such limits. However, Meta said in a statement that this alleged use of the Llama 13B LLM -- which it says is an "outdated version" given that it's already training Llama 4 -- is largely irrelevant, especially given that China is investing more than a trillion dollars to gain an edge in AI technologies.
Furthermore, other researchers noted that ChatBIT only used 100,000 military dialogue records, a drop in the bucket given that the latest models are trained on trillions of tokens. Some experts question the viability of such a small data set for military AI training. But ChatBIT could also just be a proof of concept, with the involved military research institutes planning to create more expansive models. Aside from that, the Chinese government might have released these research papers as a sign to the U.S. that it is not afraid of using AI to gain a technological advantage on the global stage. However big or small this development is, it is exactly what Washington has feared: the use of American open-source technologies to give its opponents a military advantage. That's why, aside from expanding ongoing export controls on China, many U.S. lawmakers also want to block the country from accessing open-source/open-standard technologies like RISC-V. The U.S. is also taking steps to stop American entities from investing in Chinese AI, semiconductors, and quantum computing. This is the double-edged sword that American policymakers must contend with. They naturally don't want to give opponents access to advanced technologies via the open-source route; however, open-source technology is also a major driver of technological advancement, and curbing it could put U.S. companies at a disadvantage.
[22]
China using Meta AI models for military applications: Report
Research institutions linked to China's People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, Reuters reported, citing three research papers. Three institutions, including two associated with the army's research body, the Academy of Military Science (AMS), created an AI bot called "ChatBIT" using the Llama 13B large language model (LLM), an older version of Meta's LLM. It is capable of gathering and processing intelligence and offering information for operational decision-making and is 90% as capable as OpenAI's powerful ChatGPT-4, according to the papers. Meta's usage policies state clearly that consumers cannot use Llama models for "military, warfare, nuclear industries or applications, espionage" purposes under the US's International Traffic in Arms Regulations (ITAR). The regulations, administered by the US Department of State, prohibit the export of certain technologies for defence purposes. However, Meta has limited control over how its models are used, as they are open to public access. Molly Montgomery, Meta's director of public policy, said that the PLA's usage was "unauthorized and contrary to our acceptable use policy." However, she added that "In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI," encouraging the US to invest in AI. The United States military has collaborated with AI developers like OpenAI to create military wargame simulation videos and cybersecurity tools. This came after OpenAI changed its usage policies to remove an explicit prohibition on using its models for military and warfare purposes.
OpenAI has also roped in former National Security Agency (NSA) head Paul Nakasone onto its board of directors. Similarly, Microsoft provided US intelligence agencies with a generative AI model disconnected from the internet that would enable secure information sharing. The US has repeatedly expressed concern about China using American AI models to surpass its own capabilities. President Joe Biden's Executive Order on Preventing Access to Americans' Bulk Sensitive Personal Data called for protecting data from "countries of concern" lest they use it for defence or cyber threat purposes. The order also said that these countries may innovate and refine AI technology, "thereby improving their ability to exploit the underlying data and exacerbating the national security and foreign policy threats." In May, the US introduced a Bill imposing export controls on AI systems to prevent the exploitation of US AI models and other enabling tech by foreign adversaries. An amendment to the Export Control Reform Act, 2018, the 'Enhancing National Frameworks for Overseas Restriction of Critical Exports Act' or 'ENFORCE Act' would grant the US President the power to "control the activities of United States persons, wherever located, relating to specific covered artificial intelligence systems and emerging and foundational technologies that are identified as essential to the national security of the United States."
[23]
Meta opens its Llama AI models to government agencies for national security
Meta is opening up its Llama AI models to government agencies and contractors working on national security, the company said in a blog post. The group includes more than a dozen private sector companies that partner with the US government, including Amazon Web Services, Oracle and Microsoft, as well as defense contractors like Palantir and Lockheed Martin. Mark Zuckerberg hinted at the move last week during Meta's earnings call, when he said the company was "working with the public sector to adopt Llama across the US government." Now, Meta is offering more details about the extent of that work. Oracle, for example, is "building on Llama to synthesize aircraft maintenance documents so technicians can more quickly and accurately diagnose problems, speeding up repair time and getting critical aircraft back in service." Amazon Web Services and Microsoft, according to Meta, are "using Llama to support governments by hosting our models on their secure cloud solutions for sensitive data." Meta is also providing similar access to Llama to governments and contractors in the UK, Canada, Australia and New Zealand, Bloomberg reported. In the post, Meta's President of Global Affairs, Nick Clegg, suggested the partnerships will help the US compete with China in the global arms race over artificial intelligence. "We believe it is in both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere," he wrote. "As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America - and of its closest allies too."
[24]
Meta's Llama Used to Develop AI Model for China's Military
Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts. In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what it calls "ChatBIT". The researchers used an earlier Llama 2 13B large language model (LLM) that Meta released in February 2023, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making. ChatBIT was fine-tuned and "optimized for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models that were roughly 90 percent as capable as OpenAI's powerful ChatGPT-4. The researchers didn't elaborate on how they defined performance or specify whether the AI model had been put into service.
[25]
Exclusive: Chinese researchers develop AI model for military use on back of Meta's Llama
Nov 1 (Reuters) - Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts. In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what it calls "ChatBIT". The researchers used the Llama 2 13B large language model (LLM) that Meta (META.O) released in February 2023, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making. ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4. The researchers didn't elaborate on how they defined performance or specify whether the AI model had been put into service. "It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," said Sunny Cheung, associate fellow at the Jamestown Foundation who specialises in China's emerging and dual use technologies including AI. Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company. Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to U.S.
defence export controls, as well as for the development of weapons and content intended to "incite and promote violence". However, because Meta's models are public, the company has limited ways of enforcing those provisions. In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse. "Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy," Molly Montgomery, Meta's director of public policy, told Reuters in a phone interview. The Chinese researchers include Geng Guotong and Li Weiwei with the AMS's Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from the Beijing Institute of Technology and Minzu University. "In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also ... strategic planning, simulation training and command decision-making will be explored," the paper said. China's Defence Ministry didn't reply to a request for comment, nor did any of the institutions or researchers. Reuters could not confirm ChatBIT's capabilities and computing power, though the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs. "That's a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so ... it really makes me question what do they actually achieve here in terms of different capabilities," said Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada. The research comes amid a heated debate in U.S. national security and technology circles about whether firms such as Meta should make their models publicly available. U.S. 
President Joe Biden in October 2023 signed an executive order seeking to manage AI developments, noting that although there can be "substantial benefits to innovation," there were also "substantial security risks, such as the removal of safeguards within the model". This week, Washington said it was finalising rules to curb U.S. investment in artificial intelligence and other technology sectors in China that could threaten national security. Pentagon spokesman John Supple said the Department of Defense recognised that open-source models had both benefits and drawbacks, and that "we will continue to closely monitor and assess competitors' capabilities".

'COOKIE JAR'

Some observers say China's strides in developing indigenous AI, including setting up scores of research labs, have already made it difficult to keep the country from narrowing the technology gap with the United States. In a separate academic paper reviewed by Reuters, two researchers with the Aviation Industry Corporation of China (AVIC) - which the United States has designated a firm with ties to the PLA - described using Llama 2 for "the training of airborne electronic warfare interference strategies". China's use of Western-developed AI has also extended into domestic security. A June paper described how Llama had been used for "intelligence policing" to process large amounts of data and enhance police decision-making. The state-run PLA Daily published commentary in April on how AI could help "accelerate the research and development of weapons and equipment", help develop combat simulation and improve military training efficiency. "Can you keep them (China) out of the cookie jar? No, I don't see how you can," William Hannas, lead analyst at Georgetown University's Center for Security and Emerging Technology (CSET), told Reuters.
A 2023 paper by CSET found 370 Chinese institutions whose researchers had published papers related to General Artificial Intelligence - helping drive China's national strategy to lead the world in AI by 2030. "There is too much collaboration going on between China's best scientists and the U.S.' best AI scientists for them to be excluded from developments," Hannas added. Reporting by James Pomfret and Jessie Pang; additional reporting by Katie Paul in New York, Phil Stewart in Washington, Eduardo Baptista in Beijing and Greg Torode in Hong Kong; editing by Gerry Doyle.
[26]
China leverages Meta's Llama AI to boost military, police, weapon R&D
According to academic papers and analysts, top Chinese research institutions associated with the People's Liberation Army have reportedly utilized Meta's publicly available Llama model to create an AI tool for potential military applications. In a paper published in June and reviewed by Reuters, six Chinese researchers from three different institutions, including two affiliated with the People's Liberation Army's (PLA) main research organization, the Academy of Military Science (AMS), described how they utilized an earlier version of Meta's Llama as the foundation for their project, which they call "ChatBIT." The researchers utilized an earlier version of Meta's Llama 2 13B large language model (LLM). They incorporated their own parameters to develop a military-focused AI tool for gathering and processing intelligence, providing accurate and reliable information for operational decision-making. ChatBIT was fine-tuned and optimized for dialogue and question-answering tasks in the military sector. The paper indicated that it outperformed several other AI models and was about 90% as capable as OpenAI's powerful ChatGPT-4.
[27]
Meta AI is ready for war
Last week, a report from Reuters revealed that Chinese researchers used Meta's Llama 2 model to build an AI system for the country's military. At the time, a Meta spokesperson told Reuters that "the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI." In its post announcing the government partnerships, Meta described the importance of the US getting ahead in the AI race, saying it's in "both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere." Other AI companies are getting involved with the military as well, with a report from The Intercept revealing that the US Africa Command bought cloud computing services from Microsoft, offering access to OpenAI's tools. Google DeepMind also has a cloud computing contract with the Israeli government.
[28]
Chinese army scientists use Meta technology to create 'military AI'
Chinese scientists linked to the People's Liberation Army have been using software developed by Meta, Facebook's owner, to develop artificial intelligence (AI) for the country's military. Researchers from the Academy of Military Sciences, the Chinese army's research division, have been using the US technology giant's AI product - known as Llama - to fine-tune software for military means, according to academic papers. Meta's AI products are open source, meaning they are free for anyone to download and experiment with. While the technology giant prohibits their use for "military, warfare, nuclear industries or espionage", there is little it can practically do to enforce its rules once someone has downloaded its AI software. "Any use of our models by the People's Liberation Army is unauthorised and contrary to our acceptable use policy," a Meta spokesman told Reuters, which first reported the news. "In the global competition on AI, the alleged role of a single and outdated version of an American open-source model is irrelevant when we know China is already investing more than $1 trillion [£770bn] to surpass the US on AI." A June research paper seen by the news agency described military scientists using Meta's Llama to create a chatbot they called ChatBIT. The bot was "optimised for dialogue and question-answering tasks in the military field," the research said. Other papers described how researchers at the Aviation Industry Corporation of China, which the US says has ties to the Chinese military, had used a Meta Llama algorithm for the "training of airborne electronic warfare interference strategies".
[29]
China researchers develop AI model for military use on back of Meta's Llama
Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts. In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what it calls "ChatBIT." The researchers used an earlier Llama 2 13B large language model (LLM) that Meta released in February 2023, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.
[30]
Chinese military researchers reportedly used Meta's AI to develop a defense chatbot | TechCrunch
Chinese research scientists linked to the People's Liberation Army (PLA), the military wing of China's ruling party, reportedly used "open" AI from Meta to develop a tool for defense applications. According to Reuters, Chinese researchers, including two affiliated with a PLA R&D group, used Meta's Llama 2 AI model to create a military-focused chatbot. The chatbot, called ChatBIT, is designed to gather and process intelligence, Reuters reports, and offer information for operational decision-making. Meta told Reuters in a statement that the use of the "single, and outdated" Llama model -- Llama 2 is roughly a year old -- was "unauthorized" and contrary to its acceptable use policy. And Reuters says it wasn't able to confirm ChatBIT's capabilities or computing power. But the report provides some of the first evidence that China's military has been trying to leverage open models for defense purposes -- which is sure to fuel the debate over the merits and risks of open AI.
[31]
Meta Expands Access to Open-Source AI Models for US Government Use
The initiative promotes ethical AI use, supporting US security and prosperity. Meta has announced an expansion of its open-source Llama AI models, making them available to US government agencies, including those involved in defense and national security, and private sector partners supporting their work. Under this initiative, Meta has partnered with major companies including Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI and Snowflake to bring Llama to government agencies. According to Meta, these models are being utilised for practical applications, such as Oracle's development of aircraft maintenance tools that improve diagnostic speed and accuracy. Scale AI is fine-tuning Llama to assist in national security missions, including operational planning and adversary analysis. Meanwhile, Lockheed Martin has integrated Llama into its AI Factory to accelerate projects in code generation and data analysis. Amazon Web Services and Microsoft Azure are also supporting government needs by hosting Llama on their secure cloud solutions for handling sensitive data. Additionally, IBM's watsonx solution is bringing Llama to national security agencies in their self-managed data centers and clouds, Meta said. "These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish US open source standards in the global race for AI leadership," Meta highlighted.
"As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America - and of its closest allies too," Meta added. Meta emphasised the importance of establishing high standards for openness and accountability in AI development, aiming to create a robust global open-source standard. The initiative also seeks to promote the ethical deployment of AI in national security, guided by international laws and principles. Meta further explained that the public sector stands to benefit from Llama's capabilities, with Deloitte working to implement solutions that enhance community services across various sectors. The Llama models are "open source," meaning the technology can be freely copied and distributed by other developers, companies and governments. This move from Meta is an exception to its "acceptable use policy," which had previously prohibited the use of its AI software for "military, warfare, nuclear industries," and similar purposes. Through collaborations with organisations like the US State Department and UNESCO, Meta said it aims to address societal challenges while embedding democratic values in the digital infrastructure.
[32]
Report: Chinese researchers used Llama 13B to build chatbot optimized for military use - SiliconANGLE
Researchers in China have reportedly used Meta Platforms Inc.'s Llama 13B artificial intelligence model to develop a chatbot optimized for military use. Reuters detailed the project today, citing academic papers and analysts.

Llama is a family of open-source large language models that Meta released in February 2023. Developers can use the models at no charge in both research and commercial projects. Under Meta's licensing terms, the Llama series may not be used for military applications.

According to Reuters, Llama was mentioned in a June AI paper authored by six researchers from three Chinese institutions. Two of those institutions operate under the Academy of Military Science, the People's Liberation Army's leading research body. The paper details a Llama-powered chatbot called ChatBIT that is "optimised for dialogue and question-answering tasks in the military field." The chatbot is reportedly based on Llama 13B, a model that rolled out with the LLM family's initial release in February 2023. The model is based on a modified version of the industry-standard Transformer neural network architecture: Meta's engineers added performance optimizations and other enhancements that improved its ability to understand lengthy prompts.

The creators of ChatBIT reportedly modified Llama 13B by adding custom parameters, the configuration settings that manage how a neural network processes data. Additionally, the researchers gave the chatbot access to 100,000 military dialogue records.

Another paper detailed in today's report was published by two researchers from an aviation company that has been linked to the People's Liberation Army. The paper discussed using Llama 2 for "the training of airborne electronic warfare interference strategies."
Llama 2 is an iteration of the LLM series that Meta released in July 2023, a few months after the original version. It was trained on 40% more data than the first-generation Llama models and can process prompts with twice as many tokens. A token is a unit of data that corresponds to a few characters.

Llama 2 implements an AI technique called grouped-query attention, or GQA, that was not supported by the earlier models. The technique reduces the hardware requirements of an LLM's attention mechanism, the component used to interpret prompts: rather than giving every query head its own set of key and value heads, several query heads share one, which shrinks the memory the model must keep per token. By lowering AI models' infrastructure usage, GQA helps speed up inference and cut costs.

Meta has introduced several new iterations of its LLM series since Llama 2 debuted. The most capable model released by the company to date, Llama 3.1 405B, made its debut this past July. It is better at reasoning tasks and can process prompts with more than 60 times the amount of data supported by the first-generation Llama models. Meta developed Llama 3.1 405B using 16,000 H100 graphics processing units. Earlier this week, Chief Executive Officer Mark Zuckerberg revealed that the next iteration of the series is being trained on an even larger AI cluster with more than 100,000 H100s. He said work on Llama 4 is already "well underway," with the first models from the upcoming series set to roll out early next year.
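The grouped-query attention idea can be sketched in a few lines: a small number of shared key/value heads each serve a group of query heads, so the key/value cache shrinks by the head ratio. The snippet below is a minimal, illustrative numpy sketch with invented shapes and names, not Meta's actual implementation.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_groups):
    """Minimal GQA sketch (hypothetical shapes, for illustration only).

    q:    (n_q_heads, seq, d)  -- one set of queries per head
    k, v: (n_groups, seq, d)   -- shared key/value heads, n_groups < n_q_heads
    Each group of query heads attends over the same K/V, cutting the
    KV cache by a factor of n_q_heads / n_groups.
    """
    n_q_heads, seq, d = q.shape
    reps = n_q_heads // n_groups
    # Broadcast each shared K/V head to the query heads in its group.
    k_full = np.repeat(k, reps, axis=0)          # (n_q_heads, seq, d)
    v_full = np.repeat(v, reps, axis=0)
    # Scaled dot-product attention, softmax over the key axis.
    scores = q @ k_full.transpose(0, 2, 1) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v_full                      # (n_q_heads, seq, d)

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4, 16))   # 8 query heads
k = rng.normal(size=(2, 4, 16))   # only 2 shared KV heads
v = rng.normal(size=(2, 4, 16))
out = grouped_query_attention(q, k, v, n_groups=2)
print(out.shape)  # (8, 4, 16)
```

Here only 2 key/value heads serve 8 query heads, so the model stores a quarter of the per-token key/value state that standard multi-head attention would need, which is where the inference speedup and cost savings come from.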
[33]
Chinese Researchers Built Military-Use AI Model on Meta's Llama
Chinese research institutions with ties to the People's Liberation Army used Meta's open-source Llama artificial intelligence model to develop an AI tool with potential military applications, Reuters reported. In an academic paper published in June, Chinese researchers, including some at the PLA's research body, described how they used Llama to build an AI model called ChatBIT, the report said.
[34]
Meta Opens Its AI Models to US Defense Agencies and Contractors
Meta Platforms Inc. has granted approval for US government agencies and defense contractors to use its AI models, opening the door for Meta's technology to play a key role in military and national security efforts. The Facebook-parent company is making its large language models, called Llama, available to more than a dozen US agencies and contractors, including Lockheed Martin Corp., Booz Allen Hamilton Holding Corp. and Palantir Technologies Inc.
[35]
Meta's AI Model: A Tool for China's Military Ambitions?
Reuters was unable to confirm the capabilities and processing power of ChatBIT. Researchers said the model had been trained on only 100,000 military dialogue records, far less data than most LLMs, which are typically trained on trillions of tokens. Joelle Pineau of Meta said it was unlikely the model could achieve much "with 100k dialogues versus trillions on most others". Meta has made a practice of publicly releasing most of its AI models, including Llama, while putting conditions on their use: for example, a license is required for services with more than 700 million users, and the models are barred from military, warfare or espionage applications. However, Meta still faces enforcement problems because its models are publicly available. Meta director of public policy Molly Montgomery emphasized that "any use of their models by the PLA is unauthorized and violates their acceptable use policy."
[36]
Meta, Amazon, Microsoft Partner On Defense AI Initiative - Lockheed Martin (NYSE:LMT), Amazon.com (NASDAQ:AMZN)
Meta Platforms Inc. (META) announced it will make its open-source Llama artificial intelligence models available to U.S. government agencies working on national security and defense applications, marking a significant expansion of the technology's authorized use cases. The social media giant is partnering with major government contractors and technology firms including Amazon.com Inc.'s (AMZN) Amazon Web Services, Lockheed Martin Corp. (LMT), Microsoft Corp. (MSFT), Oracle Corp. (ORCL) and Palantir Technologies Inc. (PLTR) to implement these AI models across various defense initiatives. "As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America - and of its closest allies too," said Nick Clegg, President of Global Affairs at Meta. Several contractors have already begun implementing Llama in their operations: Oracle is using the technology to improve aircraft maintenance by helping technicians diagnose problems more quickly; Scale AI is fine-tuning the model for national security missions and identifying potential vulnerabilities; Lockheed Martin has integrated Llama into its AI Factory for code generation and data analysis; and Amazon Web Services and Microsoft Azure are hosting the models on secure cloud platforms for sensitive government data. Why It Matters: Meta emphasized that making Llama available for defense applications aligns with U.S. interests in the global AI race, particularly as nations like China invest heavily in developing their own open-source models.
The company noted that large language models can support various aspects of national security, including streamlining logistics, tracking terrorist financing, and strengthening cyber defenses. Beyond military applications, Meta highlighted broader public sector benefits, with Deloitte implementing Llama-based solutions for government agencies and nonprofits to improve public service delivery in areas such as education and energy.
Meta has granted access to its Llama AI model for US government agencies and defense contractors, reversing its previous policy. This decision comes after reports of Chinese military researchers using an older version of Llama for defense applications.
Meta, the parent company of Facebook, has made a significant policy shift by allowing US government agencies and defense contractors to use its Llama AI model for national security applications 1. This decision comes in the wake of reports that Chinese military researchers had been using an older version of Llama for defense-related purposes 2.
According to Reuters, Chinese researchers associated with the People's Liberation Army (PLA) developed a military-focused AI tool called "ChatBIT" using Meta's open-source Llama model 4. This tool was reportedly designed for intelligence gathering and operational decision-making, despite Meta's acceptable use policy prohibiting military applications 1.
In response to these developments, Nick Clegg, Meta's president of global affairs, announced that Llama would be made available to US government agencies and contractors working on national security applications 2. This decision marks a significant departure from Meta's previous stance, which explicitly forbade the use of its AI models for "military, warfare, nuclear industries or applications, espionage" 3.
Meta has partnered with several companies, including Lockheed Martin, AWS, Oracle, and Palantir, to bring Llama models to government agencies 5. Examples of potential applications include aircraft maintenance tools that help technicians diagnose problems more quickly, fine-tuned models for operational planning and adversary analysis, code generation and data analysis inside Lockheed Martin's AI Factory, and secure cloud hosting of the models for sensitive government data.
This policy shift reflects the growing importance of AI in national security and the global race for AI leadership. Clegg argued that the success of open-source AI models like Llama is fundamental to American economic and national security interests 2. The move aligns with the US government's priorities, as outlined in a recent White House memo on AI national security policy 4.
Despite Meta's justification, the decision has raised concerns about the potential risks of using AI in defense applications. Critics point to security vulnerabilities, such as potentially compromisable data, and inherent AI limitations like bias and hallucinations 3. Additionally, some view Meta's policy change as a reactive measure to China's unauthorized use of Llama, rather than a proactive strategy 1.
Meta's decision underscores the intensifying global competition in AI development, particularly between the United States and China. The company emphasized the need for American open-source models to excel over those from China and other nations 4. This move also aligns with broader US efforts to maintain technological superiority in AI while addressing potential national security implications 4.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2024 TheOutpost.AI All rights reserved