Curated by THEOUTPOST
On Thu, 24 Oct, 4:09 PM UTC
24 Sources
[1]
Biden's AI national security memo calls for heavy lift
WASHINGTON - President Joe Biden's directive to all U.S. national security agencies to embed artificial intelligence technologies in their systems sets ambitious targets amid a volatile political environment. That's the first-blush assessment from technology experts after Biden on Oct. 24 directed a broad swath of organizations to harness AI responsibly, even as the technology is rapidly advancing. "It's like trying to assemble a plane while you're in the middle of flying it," said Josh Wallin, a fellow at the defense program at the Center for a New American Security. "It is a heavy lift. This is a new area that a lot of agencies are having to look at that they might have not necessarily paid attention to in the past, but I will also say it's certainly a critical one." Federal agencies will need to rapidly hire experts, get them security clearances and set about working on the tasks Biden lays out as private companies are pouring in money and talent to advance their AI models, Wallin said. The memo, which stems from the president's executive order from last year, asks the Pentagon; spy agencies; the Justice, Homeland Security, Commerce, Energy, and Health and Human Services departments; and others to harness AI technologies. The directive emphasizes the importance of national security systems "while protecting human rights, civil rights, civil liberties, privacy, and safety in AI-enabled national security activities." Federal agencies have deadlines, some as soon as 30 days, to accomplish tasks. Wallin and others said that the deadlines are driven by the pace of technological advances. The memo asks that by April the AI Safety Institute at the National Institute of Standards and Technology "pursue voluntary preliminary testing of at least two frontier AI models prior to their public deployment or release to evaluate capabilities that might pose a threat to national security." 
Frontier models refer to large AI models like ChatGPT that can recognize speech and generate human-like text. The testing is intended to ensure that the models don't inadvertently enable rogue actors and adversaries to launch offensive cyber operations or "accelerate development of biological and/or chemical weapons, autonomously carry out malicious behavior, automate development and deployment of other models." But the memo also adds an important caveat: The deadline to begin testing the AI models would be "subject to private sector cooperation." Meeting that testing deadline is realistic, said John Miller, senior vice president of policy at ITI, a trade group that represents top tech companies including Google, IBM, Intel, Meta and others. Because the institute "is already working with model developers on model testing and evaluation, it is feasible that the companies could complete or at least begin such testing within 180 days," Miller said in an email. But the memo also asks the AI Safety Institute to issue guidance on testing models within 180 days, and therefore "it seems reasonable to question exactly how these two timelines will sync up," he said. By February the National Security Agency "shall develop the capability to perform rapid systematic classified testing of AI models' capacity to detect, generate, and/or exacerbate offensive cyber threats. Such tests shall assess the degree to which AI systems, if misused, could accelerate offensive cyber operations," the memo says.

'Dangerous' order

With the presidential election just a week away, the outcome looms large for this directive. The Republican Party platform says that if elected, Donald Trump would repeal Biden's "dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing."
Since Biden's memo is a result of the executive order, it's likely that if Trump wins, "they would just pull the plug" and go their own way on AI, Daniel Castro, vice president at the Information Technology and Innovation Foundation, said in an interview. The leadership at federal departments tasked with compliance would change significantly under Trump as well. As many as 4,000 positions in the federal government change hands with the arrival of a new administration. However, people tracking the issue note there's broad bipartisan consensus that adoption of AI technologies for national security purposes is too critical for partisan disputes to derail it. The tasks and deadlines in the memo reflect in-depth discussions among agencies going back several months, said Michael Horowitz, a professor at the University of Pennsylvania who was until recently a deputy assistant secretary of defense with a portfolio that included military uses of AI and advanced technologies. "I think that the implementation of [the memo] regardless of who wins the election is going to be absolutely critical," Horowitz said in an interview. Wallin noted the memo emphasizes the need for U.S. agencies to understand the risks posed by advanced generative AI models including risks related to chemical, biological and nuclear weapons. On threats like those to national security, there's agreement between the parties, he said in an interview. Senate Intelligence Chairman Mark Warner, D-Va., said in a statement that he backed the Biden memo but the administration should work "in the coming months with Congress to advance a clearer strategy to engage the private sector on national security risks directed at AI systems across the supply chain."

Immigration policy

The memo acknowledges the long-term need to attract talented people from around the world to the United States in areas like semiconductor design, an issue that could get tied to larger questions about immigration.
The Defense, State and Homeland Security departments are directed to use available legal authorities to bring them in. "I think there's broad recognition of the unique importance of STEM talent in ensuring U.S. technological leadership," Horowitz said. "And AI is no exception to that." The memo also asks the State Department, the U.S. Mission to the United Nations and the U.S. Agency for International Development to draw up a strategy within four months to advance international governance norms for the use of AI in national security. The U.S. has already taken several steps to promote international cooperation on artificial intelligence, both for civilian and military uses, Horowitz said. He cited the example of the U.S.-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy that has been endorsed by more than 50 countries. "It demonstrates the way that the United States is already leading by establishing strong norms for responsible behavior," Horowitz said. The push toward responsible use of technology needs to be seen in the context of the broader global debate on whether countries are moving toward authoritarian systems or leaning toward democracy and respect for human rights, Castro said. He noted that China is stepping up investment in Africa. "If we want to get African nations to line up with the U.S. and Europe on AI policy instead of going over to China," he said, "what are we actually doing to bring them to our side?"
[2]
Biden issues AI directives for federal agencies in effort to maintain U.S. advantage
Oct. 24 (UPI) -- The United States has a lead in the global development of artificial intelligence. In an effort to keep it, President Joe Biden issued a memorandum Thursday to guide AI efforts, ensuring the technology is trustworthy, advances U.S. security goals and works well with international partners. "The United States must lead the world in responsible application of AI to appropriate national security functions," Biden said in the memorandum, and compared AI to radar, GPS and nuclear propulsion. "With each paradigm shift, they also developed new systems for tracking and countering adversaries' attempts to wield cutting-edge technology for their own advantage," Biden added. He said emerging AI technology offers potentially great benefits if used properly but, if misused, could threaten national security, bolster authoritarianism globally, undermine democratic institutions and processes, facilitate human rights abuses and weaken the rules-based international order. Biden refers to AI as an "era-defining technology" that "has demonstrated significant and growing relevance to national security." "The United States government must urgently consider how this current AI paradigm specifically could transform the national security mission," Biden said. Biden announced three objectives in the memorandum, which provides direction on harnessing AI-enabled technologies in the U.S. government and countering adversaries' use of AI that endangers national security. The three objectives are to lead development of safe, secure and trustworthy AI; harness powerful AI with appropriate safeguards to achieve national security objectives; and continue cultivating a stable and responsible framework to advance international AI governance. U.S. government policy also is to promote progress, innovation and competition in domestic AI development while protecting against foreign intelligence threats.
The Departments of State, Defense and Homeland Security will each assist in attracting and rapidly bringing to the United States individuals with relevant technical expertise who would improve the nation's competitiveness in AI and related fields, such as semiconductor design and production. The chair of the Council of Economic Advisers has 180 days to prepare an analysis of the AI talent market within the United States and overseas. An economic assessment also must be prepared within 180 days by the assistant to the President for Economic Policy and Director of the National Economic Council to determine the nation's private sector advantages and risks regarding AI, including the design, manufacture and packaging of chips needed for AI-related activities, and the availability of capital and highly skilled workers. Within 90 days, the assistant to the President for National Security Affairs must convene appropriate executive departments and agencies to explore ways to prioritize and streamline administrative processing for visa applicants working with sensitive technologies. The Department of Energy also has 180 days to launch a pilot project to evaluate the performance and efficiency of AI-enabling infrastructure and supporting assets, including clean energy generation, power transmission and high-capacity fiber data links. The memorandum also directs respective federal agencies to assess and report on foreign intelligence threats, critical nodes in the AI supply chain and other factors that could affect AI development and use within the United States and globally. In a background call with reporters on Wednesday, unnamed senior administration officials affirmed the importance of emerging AI technology in the United States and worldwide. The United States is well-positioned with AI today, they said, stressing that the United States uses the most advanced hardware and hosts the leading AI companies that are building the most advanced AI systems.
They said the recently implemented CHIPS Act makes the United States more resilient in its chip supply chains and helps support the private sector in its development of AI technology. And there are national security implications in AI, they said. Administration officials taking part in the call pointed out that, because countries such as China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities using artificial intelligence, it's particularly imperative that the United States accelerate its national security community's adoption and use of cutting-edge AI capabilities to maintain a competitive edge. Biden a year ago directed the development of the national security memorandum released Thursday to ensure the United States maintains its edge over rivals who would use AI to the detriment of U.S. national security, and to build effective safeguards ensuring the nation's use of AI upholds its values and preserves public trust.
[3]
Biden Administration Wants AI Guardrails: Here's Its Call to Action
President Joe Biden issued the first-ever national security memorandum on artificial intelligence on Thursday, detailing goals for how the government should foster cutting-edge AI while also advancing international consensus around the powerful and rapidly evolving technology. The White House warned that the US, which has been a global leader in artificial intelligence, can't take its advantage for granted. "We are all familiar with past instances when we saw critical technologies and supply chains that were developed and commercialized here in the US migrate offshore for lack of critical public sector support," the document says. "That is why we are laser focused on maintaining the strongest AI ecosystem in the world here in the United States." At the same time, the memorandum notes that US AI efforts must be governed by the "critical guardrails" established in 2023 by Biden's executive order on safe, secure and trustworthy artificial intelligence. The memorandum directs the National Economic Council to "coordinate an economic assessment of the relative competitive advantage of the US private sector AI ecosystem." It also notes that the country will need to maintain its advantage by investing in semiconductors, infrastructure and clean energy. Since OpenAI released its ChatGPT generative AI chatbot in 2022, the promise and risks of artificial intelligence have been much debated. In its simpler forms, gen AI can help job-hunters refine their resumes or help students manage their time. It has more glamorous uses too. The technology, for instance, helped pull John Lennon's vocals out of an old tape for use in a new Beatles song. But many experts worry about the misuse of AI, which can create fictional images that look very real, such as deepfakes of singer Taylor Swift that falsely suggest she endorsed former President Donald Trump for re-election. 
AI also has the potential to shake up fields from science and medicine to software development, cars and military technology. The full document contains about 38 unclassified pages, with a classified appendix, according to The New York Times. Some of its statements are obvious, The Times points out. For example, it states that AI systems must never be allowed to make decisions about using nuclear weapons. Biden is not running for re-election, and his term ends in January. It is unclear how or if the new president, whether Trump or Vice President Kamala Harris, will follow the policy spelled out in the document. "Today's NSM is just the latest step in a series of actions thanks to the leadership and diplomatic engagement of the president and vice president, and there will be additional steps taken in the coming months to further support US leadership in AI," the White House said in a statement.
[4]
US Government Outlines Artificial Intelligence Guardrails in First Memorandum of Its Kind
President Joe Biden issued the first-ever national security memorandum on artificial intelligence on Thursday, detailing goals for how the government should work with cutting-edge AI technologies while advancing international consensus around the controversial technology. Since OpenAI released its ChatGPT artificial intelligence chatbot in 2022, AI has been a much-debated issue. In its simpler forms, it can help job-hunters refine their resumes or help students manage their time. It has more glamorous uses too. The technology has made the news for pulling John Lennon's vocals out of an old tape for use in a new Beatles song. And many experts worry about the misuse of AI, which has proven its ability to create fictional images that look very real, including deepfakes of singer Taylor Swift that falsely suggest she endorsed former President Donald Trump for re-election. So it's not surprising that the government is seeking to set out a rule book for utilizing the new powerful technology to best serve national interests, while putting critical guardrails in place. The document contains about 38 unclassified pages, with a classified appendix, according to The New York Times. Some of its statements are obvious, The Times points out. For example, it states that AI systems must never be allowed to make decisions about using nuclear weapons. Of course, Biden is not running for re-election, and his term ends in January. It is unclear how or if the new president, likely to be either Vice President Kamala Harris or Trump, will follow the same policy spelled out in the document. The document warns that the US, which has been a global leader in artificial intelligence, can't take its advantage for granted. "We are all familiar with past instances when we saw critical technologies and supply chains that were developed and commercialized here in the US migrate offshore for lack of critical public sector support," the document reads in part.
"That is why we are laser focused on maintaining the strongest AI ecosystem in the world here in the United States." The memorandum directs the National Economic Council to "coordinate an economic assessment of the relative competitive advantage of the US private sector AI ecosystem." It also notes that the country will need to maintain its advantage by investing in semiconductors, infrastructure and clean energy. The memorandum notes that US AI efforts must be governed by the "critical guardrails" established in 2023 by Biden's executive order on safe, secure and trustworthy artificial intelligence. "Today's NSM is just the latest step in a series of actions thanks to the leadership and diplomatic engagement of the president and vice president, and there will be additional steps taken in the coming months to further support US leadership in AI," the White House said in a statement.
[5]
Biden administration outlines government 'guardrails' for AI tools
President Joe Biden on Thursday signed the first national security memorandum detailing how the Pentagon, the intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting "guardrails" on how such tools are employed in decisions varying from nuclear weapons to granting asylum. The new document is the latest in a series Biden has issued grappling with the challenges of using AI tools to speed up government operations -- whether detecting cyberattacks or predicting extreme weather -- while limiting the most dystopian possibilities, including the development of autonomous weapons. But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will go into full effect after Biden leaves office, leaving open the question of whether the next administration will abide by them. While most national security memorandums are adopted or amended on the margins by successive presidents, it is far from clear how former President Donald Trump would approach the issue if he is elected next month. The new directive was announced Thursday at the National War College in Washington by Jake Sullivan, the national security adviser, who prompted many of the efforts to examine the uses and threats of the new tools. He acknowledged that one challenge is that the U.S. government funds or owns very few of the key AI technologies -- and that they evolve so fast that they often defy regulation. "Our government took an early and critical role in shaping developments -- from nuclear physics and space exploration to personal computing and the internet," Sullivan said.
"That's not been the case with most of the AI revolution. While the Department of Defense and other agencies funded a large share of AI work in the 20th century, the private sector has propelled much of the last decade of progress." Biden's aides have said, however, that the absence of guidelines about how AI can be used by the Pentagon, the CIA or even the Justice Department has impeded development, as companies worried about what applications could be legal. "AI, if used appropriately and for its intended purposes, can offer great benefits," the new memorandum concluded. "If misused, AI could threaten United States national security, bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses" and more. Such conclusions have become commonplace warnings now. But they are a reminder of how much more difficult it will be to set rules of the road for AI than it was to create, say, arms control agreements in the nuclear age. Like cyberweapons, AI tools cannot be counted or inventoried, and everyday uses can, as the memorandum makes clear, go awry "even without malicious intent." That was the theme that Vice President Kamala Harris laid out when she spoke for the United States last year at international conferences aimed at assembling some consensus about rules on how the technology would be employed. But while Harris, now the Democratic presidential nominee, was designated by Biden to lead the effort, it was notable that she was not publicly involved in the announcement Thursday. The new memorandum contains about 38 pages in its unclassified version, with a classified appendix. Some of its conclusions are obvious: It rules out, for example, ever letting AI systems decide when to launch nuclear weapons; that is left to the president as commander in chief. 
While it seems clear that no one would want the fate of millions to hang on an algorithm's pick, the explicit statement is part of an effort to lure China into deeper talks about limits on high-risk applications of AI. An initial conversation with China on the topic, conducted in Europe this past spring, made no real progress. "This focuses attention on the issue of how these tools affect the most critical decisions governments make," said Herb Lin, a Stanford University scholar who has spent years examining the intersection of AI and nuclear decision-making. "Obviously, no one is going to give the nuclear codes to ChatGPT," Lin said. "But there is a remaining question about how much information that the president is getting is processed and filtered through AI systems -- and whether that is a bad thing." The memorandum requires an annual report to the president, assembled by the Energy Department, about the "radiological and nuclear risk" of "frontier" AI models that may make it easier to assemble or test nuclear weapons. There are similar deadlines for regular classified evaluations of how AI models could make it possible "to generate or exacerbate deliberate chemical and biological threats." It is the latter two threats that most worry arms experts, who note that getting the materials for chemical and biological weapons on the open market is far easier than obtaining bomb-grade uranium or plutonium, needed for nuclear weapons. But the rules for nonnuclear weapons are murkier. The memorandum draws from previous government mandates intended to keep human decision-makers "in the loop" of targeting decisions, or overseeing AI tools that may be used to pick targets. But such mandates often slow response times. That is especially difficult if Russia and China begin to make greater use of fully autonomous weapons that operate at blazing speeds because humans are removed from battlefield decisions. 
The new guardrails would also prohibit letting AI tools make a decision on granting asylum. And they would forbid tracking someone based on ethnicity or religion, or classifying someone as a "known terrorist" without a human weighing in. Perhaps the most intriguing part of the order is that it treats private-sector advances in AI as national assets that need to be protected from spying or theft by foreign adversaries, much as early nuclear weapons were. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to safeguard their inventions. It empowers a new and still-obscure organization, the AI Safety Institute, housed within the National Institute of Standards and Technology, to help inspect AI tools before they are released to ensure they could not aid a terrorist group in building biological weapons or help a hostile nation like North Korea improve the accuracy of its missiles. And it describes at length efforts to bring the best AI specialists from around the world to the United States, much as the country sought to attract nuclear and military scientists after World War II, rather than risk them working for a rival like Russia.
[6]
The United States begins the impossible: setting the limits of AI
It is an almost impossible task, but the United States government has to address the elephant in the room.

President Joe Biden released the first national security memorandum on artificial intelligence on Thursday, which details the objectives of how the government should work with cutting-edge AI technologies, while advancing international consensus around this controversial technology. Since OpenAI launched its artificial intelligence chatbot ChatGPT in 2022, AI has been a hot topic, with people both for and against it. Unfortunately, its dangers outweigh its capabilities to do good. So it is not surprising that the American government is trying to establish a rulebook for the use of the powerful new technology to better serve national interests while putting critical barriers in place. The document contains about 38 unclassified pages, with a classified appendix, according to The New York Times. Some of its statements are obvious, notes The Times. For example, it states that AI systems should never be able to make decisions about the use of nuclear weapons. Of course, Biden is not running for re-election, and his term ends in January. It is not clear how or if the new president will follow the same policy outlined in the document. The document warns that the U.S., which has been a global leader in artificial intelligence, cannot take its advantage for granted. "We are all familiar with past cases where we saw critical technologies and supply chains that were developed and commercialized here in the U.S. migrate overseas due to a lack of critical support from the public sector," the document partly states. "That's why we are focused on maintaining the strongest AI ecosystem in the world here in the United States." The memorandum tasks the National Economic Council with "coordinating an economic assessment of the relative competitive advantage of the U.S. private sector AI ecosystem."
It also notes that the country will need to maintain its advantage by investing in semiconductors, infrastructure, and clean energy. The memorandum states that U.S. efforts in AI must be guided by the "critical guardrails" established in 2023 by Biden's executive order on safe, secure, and trustworthy artificial intelligence.
[7]
US military, intelligence agencies ordered to embrace AI
The Pentagon and U.S. intelligence agencies have new marching orders -- to more quickly embrace and deploy artificial intelligence as a matter of national security. U.S. President Joe Biden signed the directive, part of a new national security memorandum, on Thursday. The goal is to make sure the United States remains a leader in AI technology while also aiming to prevent the country from falling victim to AI tools wielded by adversaries like China. The memo, which calls AI "an era-defining technology," also lays out guidelines that the White House says are designed to prevent the use of AI to harm civil liberties or human rights. The new rules will "ensure that our national security agencies are adopting these technologies in ways that align with our values," a senior administration official told reporters, speaking about the memo on the condition of anonymity before its official release. The official added that a failure to more quickly adopt AI "could put us at risk of a strategic surprise by our rivals." "Because countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities using artificial intelligence, it's particularly imperative that we accelerate our national security community's adoption and use of cutting-edge AI," the official said. The new guidelines build on an executive order issued last year, which directed all U.S. government agencies to craft policies for how they intend to use AI. They also seek to address issues that could hamper Washington's ability to more quickly incorporate AI into national security systems. Provisions outlined in the memo call for a range of actions to protect the supply chains that produce advanced computer chips that are critical for AI systems. It also calls for additional actions to combat economic espionage that would allow U.S. adversaries or non-U.S. companies to steal critical innovations.
"We have to get this right, because there is probably no other technology that will be more critical to our national security in the years ahead," said White House National Security Adviser Jake Sullivan, addressing an audience at the National Defense University in Washington on Thursday. "The stakes are high," he said. "If we don't act more intentionally to seize our advantages, if we don't deploy AI more quickly and more comprehensively to strengthen our national security, we risk squandering our hard-earned lead." "We could have the best team but lose because we didn't put it on the field," he added. Although the memo prioritizes the implementation of AI technologies to safeguard U.S. interests, it also directs officials to work with allies and others to create a stable framework for use of AI technologies across the globe. "A big part of the national security memorandum is actually setting out some basic principles," Sullivan said, citing ongoing talks with the G-7 and AI-related resolutions at the United Nations. "We need to ensure that people around the world are able to seize the benefits and mitigate the risks," he said.
[8]
New Guidelines Serve as Government 'Guardrails' for A.I. Tools
A national security memorandum detailed how agencies should streamline operations with artificial intelligence safely. President Biden is expected to sign on Thursday the first national security memorandum detailing how the Pentagon and the intelligence agencies should use and protect artificial intelligence technology, placing "guardrails" on how such tools are employed in decisions on nuclear weapons or who is granted asylum. The new document is the latest in a series Mr. Biden has issued that grapples with the challenges of using A.I. tools to speed up government operations -- from detecting cyberattacks to predicting extreme weather -- while limiting the most dystopian possibilities, including the development of autonomous weapons. But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will lapse after Mr. Biden leaves office. While most national security memorandums are adopted or amended on the margins by successive presidents, it is far from clear how former President Donald J. Trump would approach the issue if he is elected next month. The new directive will be announced on Thursday at the National War College by Jake Sullivan, the national security adviser, who prompted many of the efforts to examine what uses and threats the new tools could pose to the United States. He acknowledged in remarks prepared for the event that one challenge is that the U.S. government funds or owns very few of the key A.I. technologies -- and that they evolve so fast they defy regulation. "Our government took an early and critical role in shaping developments -- from nuclear physics and space exploration, to personal computing and the internet," Mr. Sullivan is expected to say. "That's not been the case with most of the A.I. revolution. While the Department of Defense and other agencies funded a large share of A.I. work in the 20th century, the private sector has propelled much of the last decade of progress." Mr. 
Biden's aides have said, however, that the absence of guidelines about how A.I. can be used by the Pentagon, the C.I.A., or even the Justice Department is impeding development, because companies worry about which applications would be legal. The new memorandum contains about 50 pages in its unclassified version, with a classified appendix. Some of its conclusions are obvious: It rules out, for example, ever letting A.I. systems decide when to launch nuclear weapons; that is left to the president as commander in chief. While it seems obvious that no one would want the fate of millions to hang on an algorithm's pick, the explicit statement is part of an effort to lure China into deeper talks about the limits that need to be placed on high-risk applications of artificial intelligence. An initial conversation with China on the topic, conducted in Europe this past spring, made no real progress. "This focuses attention on the issue of how these tools affect the most critical decisions governments make," said Herb Lin, a Stanford University scholar who has spent years examining the intersection of artificial intelligence and nuclear decision-making. "Obviously, no one is going to give the nuclear codes to ChatGPT," Dr. Lin said. "But there is a remaining question about how much information that the president is getting is processed and filtered through A.I. systems -- and whether that is a bad thing." But the rules for nonnuclear weapons are murkier. They urge keeping human decision makers "on the loop" of targeting decisions, or overseeing A.I. tools that may be targeting weapons, but without slowing the effectiveness of the weapons. That is especially difficult if Russia and China, as seems likely, begin to make greater use of fully autonomous weapons that operate at blazing speeds because humans are removed from battlefield decisions. Similarly, the president's new A.I. "guardrails" would prohibit letting artificial intelligence tools make a decision on granting asylum. 
And they would prohibit tracking someone based on ethnicity or religion, or classifying someone as a "known terrorist" without a human weighing in. Perhaps the most intriguing part of the order is that it treats private-sector advances in artificial intelligence as national assets that need to be protected -- much as early nuclear weapons were -- from spying or theft by foreign adversaries. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to protect their inventions. It empowers a new and still-obscure organization, the A.I. Safety Institute, housed within the National Institute of Standards and Technology, to help inspect A.I. tools before they are released to ensure they could not aid a terrorist group in building biological weapons or help a hostile nation like North Korea improve the accuracy of its missiles. And it describes at length efforts to bring the best A.I. specialists from around the world to the United States, much as the United States sought to attract nuclear and military scientists after World War II, rather than risk them working for a rival like Russia.
[9]
White House presses gov't AI use with eye on security, guardrails
WASHINGTON, Oct 24 (Reuters) - The Biden administration on Thursday unveiled plans to push artificial intelligence across the federal government for national security while saying its adoption must still reflect values such as privacy and civil rights. In a memo, the White House directed U.S. agencies "to improve the security and diversity of chip supply chains ... with AI in mind." It also prioritizes the collection of information on other countries' operations against the U.S. AI sector and passing that intelligence along quickly to AI developers to help keep their products secure. But such efforts must also protect human rights and democratic values, it added. The directive is the latest move by U.S. President Joe Biden's administration to address AI as Congress' efforts to regulate the emerging technology have stalled. Next month, it will convene a global safety summit in San Francisco. Biden last year signed an executive order aimed at limiting the risks that AI poses to consumers, workers, minority groups and national security. Generative AI can create text, photos and videos in response to open-ended prompts, inspiring both excitement over its potential as well as fears that it could be misused and potentially overpower humans with catastrophic effects. The rapidly evolving technology has prompted governments worldwide to seek to regulate the AI industry, which is led by tech giants such as Microsoft-backed OpenAI, Alphabet's Google and Amazon, and scores of start-ups. While Thursday's memo pressed government use, it also requires U.S. agencies "to monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights abuses." 
The directive also calls for a framework for Washington to work with allies to ensure AI "is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms." (Reporting by Susan Heavey; Editing by Tomasz Janowski)
[10]
Biden Administration Fast-Tracks AI National Security, Citing Global Arms Race with China
China has been accused of long-standing efforts to steal sensitive U.S. AI technologies through cyber espionage and data theft operations. U.S. President Joe Biden has announced plans to fast-track AI for national security as the country looks to move ahead of China in the global technology arms race. On Thursday, Oct. 24, the Biden administration announced its first-ever National Security Memorandum (NSM), a directive that aims to "harness cutting-edge AI technologies to advance the U.S. Government's national security mission."
Biden Fast-Tracks AI
The U.S. NSM outlines three main objectives: solidifying U.S. leadership in secure AI, leveraging the technology to advance national security while safeguarding civil rights, and fostering global governance standards for AI. "Americans must know when they can trust systems to perform safely and reliably," the government said in a memo. To achieve this, the White House said it would require U.S. agencies "to monitor, assess, and mitigate AI risks" related to privacy, bias, and other human rights abuses. The NSM also aims to improve the security of the country's chip supply chains to support the development of "the next generation of government supercomputers."
Concerns About China's AI Capabilities
The U.S. has been attempting to impede the advancement of Chinese AI capabilities in military applications and cutting-edge research by enforcing export controls on semiconductors. On Thursday, White House national security adviser Jake Sullivan said the government was concerned about China's use of AI. Speaking at the National Defense University, Sullivan said China is using AI to spread misinformation, undermine national security, and repress its population. "We know that China is building its own technological ecosystem with digital infrastructure that won't protect sensitive data, that can enable mass surveillance and censorship, that can spread misinformation, and that can make countries vulnerable to coercion," he said. 
Sullivan added that the U.S. needed to provide a more attractive path, "ideally before countries go too far down an untrusted road from which it can be expensive and difficult to return." The security adviser said the new NSM would help address these concerns and offer guidance to the country's allies.
Fierce Competition
China has allegedly engaged in data theft operations targeting sensitive AI technologies in the U.S., fueling the government's efforts to heighten security. The White House noted that its competitors wanted to "upend U.S. AI leadership," claiming they had deployed "technological espionage in efforts to steal U.S. technology." "This NSM makes collection on our competitors' operations against our AI sector a top-tier intelligence priority," the memo stated. China has openly committed to becoming the world leader in AI by 2030, making massive state-driven investments to fuel advancements in machine learning and autonomous systems.
[13]
New White House memo calls for agencies to protect AI from foreign adversaries
President Joe Biden on Thursday is expected to sign a memorandum detailing how intelligence and national security agencies, including the Pentagon, should use and implement guardrails around AI, reports The New York Times. The order urges keeping humans "in the loop" on AI tools that may be used to target weapons, and prohibits letting AI make decisions on granting asylum, tracking someone based on their ethnicity or religion, or classifying a person as a "known terrorist" without human review. Beyond this, the memorandum calls for intelligence agencies to begin protecting work on AI and AI chips from spying or theft by foreign adversaries. And it empowers the recently established AI Safety Institute to help inspect AI tools before they are released to ensure that they can't aid terrorist groups or hostile nations. As The New York Times notes, it's unclear how impactful the order will ultimately be, given that most of the deadlines it sets will lapse after Biden leaves office.
[14]
White House Sets New Rules for AI Use by Military and Intelligence Agencies
The White House has set new rules for artificial intelligence (AI) use by military and intelligence agencies. This new framework, signed by President Joe Biden and announced Thursday, directs national security agencies to expand their use of the most advanced AI systems but also prohibits certain uses, including applications that would violate civil rights protected under the U.S. Constitution and any system that would automate the deployment of nuclear weapons. Other provisions in the framework encourage AI research while calling for improved security of the U.S. computer chip supply chain. Intelligence agencies are also directed to prioritize protecting the American industry from foreign espionage campaigns. Officials said the framework is necessary to ensure that AI is used responsibly and to encourage the development of new AI systems as China and other U.S. rivals compete in this space.
AI to "Transform Our National Security"
U.S. national security adviser Jake Sullivan told students at the National Defense University in Washington when describing the new framework, "This is our nation's first-ever strategy for harnessing the power and managing the risks of AI to advance our national security." Sullivan said AI is different from past innovations, such as space exploration, the internet, and nuclear weapons and technology, which the U.S. government largely developed. Instead, the private sector has been leading AI development. Now, AI is "poised to transform our national security landscape," he said. Sullivan said AI is already changing how national security agencies manage logistics and planning, improve cyber defenses and analyze intelligence.
Potential Threats From AI
While AI can transform national security for the better, it can also be used for mass surveillance, cyberattacks and even lethal autonomous devices. Lethal autonomous drones, capable of taking out an enemy at their own discretion, remain a major concern about AI usage in the military sector. The U.S. 
issued a declaration last February, calling for international cooperation in setting standards for these drones. The declaration contains non-legally binding guidelines for best practices for responsible military use of AI. "As a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI and in a way that keeps in mind that applications of AI by militaries will undoubtedly change in the coming years," Bonnie Jenkins, the State Department's under secretary for arms control and international security, said at the time.
"AI Is All Around Us"
The new AI framework comes after Biden signed an executive order last October, calling on the U.S. government to create policies for AI usage. Before signing the order, Biden said AI is driving change at "warp speed" and carries both tremendous potential and perils. "AI is all around us," Biden said. "To realize the promise of AI and avoid the risk, we need to govern this technology." This article includes reporting from The Associated Press.
[15]
US issues new rules on use of AI by security establishment
US President Joe Biden has announced new rules governing the use of artificial intelligence that will bar the Pentagon and intelligence communities from using the technology in ways that do not "align with democratic values". Biden will publish the new guidelines in a national security memorandum he is due to sign on Thursday. It is the first directive outlining how the US national security apparatus should use AI and aims to set an example for other governments looking to use and expand the technology responsibly, officials said. They added that the new rules were designed to encourage the use of and experimentation with AI, while ensuring that government agencies do not employ it for activities that could, for instance, violate the right to free speech or sidestep controls on nuclear weapons. "Our memorandum directs the first-ever government-wide framework on our AI risk management commitments . . . like refraining from uses that depart from our nation's core values, avoiding harmful bias and discrimination, maximising accountability, ensuring effective and appropriate human oversight," US national security adviser Jake Sullivan said in a speech on Thursday morning. The guidelines are not legally binding, and Donald Trump could choose not to enact them if he wins next month's presidential election. Vice-president Kamala Harris has played a key role in shaping the Biden administration's efforts on AI and is expected to focus on emerging technologies if she is elected. The directive is the latest effort by the Biden administration to try to foster use of AI as the US seeks to compete with China, while responding to concerns about the potential misuse of the technology. The rules focus on the national security applications of AI, such as in cyber security, counter-intelligence, and logistics and other activities that support military operations. 
The US has undertaken a number of measures in an effort to maintain a strategic advantage on the technology, including issuing export controls aimed at slowing China's development of advanced AI. Biden last year signed a sweeping executive order that compelled private companies, whose AI models could threaten US national security, to share safety information with the US government. The new memorandum directs the US intelligence community to prioritise collecting information on competitors' AI activities. It also designates the AI Safety Institute in Washington as responsible for inspecting AI tools to prevent their misuse before they are released.
[16]
US unveils national security memorandum on AI
The United States unveiled Thursday a framework to address national security risks posed by artificial intelligence, a year after President Joe Biden issued an executive order on regulating the technology. The National Security Memorandum (NSM) seeks to thread the needle between harnessing the technology to counter the military use of AI by adversaries such as China and building effective safeguards that uphold public trust, officials said. "There are very clear national security applications of artificial intelligence, including in areas like cybersecurity and counterintelligence," a senior Biden administration official told reporters. "Countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities. "It's particularly imperative that we accelerate our national security communities' adoption and use of cutting-edge AI capabilities to maintain our competitive edge." Last October, Biden ordered the National Security Council and the White House Chief of Staff to develop the memorandum. The instruction came as he issued an executive order on regulating AI, aiming for the United States to "lead the way" in global efforts to manage the technology's risks. The order, hailed by the White House as a "landmark" move, directed federal agencies to set new safety standards for AI systems and required developers to share their safety test results and other critical information with the US government. US officials expect that the rapidly evolving AI technology will unleash military and intelligence competition between global powers. American security agencies were being directed to gain access to the "most powerful AI systems," which involves substantial efforts on procurement, a second administration official said. "We believe that we must out-compete our adversaries and mitigate the threats posed by adversary use of AI," the official told reporters. 
The NSM, he added, seeks to ensure the government is "accelerating adoption in a smart way, in a responsible way." Alongside the memorandum, the government is set to issue a framework document that provides guidance on "how agencies can and cannot use AI," the official said. In July, more than a dozen civil society groups such as the Center for Democracy & Technology sent an open letter to the Biden administration officials, including National Security Advisor Jake Sullivan, calling for robust safeguards to be built into the NSM. "Despite pledges of transparency, little is known about the AI being deployed by the country's largest intelligence, homeland security, and law enforcement entities like the Department of Homeland Security, Federal Bureau of Investigation, National Security Agency, and Central Intelligence Agency," the letter said. "Its deployment in national security contexts also risks perpetuating racial, ethnic or religious prejudice, and entrenching violations of privacy, civil rights and civil liberties." Sullivan is set to highlight the NSM in an address at the National Defense University in Washington on Thursday, the officials said. Most of the memorandum is unclassified and will be released publicly, while also containing a classified annex that primarily addresses adversary threats, they added.
[17]
White House issues memorandum on AI use in national security initiatives - SiliconANGLE
The White House today released a document that outlines how the government should use artificial intelligence in national security initiatives. The new national security memorandum, or NSM, is the fruit of an AI-focused executive order that President Joe Biden signed last year. Alongside the creation of the memorandum, the order launched an array of other machine learning initiatives in the federal government. Some are designed to improve AI safety, while one program will use the technology to find and fix flaws in critical software. The first focus of today's NSM is the way the government procures AI technologies for national security missions. Federal agencies will be required to streamline their procurement processes in this area, including by placing a bigger emphasis on buying products that interoperate with one another. When technology products interoperate well out of the box, setting them up requires less time and effort. Another section of the NSM calls on agencies to prioritize AI when building government supercomputers. Many of the supercomputers that the government has commissioned in recent years, including the exascale Frontier system, include graphics cards to speed up machine learning workloads. The NSM states that AI should likewise be a priority in the development of other emerging technology systems. Many of the initiatives that the memorandum outlines focus on the private sector. According to the White House, the NSM directs the government to take steps that will improve the security and diversity of chip supply chains. Additionally, the document makes detecting espionage efforts focused on the U.S. AI industry a "top-tier intelligence priority." The White House elaborated that the document "directs relevant U.S. government entities to provide AI developers with the timely cybersecurity and counterintelligence information necessary to keep their inventions secure." 
The NSM covers a number of other areas as well. It instructs the government to work with allies on the development of an international framework for ensuring that AI systems are safe, secure and trustworthy. Additionally, the NSM directs the National Economic Council to prepare an assessment of the U.S. AI industry's competitiveness. In conjunction with the release of the memorandum, the White House published a framework that provides federal agencies with guidance on how to implement the new requirements. The latter document covers, among other topics, the way officials should go about addressing AI risks. Lastly, the NSM specifies that the government is doubling down on an existing program called the National AI Research Resource, an initiative designed to provide scientists with access to compute infrastructure, datasets and other resources necessary for AI research.
[18]
Biden Admin Releases Sweeping Set Of Actions To Protect AI Misuse For Nuclear, Other Risks
In an unprecedented move, the White House released a new memorandum focused on increasing cooperation between the US national security establishment and the AI industry. Multiple administrations, from the Trump administration to the Biden administration, have focused on the growing threat to US national security interests, particularly threats from hostile non-state actors seeking to gain an undue advantage in high-tech industries such as semiconductor fabrication by seeking access to proprietary information or technology. Today's announcement extends the White House's efforts to secure American artificial intelligence against intelligence operations by hostile actors by increasing information sharing between the intelligence sector and the AI industry. The latest memorandum from the White House covers policy objectives to enhance US AI leadership through talent acquisition, leverage AI to protect American national security and develop a global AI use policy. Within these, the framework explicitly instructs government agencies to ensure that the AI industry can access relevant counterintelligence information to help protect against hostile state and non-state actors. It also aims to fortify against risks "stemming from deliberate misuse and accidents" by directing the Commerce Department to work through the AI Safety Institute (AISI) and engage the private sector through classified and unclassified activities. Commerce's work within the framework of protecting against AI misuse and accidents includes chemical weapons and biosecurity. Through this, the Department will "establish an enduring capability to lead voluntary unclassified pre-deployment safety testing of frontier AI models on behalf of the United States Government" to protect against unidentified risks spanning chemical, biological and cybersecurity misuse. 
Within three months of the memorandum, the AISI will test at least two AI models to check whether they can "aid offensive cyber operations, accelerate development of biological and/or chemical weapons, autonomously carry out malicious behavior, automate development and deployment of other models with such capabilities, and give rise to other risks." The nuclear aspect of safeguarding against the misuse of AI will be overseen by the Department of Energy, which will work through the National Nuclear Security Administration (NNSA). The framework requires the DOE to test AI models' "capacity to generate or exacerbate nuclear and radiological risks." It also requires the DOE to evaluate AI's capabilities for nuclear and radiological knowledge. Following its evaluations, the DOE will submit a report to the President's desk recommending any potential corrective actions, particularly for protecting against "unauthorized disclosure of restricted data or other classified information." Sharing that hostile actors have typically "employed techniques including research collaborations, investment schemes, insider threats, and advanced cyber espionage to collect and exploit United States scientific insights," the White House directs the Office of the Director of National Intelligence (ODNI) and the National Security Council (NSC) to "improve identification and assessment of foreign intelligence threats to the United States AI ecosystem" as well as adjacent sectors such as semiconductor fabrication. It also directs the Pentagon, Commerce, Homeland Security, the Justice Department and other US government agencies to "develop a list of the most plausible avenues" through which US state and non-state adversaries could harm the AI supply chain.
[19]
President Biden sets up new AI guardrails for military, intelligence agencies
The new guidelines prevent AI from making decisions about launching nuclear weapons and granting asylum. The White House issued its first national security memorandum outlining the use of artificial intelligence by the military and intelligence agencies. The White House also shared a shortened copy of the memo with the public. The new memo sets up guidelines for military and intelligence agencies for using AI in their day-to-day operations. The memo sets a series of deadlines for agencies to study the applications and regulations of AI tools, most of which will lapse following President Biden's term. The memo also aims to limit "the most dystopian possibilities, including the development of autonomous weapons," according to The New York Times. National Security Adviser Jake Sullivan announced the new directive today as part of a talk on AI's presence in government operations. Sullivan has been one of the President's most vocal proponents for examining the benefits and risks of AI technology. He also raised concerns about China's use of AI to control its population and spread misinformation, and discussed how the memo could spark conversations with other countries grappling with implementing their own AI strategies. The memorandum establishes some hard edges for AI usage, especially when it comes to weapons systems. The memo states that AI can never be used as a decision maker for launching nuclear weapons or assigning asylum status to immigrants coming to the US. It also prohibits AI from tracking anyone based on their race or religion or determining if a suspect is a known terrorist without human intervention. The memo also lays out protections for private-sector AI advances as "national assets that need to be protected...from spying or theft by foreign adversaries," according to the Times. The memorandum orders intelligence agencies to help private companies working on AI models secure their work and provide updated intelligence reports to protect their AI assets.
[20]
New Rules for US National Security Agencies Balance AI's Promise With Need to Protect Against Risks
WASHINGTON (AP) -- New rules from the White House on the use of artificial intelligence by U.S. national security and spy agencies aim to balance the technology's immense promise with the need to protect against its risks. The rules being announced Thursday are designed to ensure that national security agencies can access the latest and most powerful AI while also mitigating its misuse, according to Biden administration officials who briefed reporters on condition of anonymity under ground rules set by the White House. Recent advances in artificial intelligence have been hailed as potentially transformative for a long list of industries and sectors, including military, national security and intelligence. But there are risks to the technology's use by government, including possibilities it could be harnessed for mass surveillance, cyberattacks or even lethal autonomous devices. The new policy framework will prohibit certain uses of AI, such as any applications that would violate constitutionally protected civil rights or any system that would automate the deployment of nuclear weapons. The rules also are designed to promote responsible use of AI by directing national security and spy agencies to use the most advanced systems that also safeguard American values, the officials said. Other provisions call for improved security of the nation's computer chip supply chain and direct intelligence agencies to prioritize work to protect the American industry from foreign espionage campaigns. The guidelines were created following an ambitious executive order signed by President Joe Biden last year that directed federal agencies to create policies for how AI could be used. Officials said the rules are needed not only to ensure that AI is used responsibly but also to encourage the development of new AI systems and see that the U.S. keeps up with China and other rivals also working to harness the technology's power. 
Lethal autonomous drones, which are capable of taking out an enemy at their own discretion, remain a key concern about the military use of AI. Last year, the U.S. issued a declaration calling for international cooperation on setting standards for autonomous drones. Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
[24]
White House issues new rules on AI use for national security agencies
New rules from the White House on the use of artificial intelligence by U.S. national security and spy agencies aim to balance the technology's immense promise with the need to protect against its risks. The framework signed by President Joe Biden and announced Thursday is designed to ensure that national security agencies can access the latest and most powerful AI while also mitigating its misuse.

Recent advances in artificial intelligence have been hailed as potentially transformative for a long list of industries and sectors, including military, national security and intelligence. But there are risks to the technology's use by government, including possibilities it could be harnessed for mass surveillance, cyberattacks or even lethal autonomous devices.

"This is our nation's first-ever strategy for harnessing the power and managing the risks of AI to advance our national security," national security adviser Jake Sullivan said as he described the new policy to students during an appearance at the National Defense University in Washington.
President Biden's new directive aims to maintain U.S. leadership in AI while addressing national security concerns and ethical considerations, setting deadlines for federal agencies to implement AI technologies responsibly.
President Joe Biden has issued the first-ever national security memorandum on artificial intelligence, setting ambitious targets for federal agencies to harness AI technologies responsibly while maintaining the United States' competitive edge [1]. The directive, stemming from last year's executive order, emphasizes the importance of embedding AI in national security systems while protecting human rights, civil liberties, privacy, and safety [1].
The memorandum outlines three main objectives: ensuring the United States leads the world's development of safe, secure, and trustworthy AI; harnessing cutting-edge AI systems to advance national security missions; and building international consensus on the governance of AI.
Federal agencies face tight deadlines, some as short as 30 days, to accomplish various tasks [1]. By April, the AI Safety Institute at NIST is expected to begin voluntary preliminary testing of at least two frontier AI models to evaluate potential national security threats [1].
The directive acknowledges the potential benefits and risks of AI in national security contexts. It warns that misuse of AI could threaten U.S. national security, bolster authoritarianism, undermine democratic institutions, and facilitate human rights abuses [4].
The National Security Agency is tasked with developing capabilities to perform rapid, systematic, classified testing of AI models' capacity to detect, generate, or exacerbate offensive cyber threats by February [1].
The memorandum directs the National Economic Council to assess the U.S. private sector's competitive advantage in AI [3]. It also emphasizes the need to attract global talent in AI and related fields, such as semiconductor design and production [2].
The directive establishes clear boundaries for AI use in critical decisions. For instance, it explicitly prohibits AI systems from making decisions about nuclear weapons use [4]. It also mandates human oversight in targeting decisions and prohibits AI from making asylum grant decisions [5].
With the presidential election approaching, the future of the directive remains uncertain. The Republican Party platform suggests a different approach to AI development if Donald Trump were to win the election [1]. However, experts note bipartisan consensus on the critical nature of AI adoption for national security [1].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved