9 Sources
[1]
US Commerce Department mulls quarterly cybersecurity reports
The U.S. Department of Commerce has proposed mandatory reporting requirements for US-based artificial intelligence (AI) developers and cloud providers. This includes giving the federal government reports about developmental activities, cybersecurity measures, and outcomes from red-teaming efforts. It also includes testing for dangerous capabilities, such as the ability to assist in cyberattacks and the ability to lower the barrier to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons. The Department of Commerce's Bureau of Industry and Security (BIS) would require companies working on AI, as well as those developing, acquiring, or operating computing clusters, to submit quarterly reports about their activities from the past quarter and their plans for the next six months.

These actions come in compliance with the U.S. President's executive order last year outlining measures for ensuring AI safety and security. The order stated that developers of powerful AI models must share their safety test results and other critical information with the US government, in accordance with the Defense Production Act, before they make the model public. The order also directed the secretary of Commerce to ensure that those acquiring or developing large-scale computing clusters report the existence and location of these clusters and the total computing power available in each cluster. BIS has already collected information from companies in a mandatory survey conducted on January 26, 2024. The bureau says that it will not require companies that have already submitted information to resubmit it; however, it will require them to report any additions, updates, or changes to that information since the survey.

The reporting requirements set out under the proposed regulations apply to dual-use foundation models that meet technical conditions issued by the Department of Commerce. Developers create dual-use foundation models by training them on broad data. These models contain at least tens of billions of parameters and can apply to a wide range of contexts. Developers can easily modify these models to exhibit high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters. The Department of Commerce expects to update the technical conditions that determine who must comply with the regulations based on technological advancements.

The U.S. Department of Commerce notes that AI models are becoming integral to US industries associated with national defense. For instance, manufacturers of military equipment (e.g., aircraft, tanks, and missile launchers) use AI models to enhance the accuracy of their equipment. Similarly, manufacturers of signals intelligence devices (e.g., satellites, cameras, and radar) use AI models to improve how those devices capture signals. Dual-use models can enhance the capabilities of these products and services further by allowing them to operate more effectively across a wider range of environments, to respond more effectively to unexpected signals, and to combat additional types of cyberattacks. As such, the department believes it is essential that the U.S. government be ready to take action and ensure that dual-use foundation models produced by U.S. companies are available to the defense industrial base. To do so, it needs information about how many U.S. companies are developing, have plans to develop, or have the computing hardware necessary to develop dual-use foundation models. It also needs information about the characteristics of the models under development. Such information will allow the U.S. government to know whether it needs to take steps to stimulate the development of dual-use foundation models or support the development of specific types of models. Given that these models need to be integrated into the defense industrial base, the government also needs information about how companies are training these models to respond to specific inputs and how the companies have tested the safety and reliability of their models. BIS is also seeking stakeholder input on several aspects of the proposed rules.
[2]
US proposes requiring reporting for advanced AI, cloud providers
The U.S. Commerce Department said Monday it is proposing detailed reporting requirements for advanced artificial intelligence developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. The proposal from the department's Bureau of Industry and Security would set mandatory reporting to the federal government about development activities of "frontier" AI models and computing clusters. It would also require reporting on cybersecurity measures as well as outcomes from so-called red-teaming efforts like testing for dangerous capabilities including the ability to assist in cyberattacks or lowering barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons. External red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the enemy was termed the "red team." Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans and have catastrophic effects. Commerce said the information collected under the proposal "will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors." President Joe Biden in October 2023 signed an executive order requiring developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are released to the public. The rule would establish reporting requirements for advanced artificial intelligence (AI) models and computing clusters. The regulatory push comes as legislative action in Congress on AI has stalled. Earlier this year, the BIS conducted a pilot survey of AI developers. The Biden administration has taken a series of steps to prevent China from using U.S. technology for AI, as the burgeoning sector raises security concerns. Top cloud providers include Amazon.com's AWS, Alphabet's Google Cloud and Microsoft's Azure unit. (Reporting by David Shepardson; Editing by Chizu Nomiyama)
[3]
US proposes requiring reporting for advanced AI, cloud providers
[4]
US Proposes Requiring Reporting for Advanced AI, Cloud Providers
[5]
US sets reporting requirements for AI models, hardware
Washington wants to know what the biggest model-makers are up to.

The US Commerce Department has proposed a fresh set of reporting requirements for developers of cutting-edge AI models and those renting the infrastructure required to train them. The rules [PDF], published on Monday, are a response to the Biden administration's executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" enacted last northern autumn. The executive order established interim reporting requirements for those developing large AI compute clusters and/or training frontier models. The order also directed the Commerce Department to define and maintain permanent reporting standards. As we reported at the time, the interim limits targeted only the biggest models and compute clusters.

The updated rules mandate reporting of models that require more than 10^26 integer or floating-point operations to train. Models trained primarily on biological sequence data are subject to a lower threshold of 10^23 operations. Entities developing such projects will also be required to disclose the capabilities of their models, their infosec protections, and any outcomes of red-teaming efforts to ensure that AI technologies "meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors," according to the announcement of the proposed rules. Chief among the department's concerns is that sufficiently advanced models could be used to facilitate cyber crime - or lower the barrier to developing biological, chemical, or nuclear weapons and dirty bombs - if actions aren't taken to test, identify, and mitigate these threats.

The rules also require infrastructure operators to report if their compute clusters exceed 300Gbit/sec networking capacity and have a theoretical peak performance greater than 10^20 integer or floating-point operations per second for AI training - that is, 100 exaFLOPS without sparsity. As a reminder, that's equivalent to a cluster of 50,530 H100 GPUs connected via 400Gbit/sec InfiniBand, assuming FP8 precision, or a cluster of 101,060 such accelerators at the 16-bit precision more commonly employed in AI training. While the threshold for reporting compute capacity hasn't changed, the proposed rules increase the interconnect bandwidth from 100Gbit/sec to 300Gbit/sec. The rules also clarify that we're talking about dense compute capability versus the sparse floating point mathematics often touted by Nvidia and its rivals. Under these rules, those operating clusters already exceeding this threshold, or expecting to in the next month, will be required to report the scope of their operations on a quarterly basis.

While those numbers may have seemed enormous a year ago, the scale and pace of AI innovation has accelerated considerably. Some hyperscalers, like Meta, are deploying hundreds of thousands of GPUs. However, we expect the list of infrastructure providers subject to the rules to be rather short.

"As AI is progressing rapidly, it holds both tremendous promise and risk. This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security," commerce secretary Gina Raimondo declared in a canned statement.

If approved following a 30-day comment period, the proposed rules will be codified in the Bureau of Industry and Security's industrial base surveys data-collections regulations.
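As a sanity check on the cluster-size figures above, here is a minimal Python sketch deriving them from the 10^20 FLOPS (100 exaFLOPS) threshold. The per-GPU throughput constants are Nvidia's published dense (non-sparse) H100 figures, used purely as illustrative assumptions; they appear nowhere in the proposed rule itself.

```python
# Minimal sketch: how many H100 GPUs it takes to reach the 10^20 FLOPS
# (100 exaFLOPS, dense) reporting threshold for compute clusters.

THRESHOLD_FLOPS = 1e20  # theoretical peak performance that triggers reporting

# Assumed per-GPU dense throughput (Nvidia's published H100 numbers, no sparsity)
H100_DENSE_FLOPS = {
    "FP8": 1.979e15,    # ~1,979 TFLOPS per H100
    "FP16": 0.9895e15,  # ~989.5 TFLOPS per H100
}

for precision, per_gpu in H100_DENSE_FLOPS.items():
    gpus_needed = THRESHOLD_FLOPS / per_gpu
    print(f"{precision}: ~{gpus_needed:,.0f} H100s to hit 100 exaFLOPS")

# Expected output, matching the article's ~50,530 and ~101,060 figures:
# FP8: ~50,531 H100s to hit 100 exaFLOPS
# FP16: ~101,061 H100s to hit 100 exaFLOPS
```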
Dual-use applications of AI - those with the potential for both peaceful and non-peaceful use cases - have been on Uncle Sam's radar for some time now. Back in January, the Commerce Department proposed rules that would require certain infrastructure-as-a-service providers to tattle on any foreign person using their services to train large AI models likely to be capable of dual-use applications. While not named in the proposal, the implied target of these measures is China - which is known to be circumventing US trade restrictions on AI accelerators by running workloads on infrastructure rented inside the US. Today's proposal also comes a little under a week after the department tightened controls on quantum computing and semiconductor exports to China, Iran, Russia, and other nations of concern. ®
[6]
US proposes reporting mandates for AI developers, cloud providers
The Biden administration is proposing a reporting requirement for leading artificial intelligence (AI) developers and cloud providers to allow the federal government to evaluate their technology's safety and defense capabilities, the Commerce Department announced. The department said Monday the proposed rule from its Bureau of Industry and Security would mandate that developers of "frontier" AI models and operators of computing clusters provide detailed reporting about their developmental activities and cybersecurity measures. The rule would also ask for the developers' results from red teaming -- the process of testing for flaws and vulnerabilities in an AI system. The bureau said this would include "testing for dangerous capabilities like the ability to assist in cyberattacks or lower the barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons." The proposal comes amid a wider push from the federal government to better understand the capabilities and risks of AI as the technology develops. Commerce Secretary Gina Raimondo noted in a statement that AI is "progressing rapidly" with both "tremendous promise and risk." "The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors," the Commerce Department security bureau said in a release. It follows a pilot survey of AI developers conducted earlier this year, the bureau said. The conversation over guardrails on AI has spread across government, including Congress, where lawmakers have held various hearings and meetings with experts to understand the risks and benefits of the technology. President Biden last year issued a sweeping executive order on AI safety, risks, and the preservation of data privacy. The AI Safety Institute was launched within the Commerce Department as part of this order. Last month, leading AI companies OpenAI and Anthropic signed agreements with the U.S. government for their AI models to be used for research, testing, and evaluation.
[7]
US proposes requiring reporting for advanced AI, cloud providers
[8]
AI developers and cloud providers may be required to report to the government - Fast Company
[9]
Commerce Department Proposes Mandatory Reporting Requirements for AI Firms | PYMNTS.com
The Commerce Department's Bureau of Industry and Security (BIS) aims to require the world's leading artificial intelligence (AI) developers and cloud providers to provide detailed reporting to the federal government. The BIS released a Notice of Proposed Rulemaking on Monday (Sept. 9), saying in a press release that the new mandatory reporting requirements are intended to ensure that AI is safe and reliable, can withstand cyberattacks and has limited risk of misuse by foreign adversaries or non-state actors. "As AI is progressing rapidly, it holds both tremendous promise and risk," Secretary of Commerce Gina M. Raimondo said in the release. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security." The reporting mandated by the proposed rule would encompass developmental activities, cybersecurity measures and outcomes from red-teaming efforts, according to the release. The red-teaming efforts would involve testing for the ability to assist in cyberattacks; the ability to lower the barriers to entry to developing chemical, biological, radiological or nuclear weapons; and other dangerous capabilities, per the release. The BIS has long conducted defense industrial base surveys that inform the government about emerging risks in important industries, Under Secretary of Commerce for Industry and Security Alan F. Estevez said in the release. "This proposed reporting requirement would help us understand the capabilities and security of our most advanced AI systems," Estevez said. The Biden administration issued an executive order aimed at safe AI development in October 2023, noting that more action was required and that the White House would work with Congress in hopes of crafting bipartisan AI legislation. Biden's requirements for AI companies included a rule that developers "of the most powerful AI systems" must share their safety test results and other key information with the federal government; that AI firms must come up with "standards, tools and tests" to make sure their systems are secure and trustworthy; and that the companies must guard against the threat of "using AI to engineer dangerous biological materials" by establishing strong standards for biological synthesis screening.
The US Department of Commerce has proposed new regulations requiring advanced AI developers and cloud service providers to submit quarterly reports on their AI development activities and cybersecurity measures. This move aims to enhance national security and maintain the U.S. technological edge.
The U.S. Department of Commerce has unveiled a proposal that would require advanced artificial intelligence (AI) and cloud computing service providers to submit quarterly reports on cybersecurity and AI developments [1]. This initiative, announced on September 9, 2024, is part of a broader effort to bolster national security and maintain the United States' technological edge in an increasingly competitive global landscape.

The proposed rule would apply to U.S. Infrastructure as a Service (IaaS) providers and foreign IaaS providers with U.S. customers [2]. It specifically targets those offering advanced AI capabilities, including services that train AI models with over 1 billion parameters. The regulation would also encompass providers with over 1 million customers in the U.S. and those generating annual revenues exceeding $50 million from U.S. customers [3].
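For illustration only, the applicability triggers described above can be restated as a simple check. This is a hypothetical sketch: the function name, parameter names, and the assumption that meeting any one criterion suffices are ours, not language from the proposed rule.

```python
# Hypothetical sketch of the applicability triggers summarized above.
# Assumes any single criterion is enough to trigger reporting, which is
# an interpretation for illustration, not text from the proposed rule.

def reporting_may_apply(
    trains_models_over_1b_params: bool,
    us_customers: int,
    annual_us_revenue_usd: float,
) -> bool:
    """Return True if an IaaS provider appears to meet any reported trigger:
    training AI models with over 1 billion parameters, serving over 1 million
    U.S. customers, or earning over $50 million annually from U.S. customers."""
    return (
        trains_models_over_1b_params
        or us_customers > 1_000_000
        or annual_us_revenue_usd > 50_000_000
    )

# Example: a provider with 2 million U.S. customers would appear to qualify.
print(reporting_may_apply(False, 2_000_000, 10_000_000.0))  # True
```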
Under the proposed rules, affected companies would be required to submit detailed reports covering several aspects of their operations, including their developmental activities, cybersecurity measures, and the outcomes of red-teaming efforts [4].

The Commerce Department's initiative is driven by several key factors, including the growing role of advanced AI in industries tied to national defense, the risk that powerful models could assist cyberattacks or lower the barriers to developing weapons, and the need to keep pace with rapid technological change [5].
While the proposal aims to enhance security and oversight, it has raised concerns within the tech industry. Some worry about the potential burden of compliance, especially for smaller companies that may fall under the reporting requirements. Others question the effectiveness of such measures in preventing misuse of cloud services by malicious actors [5].

The Commerce Department has opened a 60-day public comment period on the proposed rules. After reviewing feedback, the department will work on finalizing the regulations. The exact timeline for implementation remains uncertain, but the government appears keen on moving swiftly to address the rapidly evolving AI landscape [1].

This U.S. initiative comes amid a global push for greater regulation and oversight of AI technologies. It reflects growing concerns about the potential risks associated with advanced AI systems and the need for responsible development and deployment. The proposed rules could potentially influence similar regulatory efforts in other countries, shaping the future of AI governance on a global scale.
Summarized by Navi