8 Sources
[1]
DHS lays out its ground rules for businesses using AI
It aligns with other government efforts to maintain security when using AI. The US Department of Homeland Security (DHS) has introduced a new set of guidelines in an effort to promote the secure and responsible use of AI across what it deems to be critical infrastructure sectors. The 'Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure' hopes to tackle existing challenges so that AI can be used more widely in areas where its time-saving credentials matter. In an announcement, the DHS says the framework is the first of its kind for all levels of the supply chain, including cloud and compute firms, AI developers and even consumers. The framework looks to address the risks associated with artificial intelligence, including system vulnerabilities and attacks. Noting the rise in deployment of generative AI across these critical infrastructure sectors, the DHS added: "Given the increasingly interconnected nature of these systems, their disruption can have devastating consequences for homeland security." "The Framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more," noted DHS Secretary Alejandro N. Mayorkas. "I urge every executive, developer, and elected official to adopt and use this Framework to help build a safer future for all." The framework is broken down into a series of actions for each member of the supply chain, including cloud and compute infrastructure providers, AI developers, critical infrastructure owners and operators, civic society members such as universities and research institutions, and public sector entities like federal, state, local, tribal, and territorial governments. US Secretary of Commerce, Gina Raimondo, added: "This new Framework will complement the work we're doing at the Department of Commerce to help ensure AI is responsibly deployed across our critical infrastructure to help protect our fellow Americans and secure the future of the American economy."
[2]
US government releases guidelines for AI in critical infrastructure
The framework recommends that AI developers evaluate potentially dangerous capabilities in their products. The US government released guidelines this week for using artificial intelligence (AI) in the power grid, water system, air travel network, and other areas of critical infrastructure. Private industry would have to adopt and implement the guidelines announced by the US Homeland Security Department, which were developed in consultation with the department's advisory Artificial Intelligence Safety and Security Board. "We intend the framework to be, frankly, a living document and to change as developments in the industry change as well," Alejandro Mayorkas, homeland security secretary, told reporters. The framework recommends that AI developers evaluate potentially dangerous capabilities in their products, ensure their products align with "human-centric values" and protect users' privacy. The cloud-computing infrastructure would need to vet hardware and software suppliers and protect the physical security of data centres. Owners and operators of critical infrastructure are advised to have stronger cybersecurity protocols that consider AI-related risks and provide transparency about how AI is used. There are also guidelines for state and local governments. Asked if the framework could change once President-elect Donald Trump takes the oath of office in January, Mayorkas stressed that he was implementing the policies of current President Joe Biden's administration. "The president-elect will determine what policies to promulgate and implement," Mayorkas said. "And that is, of course, the president-elect's prerogative".
[3]
Homeland Security Department releases framework for using AI in critical infrastructure
WASHINGTON -- The Biden administration on Thursday released guidelines for using artificial intelligence in the power grid, water system, air travel network and other pieces of critical infrastructure. Private industry would have to adopt and implement the guidelines announced by the Homeland Security Department, which were developed in consultation with the department's advisory Artificial Intelligence Safety and Security Board. Homeland Security Secretary Alejandro Mayorkas told reporters that "we intend the framework to be, frankly, a living document and to change as developments in the industry change as well." The framework recommends that AI developers evaluate potentially dangerous capabilities in their products, ensure their products align with "human-centric values" and protect users' privacy. The cloud-computing infrastructure would need to vet hardware and software suppliers and protect the physical security of data centers. Owners and operators of critical infrastructure are advised to have stronger cybersecurity protocols that consider AI-related risks and provide transparency about how AI is used. There are also guidelines for state and local governments. Asked if the framework could possibly change once President-elect Donald Trump takes the oath of office in January, Mayorkas stressed that he was implementing the policies of President Joe Biden's administration. "The president-elect will determine what policies to promulgate and implement," Mayorkas said. "And that is, of course, the president-elect's prerogative."
[4]
Homeland Security Department releases framework for using AI in critical infrastructure
WASHINGTON (AP) -- The Biden administration on Thursday released guidelines for using artificial intelligence in the power grid, water system, air travel network and other pieces of critical infrastructure. Private industry would have to adopt and implement the guidelines announced by the Homeland Security Department, which were developed in consultation with the department's advisory Artificial Intelligence Safety and Security Board. Homeland Security Secretary Alejandro Mayorkas told reporters that "we intend the framework to be, frankly, a living document and to change as developments in the industry change as well." The framework recommends that AI developers evaluate potentially dangerous capabilities in their products, ensure their products align with "human-centric values" and protect users' privacy. The cloud-computing infrastructure would need to vet hardware and software suppliers and protect the physical security of data centers. Owners and operators of critical infrastructure are advised to have stronger cybersecurity protocols that consider AI-related risks and provide transparency about how AI is used. There are also guidelines for state and local governments. Asked if the framework could possibly change once President-elect Donald Trump takes the oath of office in January, Mayorkas stressed that he was implementing the policies of President Joe Biden's administration. "The president-elect will determine what policies to promulgate and implement," Mayorkas said. "And that is, of course, the president-elect's prerogative."
[5]
Homeland Security Department Releases Framework for Using AI in Critical Infrastructure
WASHINGTON (AP) -- The Biden administration on Thursday released guidelines for using artificial intelligence in the power grid, water system, air travel network and other pieces of critical infrastructure. Private industry would have to adopt and implement the guidelines announced by the Homeland Security Department, which were developed in consultation with the department's advisory Artificial Intelligence Safety and Security Board. Homeland Security Secretary Alejandro Mayorkas told reporters that "we intend the framework to be, frankly, a living document and to change as developments in the industry change as well." The framework recommends that AI developers evaluate potentially dangerous capabilities in their products, ensure their products align with "human-centric values" and protect users' privacy. The cloud-computing infrastructure would need to vet hardware and software suppliers and protect the physical security of data centers. Owners and operators of critical infrastructure are advised to have stronger cybersecurity protocols that consider AI-related risks and provide transparency about how AI is used. There are also guidelines for state and local governments. Asked if the framework could possibly change once President-elect Donald Trump takes the oath of office in January, Mayorkas stressed that he was implementing the policies of President Joe Biden's administration. "The president-elect will determine what policies to promulgate and implement," Mayorkas said. "And that is, of course, the president-elect's prerogative."
[6]
Homeland Security Department releases framework for using AI in critical infrastructure
WASHINGTON (AP) -- The Biden administration on Thursday released guidelines for using artificial intelligence in the power grid, water system, air travel network and other pieces of critical infrastructure. Private industry would have to adopt and implement the guidelines announced by the Homeland Security Department, which were developed in consultation with the department's advisory Artificial Intelligence Safety and Security Board. Homeland Security Secretary Alejandro Mayorkas told reporters that "we intend the framework to be, frankly, a living document and to change as developments in the industry change as well." The framework recommends that AI developers evaluate potentially dangerous capabilities in their products, ensure their products align with "human-centric values" and protect users' privacy. The cloud-computing infrastructure would need to vet hardware and software suppliers and protect the physical security of data centers. Owners and operators of critical infrastructure are advised to have stronger cybersecurity protocols that consider AI-related risks and provide transparency about how AI is used. There are also guidelines for state and local governments. Asked if the framework could possibly change once President-elect Donald Trump takes the oath of office in January, Mayorkas stressed that he was implementing the policies of President Joe Biden's administration. "The president-elect will determine what policies to promulgate and implement," Mayorkas said. "And that is, of course, the president-elect's prerogative."
[7]
Homeland Security Department to Release New A.I. Guidance
The voluntary best practices are aimed at companies that own or operate critical infrastructure. Companies that own or operate critical infrastructure increasingly rely on artificial intelligence. Airports use A.I. in their security systems; water companies use it to predict pipe failures; and energy companies use it to project demand. On Thursday, the U.S. Department of Homeland Security will release new guidance for how such companies use the technology. The document, a compilation of voluntary best practices, stems from an executive order that President Biden signed more than a year ago to create safeguards around A.I. Among other measures, it directed the Department of Homeland Security to create a board of experts from the private and public sectors to examine how best to protect critical infrastructure. The risks run the gamut from an airline meltdown to the exposure of confidential personal information. Alejandro N. Mayorkas, the homeland security secretary, first convened the board in May. It includes Sam Altman, the chief executive of OpenAI; Jensen Huang, the chief executive of Nvidia; Sundar Pichai, the chief executive of Alphabet; and Vicki Hollub, the chief executive of Occidental Petroleum. Given the broad range of companies whose executives worked to put it together, the guidance is general in scope. It encourages companies that provide cloud computing services, like Amazon, to monitor for suspicious activity and establish clear protocols for reporting it. It suggests developers like OpenAI put in place strong privacy practices and look for potential biases. And for critical infrastructure owners and operators, like airlines, it encourages strong privacy practices and transparency around the use of A.I. The 35-page document stops short of suggesting any formal metrics that could be used to help companies hold themselves accountable for complying with the guidelines, though it calls on legislators to supplement companies' internal oversight mechanisms with regulation -- a requirement that President Biden acknowledged was necessary when he issued his executive order. "It's a broad acknowledgment that we're all responsible for our individual contributions to A.I. and the technology," said Ed Bastian, the chief executive of Delta Air Lines, who is also on the board. "It's something that, as the end user, we've been victims of candidly in the past." Mr. Bastian was referring to a flawed software update issued this summer by the cybersecurity company CrowdStrike that led to widespread technological disruptions. The outage, which affected Delta more than other carriers, highlighted operational vulnerabilities and cost Delta an estimated $500 million. He said he hoped the new guidance could help avoid a similarly disastrous problem. "Putting out a framework that everyone sees each other's accountability to the ecosystem sounds simple, but it's a massive step in the right direction," he said. The board's call to supplement the new guidance with regulation may not be answered any time soon. Such laws do not appear to be an immediate priority for President-elect Donald J. Trump. He has said he would revoke Mr. Biden's executive order on artificial intelligence as part of his deregulatory agenda. One of his most urgent priorities for the Department of Homeland Security is cracking down on illegal immigrants. Mr. Mayorkas said the framework would still be useful without enforcement mechanisms. He compared the work companies are doing to safeguard against the risks of A.I. to the work that they did when cybersecurity risks first emerged. He hopes they move faster. "It took many companies, not all, but many companies, too much time to build governance regimes to address the breadth and depth of the cybersecurity challenge," Mr. Mayorkas said. "By calling this out in terms of a culture of safety, security and accountability in the framework, we seek to ensure a more accelerated uptake in the domain of A.I."
[8]
Biden Admin: If Trump Wants to Squash 'Rare' AI Plan, That's His 'Prerogative'
The Department of Homeland Security today released a framework for using AI in critical infrastructure, such as emergency services, the power grid, and water and IT systems. "The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and so much more," Homeland Security Secretary Alejandro Mayorkas said on a call with reporters. The effort stems from President Biden's 2023 Executive Order on AI. Biden tasked Mayorkas with forming a board to discuss the use of AI, which came together in April, the Associated Press reports. The group includes OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, Google CEO Sundar Pichai, Microsoft CEO Satya Nadella, and more. The framework begins to answer an essential question: What will these companies do, and how will it benefit the American public? The 35-page document carves out a game plan to use AI to enhance critical infrastructure, with extensive feedback from a range of stakeholders. It proposes a set of "voluntary responsibilities" for the major players: AI companies, cloud computing providers, critical infrastructure operators, civic society, and the public sector. "It is, quite frankly, exceedingly rare to have leading AI developers engaged directly with civil society on issues that are at the forefront of today's AI debates," says Mayorkas. But will any of this happen when President-elect Donald Trump takes office? He's expected to take a hands-off approach to AI, with an emphasis on deregulation. Mayorkas also cautioned against over-regulating, so a second Trump administration may find common ground in this effort. "It is quite important that [AI regulation] should not impair our leadership in the world and not suffocate our inventiveness," Mayorkas said. "This framework...could ward off precipitous regulation or legislation that does not move at the speed of business and does not embrace and support our innovative leadership." But whether the framework persists ultimately depends on Trump. "The President-elect will determine what policies to promulgate and implement, and that is, of course, the President-elect's prerogative," Mayorkas said. "But right now, we have one president, and we are executing accordingly." Even if Trump strikes down the effort, Mayorkas said he would expect the 23 board members who signed onto the plan to implement it and "catalyze other organizations in their respective spheres...to adopt and implement the guidelines as well." In July, many of the companies involved here joined forces for a coalition intended to enhance trust and security in the use and deployment of AI. The Coalition for Secure AI (CoSAI) will be hosted by OASIS Open, a nonprofit promoting open standards development. Sponsors include Amazon, Anthropic, IBM, Intel, Microsoft, Nvidia, and OpenAI.
The DHS has introduced a comprehensive framework to guide the responsible and secure implementation of AI across critical infrastructure sectors, addressing potential risks and promoting widespread adoption.
The U.S. Department of Homeland Security (DHS) has unveiled a pioneering framework aimed at promoting the secure and responsible use of artificial intelligence (AI) in critical infrastructure sectors. This initiative, titled 'Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure,' marks a significant step in addressing the challenges and risks associated with AI deployment in crucial areas of national importance [1].
The framework is designed to be all-encompassing, targeting various levels of the AI supply chain, including cloud and compute firms, AI developers, and even end-users. It aims to tackle existing challenges to facilitate wider AI adoption in time-sensitive applications while maintaining security and responsibility [1].
DHS Secretary Alejandro N. Mayorkas emphasized the potential impact of the framework, stating, "The Framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more" [1].
The framework outlines specific recommendations for different stakeholders:
AI Developers: Evaluate potentially dangerous capabilities in their products, ensure alignment with "human-centric values," and protect user privacy [2].
Cloud Computing Infrastructure: Vet hardware and software suppliers and enhance physical security of data centers [3].
Critical Infrastructure Owners and Operators: Implement stronger cybersecurity protocols considering AI-related risks and provide transparency about AI usage [4].
State and Local Governments: Adhere to specific guidelines tailored for public sector entities [5].
The framework was developed in consultation with the DHS's advisory Artificial Intelligence Safety and Security Board, ensuring a comprehensive and expert-driven approach. Secretary Mayorkas highlighted the framework's adaptability, describing it as a "living document" that will evolve alongside industry developments [3].
This framework aligns with other government efforts to maintain security in AI implementation. U.S. Secretary of Commerce Gina Raimondo noted that the framework would complement the Department of Commerce's work to ensure responsible AI deployment across critical infrastructure [1].
As AI continues to play an increasingly significant role in critical infrastructure, this framework represents a crucial step towards balancing innovation with security and responsibility in the rapidly evolving landscape of artificial intelligence.
Summarized by Navi