Curated by THEOUTPOST
On Thu, 21 Nov, 12:04 AM UTC
8 Sources
[1]
AI experts and US allies hold inaugural meeting on safety institutes
Experts in artificial intelligence (AI) are gathering in San Francisco to talk about how to keep models safe, but uncertainty from the incoming Trump administration overshadows their work. Government scientists and AI experts are meeting in the US this week as questions about the industry's future loom ahead of President-elect Donald Trump's second term in the White House. Officials from the US and its allies are hoping to talk about how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse. "We have a choice," said US Commerce Secretary Gina Raimondo to the crowd of attendees on Wednesday. "We are the ones developing this technology. You are the ones developing this technology. We can decide what it looks like." The meeting was the first of the International Network of AI Safety Institutes, which was announced during the AI summit in Seoul in May. The lingering uncertainty comes from Trump's camp, which has promised to "repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology". Biden signed a sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department. But Trump hasn't made clear what about the order he dislikes or what he'd do about the AI Safety Institute. Trump's transition team didn't respond to emails this week seeking comment. Trump didn't spend much time talking about AI during his four years as president, though in 2019 he became the first to sign an executive order about AI. It directed federal agencies to prioritise research and development in the field. Addressing concerns about slowing down innovation, Raimondo said she wanted to make it clear that the US AI Safety Institute is not a regulator and also "not in the business of stifling innovation".
"But here's the thing. Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation," she said. Some experts expect the kind of technical work happening at an old military officers' club at San Francisco's Presidio National Park this week to proceed regardless of who's in charge. "There's no reason to believe that we'll be doing a 180 when it comes to the work of the AI Safety Institute," said Heather West, a senior fellow at the Center for European Policy Analysis. Behind the rhetoric, she said there's already been overlap. Raimondo and other officials sought to press home the idea that AI safety is not a partisan issue. "And by the way, this room is bigger than politics. Politics is on everybody's mind. I don't want to talk about politics. I don't care what political party you're in, this is not in Republican interest or Democratic interest," she said. "It's frankly in no one's interest anywhere in the world, in any political party, for AI to be dangerous, or for AI to get into the hands of malicious non-state actors that want to cause destruction and sow chaos."
[2]
US gathers allies to talk AI safety as Trump's vow to undo Biden's AI policy overshadows their work
President-elect Donald Trump has vowed to repeal President Joe Biden's signature artificial intelligence policy when he returns to the White House for a second term. What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures. Hosted by the Biden administration, officials from a number of US allies - among them Australia, Canada, Japan, Kenya, Singapore, the United Kingdom and the 27-nation European Union - began meeting Wednesday in the California city that's a commercial hub for AI development. Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse. It's the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology. "We have a choice," said US Commerce Secretary Gina Raimondo to the crowd of officials, academics and private-sector attendees on Wednesday. "We are the ones developing this technology. You are the ones developing this technology. We can decide what it looks like." Like other speakers, Raimondo addressed the opportunities and risks of AI - including "the possibility of human extinction" - and asked why we would allow that. "Why would we choose to allow AI to replace us? Why would we choose to allow the deployment of AI that will cause widespread unemployment and societal disruption that goes along with it? Why would we compromise our global security?"
she said. "We shouldn't. In fact, I would argue we have an obligation to keep our eyes at every step wide open to those risks and prevent them from happening. And let's not let our ambition blind us and allow us to sleepwalk into our own undoing." Hong Yuen Poon, deputy secretary of Singapore's Ministry of Digital Development and Information, said that a "helping-one-another mindset is important" between countries when it comes to AI safety, including with "developing countries which may not have the full resources" to study it. Biden signed a sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department. Trump promised in his presidential campaign platform to "repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology." But he hasn't made clear what about the order he dislikes or what he'd do about the AI Safety Institute. Trump's transition team didn't respond to emails this week seeking comment. Addressing concerns about slowing down innovation, Raimondo said she wanted to make it clear that the US AI Safety Institute is not a regulator and also "not in the business of stifling innovation." "But here's the thing. Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation," she said. Tech industry groups - backed by companies including Amazon, Google, Meta and Microsoft - are mostly pleased with the AI safety approach of Biden's Commerce Department, which has focused on setting voluntary standards. They have pushed for Congress to preserve the new agency and codify its work into law. Some experts expect the kind of technical work happening at an old military officers' club at San Francisco's Presidio National Park this week to proceed regardless of who's in charge.
"There's no reason to believe that we'll be doing a 180 when it comes to the work of the AI Safety Institute," said Heather West, a senior fellow at the Center for European Policy Analysis. Behind the rhetoric, she said there's already been overlap. Trump didn't spend much time talking about AI during his four years as president, though in 2019 he became the first to sign an executive order about AI. It directed federal agencies to prioritize research and development in the field. Before that, tech experts were pushing the Trump-era White House for a stronger AI strategy to match what other countries were pursuing. Trump in the waning weeks of his administration signed an executive order promoting the use of "trustworthy" AI in the federal government. Those policies carried over into the Biden administration. All of that was before the 2022 debut of ChatGPT, which brought public fascination and worry about the possibilities of generative AI and helped spark a boom in AI-affiliated businesses. What's also different this time is that tech mogul and Trump adviser Elon Musk has been picked to lead a government cost-cutting commission. Musk holds strong opinions about AI's risks and grudges against some AI industry leaders, particularly ChatGPT maker OpenAI, which he has sued. Raimondo and other officials sought to press home the idea that AI safety is not a partisan issue. "And by the way, this room is bigger than politics. Politics is on everybody's mind. I don't want to talk about politics. I don't care what political party you're in, this is not in Republican interest or Democratic interest," she said. "It's frankly in no one's interest anywhere in the world, in any political party, for AI to be dangerous, or for AI to get into the hands of malicious non-state actors that want to cause destruction and sow chaos."
[3]
US gathers allies to talk AI safety, Trump's vow to undo Biden's AI policy overshadows their work
SAN FRANCISCO -- President-elect Donald Trump has vowed to repeal President Joe Biden's signature artificial intelligence policy when he returns to the White House for a second term. What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures. Hosted by the Biden administration, officials from a number of U.S. allies -- among them Australia, Canada, Japan, Kenya, Singapore, the United Kingdom and the 27-nation European Union -- began meeting Wednesday in the California city that's a commercial hub for AI development. Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse. It's the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology. Hong Yuen Poon, deputy secretary of Singapore's Ministry of Digital Development and Information, said Wednesday that a "helping-one-another mindset is important" between countries when it comes to AI safety, including with "developing countries which may not have the full resources" to study it. Biden signed a sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department. Trump promised in his presidential campaign platform to "repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology." But he hasn't made clear what about the order he dislikes or what he'd do about the AI Safety Institute. Trump's transition team didn't respond to emails this week seeking comment.
Tech industry groups -- backed by companies including Amazon, Google, Meta and Microsoft -- are mostly pleased with the AI safety approach of Biden's Commerce Secretary Gina Raimondo, which has focused on setting voluntary standards. They have pushed for Congress to preserve the new agency and codify its work into law. Some experts expect the kind of technical work happening at an old military officers' club at San Francisco's Presidio National Park this week to proceed regardless of who's in charge. "There's no reason to believe that we'll be doing a 180 when it comes to the work of the AI Safety Institute," said Heather West, a senior fellow at the Center for European Policy Analysis. Behind the rhetoric, she said there's already been overlap. Trump didn't spend much time talking about AI during his four years as president, though in 2019 he became the first to sign an executive order about AI. It directed federal agencies to prioritize research and development in the field. Before that, tech experts were pushing the Trump-era White House for a stronger AI strategy to match what other countries were pursuing. Trump in the waning weeks of his administration signed an executive order promoting the use of "trustworthy" AI in the federal government. Those policies carried over into the Biden administration. All of that was before the 2022 debut of ChatGPT, which brought public fascination and worry about the possibilities of generative AI and helped spark a boom in AI-affiliated businesses. What's also different this time is that tech mogul and Trump adviser Elon Musk has been picked to lead a government cost-cutting commission. Musk holds strong opinions about AI's risks and grudges against some AI industry leaders, particularly ChatGPT maker OpenAI, which he has sued.
[4]
US Gathers Allies to Talk AI Safety. Trump's Vow to Undo Biden's AI Policy Overshadows Their Work
President-elect Donald Trump has vowed to repeal President Joe Biden's signature artificial intelligence policy when he returns to the White House for a second term. What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures. Hosted by the Biden administration, officials from a number of U.S. allies -- among them Canada, Kenya, Singapore, the United Kingdom and the 27-nation European Union -- are scheduled to begin meeting Wednesday in the California city that's a commercial hub for AI development. Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse. It's the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology. Biden signed a sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department. Trump promised in his presidential campaign platform to "repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology." But he hasn't made clear what about the order he dislikes or what he'd do about the AI Safety Institute. Trump's transition team didn't respond to emails this week seeking comment. Tech industry groups -- backed by companies including Amazon, Google, Meta and Microsoft -- are mostly pleased with the AI safety approach of Biden's Commerce Secretary Gina Raimondo and have pushed for Congress to preserve the new agency and codify its work into law.
Some experts expect the kind of technical work happening in San Francisco this week to proceed regardless of who's in charge. "There's no reason to believe that we'll be doing a 180 when it comes to the work of the AI Safety Institute," said Heather West, a senior fellow at the Center for European Policy Analysis. Behind the rhetoric, she said there's already been overlap. Trump didn't spend much time talking about AI during his four years as president, though in 2019 he became the first to sign an executive order about AI. It directed federal agencies to prioritize research and development in the field. Before that, tech experts were pushing the Trump-era White House for a stronger AI strategy to match what other countries were pursuing. Trump in the waning weeks of his administration signed an executive order promoting the use of "trustworthy" AI in the federal government. Those policies carried over into the Biden administration. All of that was before the 2022 debut of ChatGPT, which brought public fascination and worry about the possibilities of generative AI and helped spark a boom in AI-affiliated businesses. What's also different this time is that tech mogul and Trump adviser Elon Musk has been picked to lead a government cost-cutting commission. Musk holds strong opinions about AI's risks and grudges against some AI industry leaders, particularly ChatGPT maker OpenAI, which he has sued. Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
[5]
U.S. Enlists Allies on AI Safety as Trump Vows to Undo Current Policy
President-elect Donald Trump has vowed to repeal President Joe Biden's signature artificial intelligence policy when he returns to the White House for a second term. What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures. Hosted by the Biden administration, officials from a number of U.S. allies -- among them Australia, Canada, Japan, Kenya, Singapore, the United Kingdom and the 27-nation European Union -- began meeting Wednesday in the California city that's a commercial hub for AI development. Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse.
[6]
US gathers allies to talk AI safety. Trump's vow to undo Biden's AI policy overshadows their work
President-elect Donald Trump has vowed to repeal President Joe Biden's signature artificial intelligence policy when he returns to the White House for a second term. What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures. Hosted by the Biden administration, officials from a number of U.S. allies -- among them Canada, Kenya, Singapore, the United Kingdom and the 27-nation European Union -- are scheduled to begin meeting Wednesday in the California city that's a commercial hub for AI development. Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse. It's the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology. Biden signed a sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department. Trump promised in his presidential campaign platform to "repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology." But he hasn't made clear what about the order he dislikes or what he'd do about the AI Safety Institute. Trump's transition team didn't respond to emails this week seeking comment. Tech industry groups -- backed by companies including Amazon, Google, Meta and Microsoft -- are mostly pleased with the AI safety approach of Biden's Commerce Secretary Gina Raimondo and have pushed for Congress to preserve the new agency and codify its work into law.
Some experts expect the kind of technical work happening in San Francisco this week to proceed regardless of who's in charge. "There's no reason to believe that we'll be doing a 180 when it comes to the work of the AI Safety Institute," said Heather West, a senior fellow at the Center for European Policy Analysis. Behind the rhetoric, she said there's already been overlap. Trump didn't spend much time talking about AI during his four years as president, though in 2019 he became the first to sign an executive order about AI. It directed federal agencies to prioritize research and development in the field. Before that, tech experts were pushing the Trump-era White House for a stronger AI strategy to match what other countries were pursuing. Trump in the waning weeks of his administration signed an executive order promoting the use of "trustworthy" AI in the federal government. Those policies carried over into the Biden administration. All of that was before the 2022 debut of ChatGPT, which brought public fascination and worry about the possibilities of generative AI and helped spark a boom in AI-affiliated businesses. What's also different this time is that tech mogul and Trump adviser Elon Musk has been picked to lead a government cost-cutting commission. Musk holds strong opinions about AI's risks and grudges against some AI industry leaders, particularly ChatGPT maker OpenAI, which he has sued.
[7]
US gathers allies to talk AI safety. Trump's vow to undo Biden's AI policy overshadows their work
President-elect Donald Trump has vowed to repeal President Joe Biden's signature artificial intelligence policy when he returns to the White House for a second term. What that actually means for the future of AI technology remains to be seen. Among those who could use some clarity are the government scientists and AI experts from multiple countries gathering in San Francisco this week to deliberate on AI safety measures. Hosted by the Biden administration, officials from a number of U.S. allies -- among them Canada, Kenya, Singapore, the United Kingdom and the 27-nation European Union -- are scheduled to begin meeting Wednesday in the California city that's a commercial hub for AI development. Their agenda addresses topics such as how to better detect and combat a flood of AI-generated deepfakes fueling fraud, harmful impersonation and sexual abuse. It's the first such meeting since world leaders agreed at an AI summit in South Korea in May to build a network of publicly backed safety institutes to advance research and testing of the technology. Biden signed a sweeping AI executive order last year and this year formed the new AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department. Trump promised in his presidential campaign platform to "repeal Joe Biden's dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology." But he hasn't made clear what about the order he dislikes or what he'd do about the AI Safety Institute. Trump's transition team didn't respond to emails this week seeking comment.
Tech industry groups -- backed by companies including Amazon, Google, Meta and Microsoft -- are mostly pleased with the AI safety approach of Biden's Commerce Secretary Gina Raimondo and have pushed for Congress to preserve the new agency and codify its work into law. Some experts expect the kind of technical work happening in San Francisco this week to proceed regardless of who's in charge. "There's no reason to believe that we'll be doing a 180 when it comes to the work of the AI Safety Institute," said Heather West, a senior fellow at the Center for European Policy Analysis. Behind the rhetoric, she said there's already been overlap. Trump didn't spend much time talking about AI during his four years as president, though in 2019 he became the first to sign an executive order about AI. It directed federal agencies to prioritize research and development in the field. Before that, tech experts were pushing the Trump-era White House for a stronger AI strategy to match what other countries were pursuing. Trump in the waning weeks of his administration signed an executive order promoting the use of "trustworthy" AI in the federal government. Those policies carried over into the Biden administration. All of that was before the 2022 debut of ChatGPT, which brought public fascination and worry about the possibilities of generative AI and helped spark a boom in AI-affiliated businesses. What's also different this time is that tech mogul and Trump adviser Elon Musk has been picked to lead a government cost-cutting commission. Musk holds strong opinions about AI's risks and grudges against some AI industry leaders, particularly ChatGPT maker OpenAI, which he has sued.
[8]
Why the U.S. Launched an International Network of AI Safety Institutes
U.S. Gathers Global Group to Tackle AI Safety Amid Growing National Security Concerns "AI is a technology like no other in human history," U.S. Commerce Secretary Gina Raimondo said on Wednesday in San Francisco. "Advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn't the smart thing to do." Raimondo's remarks came during the inaugural convening of the International Network of AI Safety Institutes, a network of artificial intelligence safety institutes (AISIs) from nine nations as well as the European Commission brought together by the U.S. Departments of Commerce and State. The event gathered technical experts from government, industry, academia, and civil society to discuss how to manage the risks posed by increasingly capable AI systems. Raimondo suggested participants keep two principles in mind: "We can't release models that are going to endanger people," she said. "Second, let's make sure AI is serving people, not the other way around." The convening marks a significant step forward in international collaboration on AI governance. The first AISIs emerged last November during the inaugural AI Safety Summit hosted by the UK. Both the U.K. and the U.S. governments announced the formation of their respective AISIs as a means of giving their governments the technical capacity to evaluate the safety of cutting-edge AI models. Other countries followed suit; by May, at another AI Summit in Seoul, Raimondo had announced the creation of the network.
In a joint statement, the members of the International Network of AI Safety Institutes -- which includes AISIs from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore -- laid out their mission: "to be a forum that brings together technical expertise from around the world," "...to facilitate a common technical understanding of AI safety risks and mitigations based upon the work of our institutes and of the broader scientific community," and "...to encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development." In the lead-up to the convening, the U.S. AISI, which serves as the network's inaugural chair, also announced a new government taskforce focused on the technology's national security risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together representatives from the Departments of Defense, Energy, Homeland Security, and Health and Human Services. It will be chaired by the U.S. AISI, and aim to "identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology," with a particular focus on radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities. The push for international cooperation comes at a time of increasing tension around AI development between the U.S. and China, whose absence from the network is notable. In remarks pre-recorded for the convening, Senate Majority Leader Chuck Schumer emphasized the importance of ensuring that the Chinese Communist Party does not get to "write the rules of the road." Earlier Wednesday, Chinese lab DeepSeek announced a new "reasoning" model thought to be the first to rival OpenAI's own reasoning model, o1, which the company says is "designed to spend more time thinking" before it responds.
On Tuesday, the U.S.-China Economic and Security Review Commission, which has provided annual recommendations to Congress since 2000, recommended that Congress establish and fund a "Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability," which the commission defined as "systems as good as or better than human capabilities across all cognitive domains" that "would surpass the sharpest human minds at every task." Many experts in the field, such as Geoffrey Hinton, who earlier this year won a Nobel Prize in physics for his work on artificial intelligence, have expressed concerns that, should AGI be developed, humanity may not be able to control it, which could lead to catastrophic harm. In a panel discussion at Wednesday's event, Anthropic CEO Dario Amodei -- who believes AGI-like systems could arrive as soon as 2026 -- cited "loss of control" risks as a serious concern, alongside the risks that future, more capable models are misused by malicious actors to perpetrate bioterrorism or undermine cybersecurity. Responding to a question, Amodei expressed unequivocal support for making the testing of advanced AI systems mandatory, noting "we also need to be really careful about how we do it." Meanwhile, practical international collaboration on AI safety is advancing. Earlier in the week, the U.S. and U.K. AISIs shared preliminary findings from their pre-deployment evaluation of an advanced AI model -- the upgraded version of Anthropic's Claude 3.5 Sonnet. The evaluation focused on assessing the model's biological and cyber capabilities, as well as its performance on software and development tasks, and the efficacy of the safeguards built into it to prevent the model from responding to harmful requests. Both the U.K. and U.S. AISIs found that these safeguards could be "routinely circumvented," which they noted is "consistent with prior research on the vulnerability of other AI systems' safeguards." 
The San Francisco convening set out three priority topics that stand to "urgently benefit from international collaboration": managing risks from synthetic content, testing foundation models, and conducting risk assessments for advanced AI systems. Ahead of the convening, $11 million of funding was announced to support research into how best to mitigate risks from synthetic content (such as the generation and distribution of child sexual abuse material, and the facilitation of fraud and impersonation). The funding was provided by a mix of government agencies and philanthropic organizations, including the Republic of Korea and the Knight Foundation. While it is unclear how the election victory of Donald Trump will impact the future of the U.S. AISI and American AI policy more broadly, international collaboration on the topic of AI safety is set to continue. The U.K. AISI is hosting another San Francisco-based conference this week, in partnership with the Centre for the Governance of AI, "to accelerate the design and implementation of frontier AI safety frameworks." And in February, France will host its "AI Action Summit," following the Summits held in Seoul in May and in the U.K. last November. The 2025 AI Action Summit will gather leaders from the public and private sectors, academia, and civil society, as actors across the world seek to find ways to govern the technology as its capabilities accelerate. Raimondo on Wednesday emphasized the importance of integrating safety with innovation when it comes to something as rapidly advancing and as powerful as AI. "It has the potential to replace the human mind," she said. "Safety is good for innovation. Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation. We need that virtuous cycle."
Government officials and AI experts from multiple countries meet in San Francisco to discuss AI safety measures, while Trump's vow to repeal Biden's AI policies casts uncertainty over future regulations.
The Biden administration has convened a significant meeting in San Francisco, bringing together government officials and AI experts from allied nations to discuss crucial aspects of AI safety. The gathering marks the first meeting of the International Network of AI Safety Institutes, announced during the AI summit in Seoul in May.

Participants include Australia, Canada, Japan, Kenya, Singapore, the United Kingdom, and the European Union. The agenda focuses on critical issues such as detecting and combating AI-generated deepfakes that fuel fraud, harmful impersonation, and sexual abuse.

President Biden has taken significant steps on AI policy, including signing a sweeping AI executive order and establishing the AI Safety Institute at the National Institute of Standards and Technology. However, President-elect Donald Trump has vowed to repeal Biden's AI policies, describing them as "dangerous" and a hindrance to innovation.

The impending change in administration has created uncertainty about the future of AI regulation in the United States. However, some experts believe that the technical work of the AI Safety Institute may continue regardless of who is in charge.

Tech industry groups, backed by major companies like Amazon, Google, Meta, and Microsoft, have generally supported the Biden administration's approach to AI safety, which focuses on setting voluntary standards. US Commerce Secretary Gina Raimondo emphasized that AI safety is not a partisan issue, stating, "It's frankly in no one's interest anywhere in the world, in any political party, for AI to be dangerous."

While Trump's previous term saw limited focus on AI, he did sign the first executive order on AI in 2019, directing federal agencies to prioritize research and development in the field. The landscape has changed significantly since then, particularly with the advent of generative AI technologies like ChatGPT.
As the international community grapples with the rapid advancement of AI technology, the outcome of these discussions and the future direction of US AI policy remain uncertain, highlighting the complex interplay between technological progress, international cooperation, and domestic politics.