Curated by THEOUTPOST
On Wed, 18 Sept, 4:05 PM UTC
12 Sources
[1]
Biden administration to convene AI safety summit in California
The Biden administration will host a global safety summit on artificial intelligence in November to discuss the quickly developing technology and efforts to mitigate its risks. Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will co-host the meeting in San Francisco with government scientists and AI experts from at least nine countries and the European Union, the Commerce Department announced Wednesday. The two-day meeting will take place on Nov. 20 and 21.

It comes amid a wider push from the federal government to better understand the capabilities and risks of AI as the technology evolves. "AI is the defining technology of our generation. With AI evolving at a rapid pace, we at the Department of Commerce, and across the Biden-Harris Administration, are pulling every lever," Raimondo said in a statement Wednesday, adding, "We want the rules of the road on AI to be underpinned by safety, security, and trust, which is why this convening is so important."

Blinken stressed the importance of strengthening international collaboration in "harnessing AI technology to solve the world's greatest challenges."

Raimondo told The Associated Press that the steady rise of AI-generated fakery and how to determine when the technology needs guardrails are among the most urgent topics for discussion. "We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors," Raimondo told the news service. "Because if we keep a lid on the risks, it's incredible to think about what we could achieve."

Representatives from the U.S., the United Kingdom, Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union will attend the November summit. China, a major player in AI development, was not on the list, but Raimondo told The AP, "we're still trying to figure out exactly who else might come in terms of scientists." 
"I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism," she said. "Every country in the world ought to be able to agree that those are bad things and we ought to be able to work together to prevent them." The meeting's timing is notable, occurring about two weeks after the U.S. presidential election, and nearly two months ahead of a wider AI summit in Paris in February.
[2]
US to Convene Global AI Safety Summit in November
WASHINGTON (Reuters) - The Biden administration plans to convene a global safety summit on artificial intelligence, it said on Wednesday, as Congress continues to struggle with regulating the technology.

Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the first meeting of the International Network of AI Safety Institutes in San Francisco on Nov. 20-21 to "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence." The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

Raimondo announced the launch of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where nations agreed to prioritize AI safety, innovation and inclusivity. The goal of the San Francisco meeting is to jumpstart technical collaboration before the AI Action Summit in Paris in February.

Raimondo said the aim is "close, thoughtful coordination with our allies and like-minded partners." "We want the rules of the road on AI to be underpinned by safety, security, and trust," she added.

The San Francisco meeting will include technical experts from each member's AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas and advance global collaboration and knowledge sharing on AI safety.

Last week, the Commerce Department proposed detailed reporting requirements for advanced AI developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. The regulatory push comes as legislative action in Congress on AI has stalled. 
President Joe Biden in October 2023 signed an executive order requiring developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are publicly released. (Reporting by David Shepardson; editing by Miral Fahmy)
[3]
US to convene global AI safety summit in November
WASHINGTON (Reuters) - The Biden administration plans to convene a global safety summit on artificial intelligence, it said on Wednesday, as Congress continues to struggle with regulating the technology.

Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the first meeting of the International Network of AI Safety Institutes in San Francisco on Nov. 20-21 to "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence." The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

Raimondo announced the launch of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where nations agreed to prioritize AI safety, innovation and inclusivity. The goal of the San Francisco meeting is to jumpstart technical collaboration before the AI Action Summit in Paris in February.

Raimondo said the aim is "close, thoughtful coordination with our allies and like-minded partners." "We want the rules of the road on AI to be underpinned by safety, security, and trust," she added.

The San Francisco meeting will include technical experts from each member's AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas and advance global collaboration and knowledge sharing on AI safety.

Last week, the Commerce Department proposed detailed reporting requirements for advanced AI developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. The regulatory push comes as legislative action in Congress on AI has stalled. 
President Joe Biden in October 2023 signed an executive order requiring developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are publicly released. (Reporting by David Shepardson; editing by Miral Fahmy)
[4]
US to convene global AI safety summit in November
WASHINGTON, Sept 18 (Reuters) - The Biden administration plans to convene a global safety summit on artificial intelligence, it said on Wednesday, as Congress continues to struggle with regulating the technology.

Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the first meeting of the International Network of AI Safety Institutes in San Francisco on Nov. 20-21 to "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence." The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

Raimondo announced the launch of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where nations agreed to prioritize AI safety, innovation and inclusivity. The goal of the San Francisco meeting is to jumpstart technical collaboration before the AI Action Summit in Paris in February.

Raimondo said the aim is "close, thoughtful coordination with our allies and like-minded partners." "We want the rules of the road on AI to be underpinned by safety, security, and trust," she added.

The San Francisco meeting will include technical experts from each member's AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas and advance global collaboration and knowledge sharing on AI safety.

Last week, the Commerce Department proposed detailed reporting requirements for advanced AI developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. 
The regulatory push comes as legislative action in Congress on AI has stalled. President Joe Biden in October 2023 signed an executive order requiring developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are publicly released.

Reporting by David Shepardson; editing by Miral Fahmy
[5]
US to convene global AI safety summit in November
WASHINGTON - The Biden administration plans to convene a global safety summit on artificial intelligence, it said on Wednesday, as Congress continues to struggle with regulating the technology.

Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the first meeting of the International Network of AI Safety Institutes in San Francisco on Nov. 20-21 to "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence." The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

Raimondo announced the launch of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where nations agreed to prioritize AI safety, innovation and inclusivity. The goal of the San Francisco meeting is to jumpstart technical collaboration before the AI Action Summit in Paris in February.

Raimondo said the aim is "close, thoughtful coordination with our allies and like-minded partners." "We want the rules of the road on AI to be underpinned by safety, security, and trust," she added.

The San Francisco meeting will include technical experts from each member's AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas and advance global collaboration and knowledge sharing on AI safety.

Last week, the Commerce Department proposed detailed reporting requirements for advanced AI developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. The regulatory push comes as legislative action in Congress on AI has stalled. 
President Joe Biden in October 2023 signed an executive order requiring developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are publicly released. (Reporting by David Shepardson; editing by Miral Fahmy)
[6]
U.S. to convene global AI safety summit in November
The Biden administration plans to convene a global safety summit on artificial intelligence, it said on Wednesday, as Congress continues to struggle with regulating the technology.

Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host the first meeting of the International Network of AI Safety Institutes in San Francisco on Nov. 20-21 to "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence." The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States.

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

Raimondo announced the launch of the International Network of AI Safety Institutes at the AI Seoul Summit in May, where nations agreed to prioritize AI safety, innovation and inclusivity. The goal of the San Francisco meeting is to jumpstart technical collaboration before the AI Action Summit in Paris in February.

Raimondo said the aim is "close, thoughtful coordination with our allies and like-minded partners." "We want the rules of the road on AI to be underpinned by safety, security, and trust," she added.

The San Francisco meeting will include technical experts from each member's AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas and advance global collaboration and knowledge sharing on AI safety.

Last week, the Commerce Department proposed detailed reporting requirements for advanced AI developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. The regulatory push comes as legislative action in Congress on AI has stalled. 
President Joe Biden in October 2023 signed an executive order requiring developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are publicly released. Published - September 18, 2024 04:00 pm IST
[7]
Biden administration to host international AI safety meeting in San Francisco after election
Government scientists and artificial intelligence experts from at least nine countries and the European Union will meet in San Francisco after the U.S. elections to coordinate on safely developing AI technology and averting its dangers. President Joe Biden's administration on Wednesday announced a two-day international AI safety gathering planned for November 20 and 21. It will happen just over a year after delegates at an AI Safety Summit in the United Kingdom pledged to work together to contain the potentially catastrophic risks posed by AI advances. U.S. Commerce Secretary Gina Raimondo told The Associated Press it will be the "first get-down-to-work meeting" after the UK summit and a May follow-up in South Korea that sparked a network of publicly backed safety institutes to advance research and testing of the technology. Among the urgent topics likely to confront experts is a steady rise of AI-generated fakery but also the tricky problem of how to know when an AI system is so widely capable or dangerous that it needs guardrails. "We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors," Raimondo said in an interview. "Because if we keep a lid on the risks, it's incredible to think about what we could achieve." Situated in a city that's become a hub of the current wave of generative AI technology, the San Francisco meetings are designed as a technical collaboration on safety measures ahead of a broader AI summit set for February in Paris. It will occur about two weeks after a presidential election between Vice President Kamala Harris -- who helped craft the U.S. stance on AI risks -- and former President Donald Trump, who has vowed to undo Biden's signature AI policy. 
Raimondo and Secretary of State Antony Blinken announced that their agencies will co-host the convening, which taps into a network of newly formed national AI safety institutes in the U.S. and UK, as well as Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union. The biggest AI powerhouse missing from the list of participants is China, which isn't part of the network, though Raimondo said "we're still trying to figure out exactly who else might come in terms of scientists." "I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism," she said. "Every country in the world ought to be able to agree that those are bad things and we ought to be able to work together to prevent them." Many governments have pledged to safeguard AI technology but they've taken different approaches, with the EU the first to enact a sweeping AI law that sets the strongest restrictions on the riskiest applications. Biden last October signed an executive order on AI that requires developers of the most powerful AI systems to share safety test results and other information with the government. It also delegated the Commerce Department to create standards to ensure AI tools are safe and secure before public release. San Francisco-based OpenAI, maker of ChatGPT, said last week that before releasing its latest model, called o1, it granted early access to the U.S. and UK national AI safety institutes. The new product goes beyond the company's famous chatbot in being able to "perform complex reasoning" and produce a "long internal chain of thought" when answering a query, and poses a "medium risk" in the category of weapons of mass destruction, the company has said. 
Since generative AI tools began captivating the world in late 2022, the Biden administration has been pushing AI companies to commit to testing their most sophisticated models before they're let out into the world. "That is the right model," Raimondo said. "That being said, right now, it's all voluntary. I think we probably need to move beyond a voluntary system. And we need Congress to take action." Tech companies have mostly agreed, in principle, on the need for AI regulation, but some have chafed at proposals they argue could stifle innovation. In California, Gov. Gavin Newsom on Tuesday signed three landmark bills to crack down on political deepfakes ahead of the 2024 election, but has yet to sign, or veto, a more controversial measure that would regulate extremely powerful AI models that don't yet exist but could pose grave risks if they're built.
[8]
Biden administration to host international AI safety meeting in San Francisco after election
Government scientists and artificial intelligence experts from at least nine countries and the European Union will meet in San Francisco after the U.S. elections to coordinate on safely developing AI technology and averting its dangers. President Joe Biden's administration on Wednesday announced a two-day international AI safety gathering planned for November 20 and 21. It will happen just over a year after delegates at an AI Safety Summit in the United Kingdom pledged to work together to contain the potentially catastrophic risks posed by AI advances. U.S. Commerce Secretary Gina Raimondo told The Associated Press it will be the "first get-down-to-work meeting" after the UK summit and a May follow-up in South Korea that sparked a network of publicly backed safety institutes to advance research and testing of the technology. Among the urgent topics likely to confront experts is a steady rise of AI-generated fakery but also the tricky problem of how to know when an AI system is so widely capable or dangerous that it needs guardrails. "We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors," Raimondo said in an interview. "Because if we keep a lid on the risks, it's incredible to think about what we could achieve." Situated in a city that's become a hub of the current wave of generative AI technology, the San Francisco meetings are designed as a technical collaboration on safety measures ahead of a broader AI summit set for February in Paris. It will occur about two weeks after a presidential election between Vice President Kamala Harris -- who helped craft the U.S. stance on AI risks -- and former President Donald Trump, who has vowed to undo Biden's signature AI policy. 
Raimondo and Secretary of State Antony Blinken announced that their agencies will co-host the convening, which taps into a network of newly formed national AI safety institutes in the U.S. and UK, as well as Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union. The biggest AI powerhouse missing from the list of participants is China, which isn't part of the network, though Raimondo said "we're still trying to figure out exactly who else might come in terms of scientists." "I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism," she said. "Every country in the world ought to be able to agree that those are bad things and we ought to be able to work together to prevent them." Many governments have pledged to safeguard AI technology but they've taken different approaches, with the EU the first to enact a sweeping AI law that sets the strongest restrictions on the riskiest applications. Biden last October signed an executive order on AI that requires developers of the most powerful AI systems to share safety test results and other information with the government. It also delegated the Commerce Department to create standards to ensure AI tools are safe and secure before public release. San Francisco-based OpenAI, maker of ChatGPT, said last week that before releasing its latest model, called o1, it granted early access to the U.S. and UK national AI safety institutes. The new product goes beyond the company's famous chatbot in being able to "perform complex reasoning" and produce a "long internal chain of thought" when answering a query, and poses a "medium risk" in the category of weapons of mass destruction, the company has said. 
Since generative AI tools began captivating the world in late 2022, the Biden administration has been pushing AI companies to commit to testing their most sophisticated models before they're let out into the world. "That is the right model," Raimondo said. "That being said, right now, it's all voluntary. I think we probably need to move beyond a voluntary system. And we need Congress to take action." Tech companies have mostly agreed, in principle, on the need for AI regulation, but some have chafed at proposals they argue could stifle innovation. In California, Gov. Gavin Newsom on Tuesday signed three landmark bills to crack down on political deepfakes ahead of the 2024 election, but has yet to sign, or veto, a more controversial measure that would regulate extremely powerful AI models that don't yet exist but could pose grave risks if they're built.
[9]
Biden Administration to Host International AI Safety Meeting in San Francisco After Election
Government scientists and artificial intelligence experts from at least nine countries and the European Union will meet in San Francisco after the U.S. elections to coordinate on safely developing AI technology and averting its dangers. President Joe Biden's administration on Wednesday announced a two-day international AI safety gathering planned for November 20 and 21. It will happen just over a year after delegates at an AI Safety Summit in the United Kingdom pledged to work together to contain the potentially catastrophic risks posed by AI advances. U.S. Commerce Secretary Gina Raimondo told The Associated Press it will be the "first get-down-to-work meeting" after the UK summit and a May follow-up in South Korea that sparked a network of publicly backed safety institutes to advance research and testing of the technology. Among the urgent topics likely to confront experts is a steady rise of AI-generated fakery but also the tricky problem of how to know when an AI system is so widely capable or dangerous that it needs guardrails. "We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors," Raimondo said in an interview. "Because if we keep a lid on the risks, it's incredible to think about what we could achieve." Situated in a city that's become a hub of the current wave of generative AI technology, the San Francisco meetings are designed as a technical collaboration on safety measures ahead of a broader AI summit set for February in Paris. It will occur about two weeks after a presidential election between Vice President Kamala Harris -- who helped craft the U.S. stance on AI risks -- and former President Donald Trump, who has vowed to undo Biden's signature AI policy. 
Raimondo and Secretary of State Antony Blinken announced that their agencies will co-host the convening, which taps into a network of newly formed national AI safety institutes in the U.S. and UK, as well as Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union. The biggest AI powerhouse missing from the list of participants is China, which isn't part of the network, though Raimondo said "we're still trying to figure out exactly who else might come in terms of scientists." "I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism," she said. "Every country in the world ought to be able to agree that those are bad things and we ought to be able to work together to prevent them." Many governments have pledged to safeguard AI technology but they've taken different approaches, with the EU the first to enact a sweeping AI law that sets the strongest restrictions on the riskiest applications. Biden last October signed an executive order on AI that requires developers of the most powerful AI systems to share safety test results and other information with the government. It also delegated the Commerce Department to create standards to ensure AI tools are safe and secure before public release. San Francisco-based OpenAI, maker of ChatGPT, said last week that before releasing its latest model, called o1, it granted early access to the U.S. and UK national AI safety institutes. The new product goes beyond the company's famous chatbot in being able to "perform complex reasoning" and produce a "long internal chain of thought" when answering a query, and poses a "medium risk" in the category of weapons of mass destruction, the company has said. 
Since generative AI tools began captivating the world in late 2022, the Biden administration has been pushing AI companies to commit to testing their most sophisticated models before they're let out into the world. "That is the right model," Raimondo said. "That being said, right now, it's all voluntary. I think we probably need to move beyond a voluntary system. And we need Congress to take action." Tech companies have mostly agreed, in principle, on the need for AI regulation, but some have chafed at proposals they argue could stifle innovation. In California, Gov. Gavin Newsom on Tuesday signed three landmark bills to crack down on political deepfakes ahead of the 2024 election, but has yet to sign, or veto, a more controversial measure that would regulate extremely powerful AI models that don't yet exist but could pose grave risks if they're built.
[10]
Biden administration to host international AI safety meeting in San Francisco after election
Government scientists and artificial intelligence experts from at least nine countries and the European Union will meet in San Francisco after the U.S. elections to coordinate on safely developing AI technology and averting its dangers. President Joe Biden's administration on Wednesday announced a two-day international AI safety gathering planned for November 20 and 21. It will happen just over a year after delegates at an AI Safety Summit in the United Kingdom pledged to work together to contain the potentially catastrophic risks posed by AI advances. U.S. Commerce Secretary Gina Raimondo told The Associated Press it will be the "first get-down-to-work meeting" after the UK summit and a May follow-up in South Korea that sparked a network of publicly backed safety institutes to advance research and testing of the technology. Among the urgent topics likely to confront experts is a steady rise of AI-generated fakery but also the tricky problem of how to know when an AI system is so widely capable or dangerous that it needs guardrails. "We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors," Raimondo said in an interview. "Because if we keep a lid on the risks, it's incredible to think about what we could achieve." Situated in a city that's become a hub of the current wave of generative AI technology, the San Francisco meetings are designed as a technical collaboration on safety measures ahead of a broader AI summit set for February in Paris. It will occur about two weeks after a presidential election between Vice President Kamala Harris -- who helped craft the U.S. 
stance on AI risks -- and former President Donald Trump, who has vowed to undo Biden's signature AI policy. Raimondo and Secretary of State Antony Blinken announced that their agencies will co-host the convening, which taps into a network of newly formed national AI safety institutes in the U.S. and UK, as well as Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union. The biggest AI powerhouse missing from the list of participants is China, which isn't part of the network, though Raimondo said "we're still trying to figure out exactly who else might come in terms of scientists." "I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism," she said. "Every country in the world ought to be able to agree that those are bad things and we ought to be able to work together to prevent them." Many governments have pledged to safeguard AI technology but they've taken different approaches, with the EU the first to enact a sweeping AI law that sets the strongest restrictions on the riskiest applications. Biden last October signed an executive order on AI that requires developers of the most powerful AI systems to share safety test results and other information with the government. It also delegated the Commerce Department to create standards to ensure AI tools are safe and secure before public release. San Francisco-based OpenAI, maker of ChatGPT, said last week that before releasing its latest model, called o1, it granted early access to the U.S. and UK national AI safety institutes. The new product goes beyond the company's famous chatbot in being able to "perform complex reasoning" and produce a "long internal chain of thought" when answering a query, and poses a "medium risk" in the category of weapons of mass destruction, the company has said. 
Since generative AI tools began captivating the world in late 2022, the Biden administration has been pushing AI companies to commit to testing their most sophisticated models before they're let out into the world. "That is the right model," Raimondo said. "That being said, right now, it's all voluntary. I think we probably need to move beyond a voluntary system. And we need Congress to take action." Tech companies have mostly agreed, in principle, on the need for AI regulation, but some have chafed at proposals they argue could stifle innovation. In California, Gov. Gavin Newsom on Tuesday signed three landmark bills to crack down on political deepfakes ahead of the 2024 election, but has yet to sign, or veto, a more controversial measure that would regulate extremely powerful AI models that don't yet exist but could pose grave risks if they're built.
[11]
US to convene global AI safety summit in November
The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States. Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans with catastrophic effects. Raimondo announced the launch of the International Network of AI Safety Institutes during the AI Seoul Summit in May, where nations agreed to prioritize AI safety, innovation and inclusivity. The goal of the San Francisco meeting is to jumpstart technical collaboration before the AI Action Summit in Paris in February. Raimondo said the aim is "close, thoughtful coordination with our allies and like-minded partners." "We want the rules of the road on AI to be underpinned by safety, security, and trust," she added. The San Francisco meeting will include technical experts from each member's AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas and advance global collaboration and knowledge sharing on AI safety. Last week, the Commerce Department said it was proposing detailed reporting requirements for advanced AI developers and cloud computing providers to ensure the technologies are safe and can withstand cyberattacks. The regulatory push comes as legislative action in Congress on AI has stalled. President Joe Biden in October 2023 signed an executive order requiring developers of AI systems posing risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are publicly released. (Reporting by David Shepardson; editing by Miral Fahmy)
[12]
US Targets 'Malicious' AI Use at November Safety Summit | PYMNTS.com
The U.S. will host a multi-national summit in November to discuss safe AI development. U.S. Commerce Secretary Gina Raimondo told The Associated Press (AP) Wednesday (Sept. 18) that this will be the "first get-down-to-work meeting" following gatherings in the U.K. and South Korea to discuss the possible dangers posed by artificial intelligence (AI). As the AP report notes, among the topics likely to come up at the two-day meeting, planned for Nov. 20 and 21 in San Francisco, are the rise of AI-generated fakery as well as the issue of how to determine when an AI system is capable enough -- or dangerous enough -- to require protective measures. "We're going to think about how we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors," Raimondo said. "Because if we keep a lid on the risks, it's incredible to think about what we could achieve." The meeting is expected to include representatives from national AI safety institutes in the U.S. and U.K., along with Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union, the report said. The AP also points out that the meeting will take place after the 2024 presidential election between Vice President Kamala Harris -- who helped develop the U.S. position on AI risks -- and former President Donald Trump, who has pledged to overturn the White House AI policy. In other AI news, PYMNTS wrote earlier this week about new research which suggests tech giants could gain the upper hand in generative AI, raising questions about the competitive future of the industry. One recent paper found that massive computational requirements and network effects naturally cause market concentration, which could in turn result in a few key players holding outsized influence over pricing, data control and AI capabilities. This is a trend that has many players in the industry concerned. 
"We are likely to see decreasing prices for smaller models and continued differentiation across large models," Alex Mashrabov, CEO of Higgsfield AI, told PYMNTS, citing OpenAI's GPT-4 for prosumer use cases and models like Flux and Llama for easy fine-tuning as examples of this differentiation. Observers say lack of competition in generative AI could mean higher prices and fewer choices for businesses hoping to integrate AI tools into their operations, as well as slower innovation, which could hamper the development of new AI applications. As PYMNTS has reported, Big Tech companies have in recent months been rapidly rolling out iterations of large language models (LLMs) that power chatbots.
The Biden administration announces plans to convene a global summit on artificial intelligence safety in November, aiming to address the potential risks and benefits of AI technology.
The Biden administration has announced plans to host a global summit on artificial intelligence (AI) safety this November, marking a significant step in addressing the rapidly evolving landscape of AI technology. The summit, set to take place in San Francisco on November 20 and 21, will bring together government scientists and AI experts from at least nine countries and the European Union to discuss the potential risks and benefits of AI [1].
The primary goal of the summit is to establish a common framework for managing the risks associated with advanced AI systems. Participants will focus on identifying best practices for developing and deploying AI technologies responsibly. The event is expected to draw representatives from the U.S., the United Kingdom, Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the European Union; China, a major player in AI development, is not on the list of participants, though the full roster of attendees has not yet been confirmed [2].
As AI technology continues to advance at a rapid pace, concerns about its potential negative impacts have grown. The summit will address issues such as AI-generated disinformation, privacy concerns, and the technology's impact on jobs and the economy. Simultaneously, discussions will explore the opportunities AI presents for innovation and economic growth [3].
The summit represents a significant effort to foster international collaboration on AI governance. By bringing together diverse stakeholders, the U.S. aims to create a unified approach to managing AI development and deployment. This initiative aligns with the Biden administration's commitment to maintaining U.S. leadership in AI innovation while addressing potential risks [4].
Major tech companies and AI developers are expected to play a crucial role in the summit. Their participation will be vital in discussing ethical AI development, transparency, and accountability measures. The event will also explore ways to ensure that AI benefits society as a whole, addressing concerns about bias and fairness in AI systems [5].
The outcome of this summit could have far-reaching implications for the future of AI governance globally. It may lead to the establishment of international standards, guidelines, or even treaties regarding AI development and use. As AI continues to integrate into various aspects of society, the decisions made at this summit could shape the trajectory of technological advancement and its impact on humanity for years to come.
Reference
[2]
[3]
[5]
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved