2 Sources
[1]
AI safety is hard to steer with science in flux, US official says
NEW YORK, Dec 10 (Reuters) - Policymakers aiming to recommend safeguards for artificial intelligence are facing a formidable challenge: science that is still evolving. AI developers themselves are grappling with how to prevent abuse of novel systems, offering no easy fix for government authorities to embrace, Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, said on Tuesday. Cybersecurity is an area of concern according to Kelly, speaking at the Reuters NEXT conference in New York. Ways to bypass guard rails that AI labs established for security and other topics, called "jailbreaks," can be easy, she said. "It is difficult for policymakers to say these are best practices we recommend in terms of safeguards, when we don't actually know which ones work and which ones don't," Kelly said. Technology experts are hashing out how to vet and protect AI across different dimensions. Another area regards synthetic content. Tampering with digital watermarks, which flag to consumers when images are AI-generated, remains too easy for authorities to devise guidance for industry, she said. The U.S. AI Safety Institute, created under the Biden administration, is addressing such concerns via academic, industry and civil society partnerships that inform its tech evaluations, Kelly said. She said AI safety is a "fundamentally bipartisan issue," when asked what will happen to the body after Donald Trump takes office in January. The institute's first director, Kelly recently presided over the inaugural gathering of AI safety institutes from around the world, which took place last month in San Francisco. Asked about the outcome of these meetings, Kelly said the 10 country members were working toward interoperable safety tests with the help of more technical, hoodie-wearing experts than in a typical diplomatic meeting. "It was very much getting the nerds in the room," she said. (Reporting by Jeffrey Dastin, Kenneth Li and Krystal Hu in New York; Editing by Matthew Lewis)
[2]
AI safety is hard to steer with science in flux, US official says
Syndicated copy of the same Reuters report; text identical to [1].
Elizabeth Kelly, director of the U.S. AI Safety Institute, discusses the difficulties in establishing AI safety guidelines due to rapidly evolving technology and the need for international cooperation.
Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, has highlighted the significant challenges facing policymakers who want to recommend safeguards for artificial intelligence. Speaking at the Reuters NEXT conference in New York, Kelly emphasized that the rapidly evolving science of AI makes it difficult to establish concrete safety guidelines [1].
"It is difficult for policymakers to say these are best practices we recommend in terms of safeguards, when we don't actually know which ones work and which ones don't," Kelly stated [1]. The uncertainty extends to AI developers themselves, who are still grappling with how to prevent abuse of novel systems.
Kelly identified cybersecurity as a particular area of concern. She noted that "jailbreaks," methods of bypassing the guardrails AI labs establish for security and other purposes, can be alarmingly easy to execute [2].
Another challenge lies in synthetic content. Kelly pointed out that tampering with digital watermarks, which flag AI-generated images to consumers, remains so easy that authorities cannot yet devise effective guidance for industry [1].
The U.S. AI Safety Institute, established under the Biden administration, is tackling these issues through partnerships with academia, industry, and civil society, which inform the institute's technology evaluations [2].
Asked about the institute's future after Donald Trump takes office in January, Kelly, who serves as its first director, said AI safety is a "fundamentally bipartisan issue" [1].
Kelly also recently presided over the inaugural gathering of AI safety institutes from around the world, held last month in San Francisco. The meeting brought together members from 10 countries to work toward interoperable safety tests [2].
Describing those meetings, Kelly noted the presence of more technical, hoodie-wearing experts than at a typical diplomatic gathering. "It was very much getting the nerds in the room," she remarked, highlighting the focus on practical, technical work on AI safety [1].
Summarized by Navi