U.S. AI Safety Institute Director Highlights Challenges in Developing AI Safeguards


Elizabeth Kelly, director of the U.S. AI Safety Institute, discusses the difficulties in establishing AI safety guidelines due to rapidly evolving technology and the need for international cooperation.


Challenges in Developing AI Safeguards

Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute, has highlighted the significant challenges facing policymakers in recommending safeguards for artificial intelligence. Speaking at the Reuters NEXT conference in New York, Kelly emphasized that the rapidly evolving nature of AI science makes it difficult to establish concrete safety guidelines [1].

"It is difficult for policymakers to say these are best practices we recommend in terms of safeguards, when we don't actually know which ones work and which ones don't," Kelly stated [1]. This uncertainty extends to AI developers themselves, who are still grappling with methods to prevent abuse of novel systems.

Cybersecurity and Synthetic Content Concerns

Kelly identified cybersecurity as a particular area of concern. She noted that "jailbreaks", methods of bypassing the security measures established by AI labs, can be alarmingly easy to execute [2].

Another challenge lies in the realm of synthetic content. Kelly pointed out that tampering with digital watermarks, which are designed to flag AI-generated images to consumers, remains too easy for authorities to develop effective industry guidance around them [1].

The Role of the U.S. AI Safety Institute

The U.S. AI Safety Institute, established under the Biden administration, is tackling these issues through partnerships with academia, industry, and civil society. These collaborations inform the institute's technology evaluations [2].

Kelly, who serves as the institute's first director, emphasized that AI safety is a "fundamentally bipartisan issue," addressing concerns about the institute's future following potential changes in administration [1].

International Cooperation on AI Safety

Recently, Kelly presided over the inaugural gathering of AI safety institutes from around the world in San Francisco. The meeting brought together representatives from 10 countries to work towards developing interoperable safety tests [2].

Describing the nature of these meetings, Kelly noted the presence of more technical experts than is typical of diplomatic gatherings. "It was very much getting the nerds in the room," she remarked, highlighting the focus on practical, technical solutions to AI safety challenges [1].
