2 Sources
[1]
UN report urges stronger measures to detect AI-driven deepfakes
GENEVA, July 11 (Reuters) - Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations' International Telecommunication Union urged in a report on Friday.

Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its "AI for Good Summit" in Geneva. The ITU called for robust standards to combat manipulated multimedia and recommended that content distributors such as social media platforms use digital verification tools to authenticate images and videos before sharing.

"Trust in social media has dropped significantly because people don't know what's true and what's fake," Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted. Combating deepfakes was a top challenge due to generative AI's ability to fabricate realistic multimedia, he said.

Leonard Rosenthol of Adobe, a digital editing software leader that has been addressing deepfakes since 2019, underscored the importance of establishing the provenance of digital content to help users assess its trustworthiness. "We need more of the places where users consume their content to show this information...When you are scrolling through your feeds you want to know: 'can I trust this image, this video...'" Rosenthol said.

Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the importance of a global approach to the problem, given there is currently no single international watchdog focusing on detecting manipulated material. "If we have patchworks of standards and solutions, then the harmful deepfake can be more effective," she told Reuters.

The ITU is currently developing standards for watermarking videos - which make up 80% of internet traffic - to embed provenance data such as creator identity and timestamps.
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users. "AI will only get more powerful, faster or smarter... We'll need to upskill people to make sure that they are not victims of the systems," he said.

Reporting by Olivia Le Poidevin; Editing by Hugh Lawson
[2]
UN report urges stronger measures to detect AI-driven deepfakes
GENEVA (Reuters) - Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations' International Telecommunication Union urged in a report on Friday.

Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its "AI for Good Summit" in Geneva. The ITU called for robust standards to combat manipulated multimedia and recommended that content distributors such as social media platforms use digital verification tools to authenticate images and videos before sharing.

"Trust in social media has dropped significantly because people don't know what's true and what's fake," Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted. Combating deepfakes was a top challenge due to generative AI's ability to fabricate realistic multimedia, he said.

Leonard Rosenthol of Adobe, a digital editing software leader that has been addressing deepfakes since 2019, underscored the importance of establishing the provenance of digital content to help users assess its trustworthiness. "We need more of the places where users consume their content to show this information...When you are scrolling through your feeds you want to know: 'can I trust this image, this video...'" Rosenthol said.

Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the importance of a global approach to the problem, given there is currently no single international watchdog focusing on detecting manipulated material. "If we have patchworks of standards and solutions, then the harmful deepfake can be more effective," she told Reuters.

The ITU is currently developing standards for watermarking videos - which make up 80% of internet traffic - to embed provenance data such as creator identity and timestamps.
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users. "AI will only get more powerful, faster or smarter... We'll need to upskill people to make sure that they are not victims of the systems," he said. (Reporting by Olivia Le Poidevin; Editing by Hugh Lawson)
The United Nations' International Telecommunication Union urges companies to implement advanced tools for detecting and eliminating AI-generated misinformation and deepfakes to counter risks of election interference and financial fraud.
The United Nations' International Telecommunication Union (ITU) has released a report urging companies to implement stronger measures to detect and combat AI-driven deepfakes. The report, unveiled at the "AI for Good Summit" in Geneva, emphasizes the growing risks posed by deepfakes, including election interference and financial fraud [1][2].
Deepfakes, which include AI-generated images, videos, and audio that convincingly impersonate real people, are becoming increasingly sophisticated. The ITU report highlights the mounting risks associated with such manipulated multimedia content, particularly in the context of social media and online information consumption [1].
Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted, "Trust in social media has dropped significantly because people don't know what's true and what's fake." He identified combating deepfakes as a top challenge, citing generative AI's ability to fabricate realistic multimedia [1][2].
The ITU report calls for robust standards to combat manipulated multimedia and recommends that content distributors, such as social media platforms, utilize digital verification tools to authenticate images and videos before sharing [1][2].
Leonard Rosenthol of Adobe, a company that has been addressing deepfakes since 2019, emphasized the importance of establishing the provenance of digital content. He stated, "We need more of the places where users consume their content to show this information...When you are scrolling through your feeds you want to know: 'can I trust this image, this video...'" [1][2].
Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the need for a global approach to the problem. She pointed out that there is currently no single international watchdog focusing on detecting manipulated material, warning that "If we have patchworks of standards and solutions, then the harmful deepfake can be more effective" [1][2].
The ITU is currently developing standards for watermarking videos, which make up 80% of internet traffic. These standards aim to embed provenance data such as creator identity and timestamps, providing a means to verify the authenticity of digital content [1][2].
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users. He emphasized the need for continuous adaptation, stating, "AI will only get more powerful, faster or smarter... We'll need to upskill people to make sure that they are not victims of the systems" [1][2].
Source: Reuters
As AI technology continues to advance, the report underscores the importance of developing comprehensive strategies to detect, combat, and mitigate the risks associated with deepfakes. This multifaceted approach involves technological solutions, global cooperation, and user education to maintain trust in digital content and protect against potential misuse of AI-generated media.
Summarized by Navi