2 Sources
[1]
UN report urges stronger measures to detect AI-driven deepfakes
GENEVA, July 11 (Reuters) - Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations' International Telecommunication Union urged in a report on Friday. Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its "AI for Good Summit" in Geneva. The ITU called for robust standards to combat manipulated multimedia and recommended that content distributors such as social media platforms use digital verification tools to authenticate images and videos before sharing. "Trust in social media has dropped significantly because people don't know what's true and what's fake," Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted. Combatting deepfakes was a top challenge due to Generative AI's ability to fabricate realistic multimedia, he said. Leonard Rosenthol of Adobe, a digital editing software leader that has been addressing deepfakes since 2019, underscored the importance of establishing the provenance of digital content to help users assess its trustworthiness. "We need more of the places where users consume their content to show this information...When you are scrolling through your feeds you want to know: 'can I trust this image, this video...'" Rosenthol said. Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the importance of a global approach to the problem, given there is currently no single international watchdog focusing on detecting manipulated material. "If we have patchworks of standards and solutions, then the harmful deepfake can be more effective," she told Reuters. The ITU is currently developing standards for watermarking videos - which make up 80% of internet traffic - to embed provenance data such as creator identity and timestamps. 
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users. "AI will only get more powerful, faster or smarter... We'll need to upskill people to make sure that they are not victims of the systems," he said. Reporting by Olivia Le Poidevin; Editing by Hugh Lawson
[2]
UN report urges stronger measures to detect AI-driven deepfakes
GENEVA (Reuters) -Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations' International Telecommunication Union urged in a report on Friday. Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its "AI for Good Summit" in Geneva. The ITU called for robust standards to combat manipulated multimedia and recommended that content distributors such as social media platforms use digital verification tools to authenticate images and videos before sharing. "Trust in social media has dropped significantly because people don't know what's true and what's fake," Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted. Combatting deepfakes was a top challenge due to Generative AI's ability to fabricate realistic multimedia, he said. Leonard Rosenthol of Adobe, a digital editing software leader that has been addressing deepfakes since 2019, underscored the importance of establishing the provenance of digital content to help users assess its trustworthiness. "We need more of the places where users consume their content to show this information...When you are scrolling through your feeds you want to know: 'can I trust this image, this video...'" Rosenthol said. Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the importance of a global approach to the problem, given there is currently no single international watchdog focusing on detecting manipulated material. "If we have patchworks of standards and solutions, then the harmful deepfake can be more effective," she told Reuters. The ITU is currently developing standards for watermarking videos - which make up 80% of internet traffic - to embed provenance data such as creator identity and timestamps. 
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users. "AI will only get more powerful, faster or smarter... We'll need to upskill people to make sure that they are not victims of the systems," he said. (Reporting by Olivia Le Poidevin; Editing by Hugh Lawson)
The United Nations' International Telecommunication Union urges companies to implement advanced tools for detecting and eliminating AI-generated misinformation and deepfakes to counter risks of election interference and financial fraud.
The United Nations' International Telecommunication Union (ITU) has released a report urging companies to implement stronger measures to detect and combat AI-driven deepfakes. The report, unveiled at the "AI for Good Summit" in Geneva, emphasizes the growing risks posed by deepfakes, including election interference and financial fraud [1][2].
Deepfakes, which include AI-generated images, videos, and audio that convincingly impersonate real people, are becoming increasingly sophisticated. The ITU report highlights the mounting risks posed by this manipulated multimedia content, particularly in the context of social media and online information consumption [1].
Bilel Jamoussi, Chief of the Study Groups Department at the ITU's Standardization Bureau, noted, "Trust in social media has dropped significantly because people don't know what's true and what's fake." He identified combating deepfakes as a top challenge, citing generative AI's ability to fabricate realistic multimedia [1][2].
The ITU report calls for robust standards to combat manipulated multimedia and recommends that content distributors, such as social media platforms, use digital verification tools to authenticate images and videos before sharing [1][2].
Leonard Rosenthol of Adobe, a company that has been addressing deepfakes since 2019, emphasized the importance of establishing the provenance of digital content. He stated, "We need more of the places where users consume their content to show this information...When you are scrolling through your feeds you want to know: 'can I trust this image, this video...'" [1][2].
Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the need for a global approach to the problem. She pointed out that there is currently no single international watchdog focused on detecting manipulated material, warning that "If we have patchworks of standards and solutions, then the harmful deepfake can be more effective" [1][2].
The ITU is currently developing standards for watermarking videos, which make up 80% of internet traffic. These standards aim to embed provenance data such as creator identity and timestamps, providing a means to verify the authenticity of digital content [1][2].
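To illustrate the general idea behind provenance metadata, the sketch below builds a minimal signed manifest that binds a creator identity and timestamp to a content hash, so that any alteration of the media invalidates the record. This is an illustrative toy, not the ITU's draft standard or Adobe's implementation: the `SECRET_KEY` and the HMAC scheme are stand-ins (real provenance systems such as C2PA use public-key signatures and embed the manifest in the file itself).

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-signing-key"  # hypothetical key; real systems use PKI signatures

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """Build a provenance manifest binding creator and timestamp to the content hash."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "timestamp": int(time.time()),
    }
    # Sign the canonical JSON form of the claims (HMAC here as a stand-in).
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media matches the manifest and the signature is intact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())

video = b"\x00\x01fake video bytes"
m = make_manifest(video, creator="newsroom@example.org")
assert verify_manifest(video, m)             # untouched content verifies
assert not verify_manifest(video + b"x", m)  # any edit breaks verification
```

The point of the design is the one Rosenthol makes: a platform rendering the feed can check the manifest and surface "who made this, and when" to the viewer, rather than asking users to judge authenticity by eye.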
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users. He emphasized the need for continuous adaptation, stating, "AI will only get more powerful, faster or smarter... We'll need to upskill people to make sure that they are not victims of the systems" [1][2].
Source: Reuters
As AI technology continues to advance, the report underscores the importance of developing comprehensive strategies to detect, combat, and mitigate the risks associated with deepfakes. This multifaceted approach involves technological solutions, global cooperation, and user education to maintain trust in digital content and protect against potential misuse of AI-generated media.
Summarized by Navi
Goldman Sachs is testing Devin, an AI software engineer developed by Cognition, potentially deploying thousands of instances to augment its human workforce. This move signals a significant shift towards AI adoption in the financial sector.
5 Sources
Technology
13 hrs ago
RealSense, Intel's depth-sensing camera technology division, has spun out as an independent company, securing $50 million in Series A funding to scale its 3D perception technology for robotics, AI, and computer vision applications.
13 Sources
Technology
13 hrs ago
AI adoption is rapidly increasing across businesses and consumers, with tech giants already looking beyond AGI to superintelligence, suggesting the AI revolution may be further along than publicly known.
2 Sources
Technology
21 hrs ago
Elon Musk's artificial intelligence company xAI is preparing for a new funding round that could value the company at up to $200 billion, marking a significant increase from its previous valuation and positioning it as one of the world's most valuable private companies.
3 Sources
Business and Economy
12 hrs ago
A security researcher discovered a method to manipulate ChatGPT into divulging Windows product keys, highlighting potential vulnerabilities in AI safety measures and raising concerns about data security.
3 Sources
Technology
12 hrs ago