2 Sources
[1]
New software will help combat deep fake image threats to personal security
Realistic images created by artificial intelligence (AI), including those generated from a text description and those used in video, pose a genuine threat to personal security. From identity theft to misuse of a personal image, spotting what's real and what's fake is getting harder and harder.

A research collaboration involving the University of Portsmouth's Artificial Intelligence and Data Science (PAIDS) Research Center has developed an innovative solution to accurately distinguish between fake and genuine images, as well as identify the source of the artificial image. The solution, known as DeepGuard, combines three advanced AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labeled data, making smarter and more reliable predictions.

It is a tool that can be used to investigate and prosecute criminal activity such as fraud, or by the media to ensure images used in their stories are authentic to prevent misinformation or unintentional bias.

DeepGuard has been developed by a research team led by Dr. Gueltoum Bendiab and Yasmine Namani from the Department of Electronics at the University of Frères Mentouri in Algeria, and involving Dr. Stavros Shiaeles from the University's PAIDS Research Center and School of Computing.

Dr. Shiaeles said, "With ever-evolving technological capabilities, it will be a constant challenge to spot fake images with the human eye. Manipulated images pose a significant threat to our privacy and security as they can be used to forge documents for blackmail, undermine elections, falsify electronic evidence and damage reputations, and can even be used to incite harm, by adults, to children.

"People are also profiteering disingenuously on social media platforms like TikTok, where images of models are being turned into characters and animated in different scenarios in games or for entertainment.
"DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts."

The research, published in Electronics, will also support further academic research in this area, with additional datasets available to academics.

During its development, the team reviewed and analyzed methods for both image manipulation and detection, focusing specifically on fake images involving facial and bodily alterations. They considered 255 research articles published between 2016 and 2023 that examined various techniques for detecting manipulated images -- such as changes in expression, pose, voice, or other facial or bodily features.
[2]
New Software Will Help Combat Deep Fake Image Threats to Personal Security | Newswise
Newswise -- Realistic images created by artificial intelligence (AI), including those generated from a text description and those used in video, pose a genuine threat to personal security. From identity theft to misuse of a personal image, spotting what's real and what's fake is getting harder and harder.

A research collaboration involving the University of Portsmouth's Artificial Intelligence and Data Science (PAIDS) Research Centre has developed an innovative solution to accurately distinguish between fake and genuine images, as well as identify the source of the artificial image. The solution, known as 'DeepGuard', combines three advanced AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labelled data, making smarter and more reliable predictions.

It is a tool that can be used to investigate and prosecute criminal activity such as fraud, or by the media to ensure images used in their stories are authentic to prevent misinformation or unintentional bias.

DeepGuard has been developed by a research team led by Dr Gueltoum Bendiab and Yasmine Namani from the Department of Electronics at the University of Frères Mentouri in Algeria, and involving Dr Stavros Shiaeles from the University's PAIDS Research Centre and School of Computing.

Dr Shiaeles said: "With ever-evolving technological capabilities, it will be a constant challenge to spot fake images with the human eye. Manipulated images pose a significant threat to our privacy and security as they can be used to forge documents for blackmail, undermine elections, falsify electronic evidence and damage reputations, and can even be used to incite harm, by adults, to children.

"People are also profiteering disingenuously on social media platforms like TikTok, where images of models are being turned into characters and animated in different scenarios in games or for entertainment.
"DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts."

The research, published in The Journal of Information Security and Applications, will also support further academic research in this area, with additional datasets available to academics.

During its development, the team reviewed and analysed methods for both image manipulation and detection, focusing specifically on fake images involving facial and bodily alterations. They considered 255 research articles published between 2016 and 2023 that examined various techniques for detecting manipulated images - such as changes in expression, pose, voice, or other facial or bodily features.

Anyone who has access to an image or images that they believe to be connected to illegal activity can contact the Police Fraud and Cyber Security Unit or, if it is a child or adult safeguarding issue, contact their local Police Constabulary on the non-emergency 101 telephone number. If a child or adult is in immediate danger of harm, the emergency 999 number should be used.
Researchers develop DeepGuard, an innovative AI-powered solution to distinguish between fake and genuine images, addressing growing concerns about deepfake threats to personal security and misinformation.
In an era where artificial intelligence (AI) can create increasingly realistic fake images, a new software tool called DeepGuard has emerged as a potential solution to combat deepfake threats. Developed by a research collaboration involving the University of Portsmouth's Artificial Intelligence and Data Science (PAIDS) Research Center, DeepGuard aims to accurately distinguish between fake and genuine images, as well as identify the source of artificial images [1][2].
As AI technology advances, the ability to create convincing fake images and videos poses significant risks to personal security and societal trust. These deepfakes can be used for various malicious purposes, including identity theft, forging documents for blackmail, undermining elections, falsifying electronic evidence, damaging reputations, and even inciting harm to children [1][2].
The challenge of distinguishing between real and fake images with the human eye is becoming increasingly difficult, necessitating the development of advanced technological solutions [1][2].
DeepGuard combines three advanced AI techniques to combat deepfake threats: binary classification, ensemble learning, and multi-class classification [1][2].
These methods enable the AI to learn from labeled data, resulting in smarter and more reliable predictions. The software can not only differentiate between fake and genuine images but also identify the source of artificial images [1][2].
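To illustrate how these three techniques can fit together, the sketch below trains an ensemble of binary classifiers to vote on real vs. fake, then a separate multi-class classifier to attribute detected fakes to a source. This is a minimal, hypothetical example on synthetic feature vectors: the features, models, and class labels are stand-ins, not DeepGuard's actual implementation.

```python
# Minimal sketch: binary classification + ensemble learning for detection,
# multi-class classification for source attribution (synthetic data).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic "image feature" vectors; label 0 = genuine, 1 = fake.
X = rng.normal(size=(200, 8))
y_binary = (X[:, 0] + X[:, 1] > 0).astype(int)

# Ensemble learning: two binary classifiers vote (real vs. fake).
detector = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=500)),
        ("dt", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ],
    voting="soft",
)
detector.fit(X, y_binary)

# Multi-class classification: attribute fakes to a source family.
# Labels 1 and 2 are placeholders for hypothetical generator classes.
fakes = X[y_binary == 1]
y_source = (fakes[:, 2] > 0).astype(int) + 1
attributor = LogisticRegression(max_iter=500)
attributor.fit(fakes, y_source)

# Two-stage inference: detect first, attribute only if flagged as fake.
sample = X[:1]
verdict = detector.predict(sample)[0]          # 0 = genuine, 1 = fake
if verdict == 1:
    source = attributor.predict(sample)[0]     # which generator class
```

The two-stage design mirrors the description above: the ensemble makes the detection decision more reliable than any single classifier, and the multi-class stage only runs on images already flagged as fake.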
The DeepGuard project was led by Dr. Gueltoum Bendiab and Yasmine Namani from the University of Frères Mentouri in Algeria, with collaboration from Dr. Stavros Shiaeles of the University of Portsmouth's PAIDS Research Center and School of Computing [1][2].
During the development process, the research team reviewed and analyzed methods for both image manipulation and detection, focusing specifically on fake images involving facial and bodily alterations, and considered 255 research articles published between 2016 and 2023 on techniques for detecting manipulated images [1][2].
DeepGuard has several potential applications across various sectors: it can be used to investigate and prosecute criminal activity such as fraud, and by the media to verify that images used in their stories are authentic, preventing misinformation and unintentional bias [1][2].
Dr. Shiaeles emphasized the importance of DeepGuard, stating, "DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts" [1][2].
As technology continues to evolve, the battle against deepfakes will likely remain an ongoing challenge. DeepGuard represents a significant step forward in this fight, offering a tool that can help maintain the integrity of visual information in an increasingly digital world. The research team's work, published in Electronics and The Journal of Information Security and Applications, will contribute to the growing body of knowledge in this critical field [1][2].