Curated by THEOUTPOST
On Mon, 3 Mar, 7:04 PM UTC
2 Sources
[1]
New software will help combat deep fake image threats to personal security
Realistic images created by artificial intelligence (AI), including those generated from a text description and those used in video, pose a genuine threat to personal security. From identity theft to misuse of a personal image, spotting what's real and what's fake is getting harder and harder.

A research collaboration involving the University of Portsmouth's Artificial Intelligence and Data Science (PAIDS) Research Center has developed an innovative solution that accurately distinguishes between fake and genuine images and identifies the source of an artificial image. The solution, known as DeepGuard, combines three advanced AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labeled data, making smarter and more reliable predictions. The tool can be used to investigate and prosecute criminal activity such as fraud, or by the media to verify that images used in their stories are authentic, preventing misinformation or unintentional bias.

DeepGuard was developed by a research team led by Dr. Gueltoum Bendiab and Yasmine Namani from the Department of Electronics at the University of Frères Mentouri in Algeria, together with Dr. Stavros Shiaeles from the University of Portsmouth's PAIDS Research Center and School of Computing.

Dr. Shiaeles said, "With ever-evolving technological capabilities, it will be a constant challenge to spot fake images with the human eye. Manipulated images pose a significant threat to our privacy and security as they can be used to forge documents for blackmail, undermine elections, falsify electronic evidence and damage reputations, and can even be used to incite harm, by adults, to children.

"People are also profiteering disingenuously on social media platforms like TikTok, where images of models are being turned into characters and animated in different scenarios in games or for entertainment.
"DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts." The research, published in Electronics, will also support further academic research in this area, with additional datasets available to academics. During its development, the team reviewed and analyzed methods for both image manipulation and detection, focusing specifically on fake images involving facial and bodily alterations. They considered 255 research articles published between 2016 and 2023 that examined various techniques for detecting manipulated images -- such as changes in expression, pose, voice, or other facial or bodily features.
[2]
New Software Will Help Combat Deep Fake Image Threats to Personal Security | Newswise
Anyone who has access to an image or images that they believe to be connected to illegal activity can contact the Police Fraud and Cyber Security Unit or, if it is a child or adult safeguarding issue, contact their local Police Constabulary on the non-emergency 101 telephone number. If a child or adult is in immediate danger of harm, the emergency 999 number should be used.
Researchers develop DeepGuard, an innovative AI-powered solution to distinguish between fake and genuine images, addressing growing concerns about deepfake threats to personal security and misinformation.
In an era where artificial intelligence (AI) can create increasingly realistic fake images, a new software tool called DeepGuard has emerged as a potential solution to combat deepfake threats. Developed by a research collaboration involving the University of Portsmouth's Artificial Intelligence and Data Science (PAIDS) Research Center, DeepGuard aims to accurately distinguish between fake and genuine images, as well as identify the source of artificial images [1][2].
As AI technology advances, the ability to create convincing fake images and videos poses significant risks to personal security and societal trust. These deepfakes can be used for various malicious purposes, including identity theft, forging documents for blackmail, undermining elections, falsifying electronic evidence, and damaging reputations. Distinguishing between real and fake images with the human eye is becoming increasingly difficult, necessitating the development of advanced technological solutions [1][2].
DeepGuard combats deepfake threats by combining three advanced AI techniques: binary classification, ensemble learning, and multi-class classification. These methods enable the AI to learn from labeled data, resulting in smarter and more reliable predictions. The software can not only differentiate between fake and genuine images but also identify the source of artificial images [1][2].
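The interplay of the three techniques can be sketched, purely for illustration, with scikit-learn: an ensemble of binary classifiers votes on whether an image is genuine or fake, and a separate multi-class classifier attributes a fake image to a generator. All model choices, feature vectors, and the `analyse` helper below are hypothetical, not the published DeepGuard architecture.

```python
# Illustrative sketch only: toy features stand in for per-image features,
# and the specific models are assumptions, not DeepGuard's actual design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # toy per-image feature vectors
y_real_fake = rng.integers(0, 2, size=200)      # 0 = genuine, 1 = fake
y_source = rng.integers(0, 3, size=200)         # which generator made the fake

# Binary classification strengthened by ensemble learning:
# several learners vote on the genuine/fake label.
binary_ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",
)
binary_ensemble.fit(X, y_real_fake)

# Multi-class classification attributes a fake image to a source generator.
source_clf = RandomForestClassifier(n_estimators=50, random_state=0)
source_clf.fit(X, y_source)

def analyse(features):
    """Label an image genuine/fake; if fake, attribute its source."""
    is_fake = bool(binary_ensemble.predict(features[None, :])[0])
    if not is_fake:
        return {"fake": False, "source": None}
    return {"fake": True, "source": int(source_clf.predict(features[None, :])[0])}

result = analyse(X[0])
```

Keeping detection and attribution as separate supervised stages mirrors the article's description: the binary ensemble answers "is it fake?", and the multi-class stage only runs when attribution is meaningful.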
The DeepGuard project was led by Dr. Gueltoum Bendiab and Yasmine Namani from the University of Frères Mentouri in Algeria, with collaboration from Dr. Stavros Shiaeles of the University of Portsmouth's PAIDS Research Center and School of Computing [1][2].
During the development process, the research team reviewed and analyzed methods for both image manipulation and detection, focusing on fake images involving facial and bodily alterations, and considered 255 research articles published between 2016 and 2023 [1][2].
DeepGuard has several potential applications across various sectors: it can support the investigation and prosecution of criminal activity such as fraud, and help the media verify that images used in their stories are authentic, preventing misinformation or unintentional bias [1][2].
Dr. Shiaeles emphasized the importance of DeepGuard, stating, "DeepGuard, and future iterations, should prove to be a valuable security measure for verifying images, including those in videos, in a wide range of contexts" [1][2].
As technology continues to evolve, the battle against deepfakes will likely remain an ongoing challenge. DeepGuard represents a significant step forward in this fight, offering a tool that can help maintain the integrity of visual information in an increasingly digital world. The research team's work, published in Electronics, will contribute to the growing body of knowledge in this critical field [1][2].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved