Curated by THEOUTPOST
On Mon, 26 Aug, 4:02 PM UTC
2 Sources
[1]
How to identify AI-generated images
First, the bad news: it's really hard to detect AI-generated images. The telltale signs that used to be giveaways -- warped hands and jumbled text -- are increasingly rare as AI models improve at a dizzying pace. It's no longer obvious which images were created with popular tools like Midjourney, Stable Diffusion, DALL-E, and Gemini. In fact, AI-generated images are duping people more often than ever, which has created major problems with the spread of misinformation.

The good news is that identifying AI-generated images usually isn't impossible, but it takes more effort than it used to. One option is a dedicated AI image detector. These tools use computer vision to examine pixel patterns and estimate the likelihood that an image is AI-generated. That means AI detectors aren't completely foolproof, but they're a good way for the average person to judge whether an image merits some scrutiny -- especially when it's not immediately obvious.

"Unfortunately, for the human eye -- and there are studies -- it's about a fifty-fifty chance that a person gets it," said Anatoly Kvitnitsky, CEO of AI image detection platform AI or Not. "But for AI detection for images, due to the pixel-like patterns, those still exist, even as the models continue to get better."

Kvitnitsky claims AI or Not achieves a 98 percent accuracy rate on average. Other AI detectors with generally high success rates include Hive Moderation, the SDXL Detector on Hugging Face, and Illuminarty. We tested ten AI-generated images on all of these detectors to see how they did.

AI or Not gives a simple "yes" or "no," unlike other AI image detectors, and it correctly said our test image was AI-generated. With the free plan, you get 10 uploads a month. We tried it with 10 images and got an 80 percent success rate.

We tried Hive Moderation's free demo tool with over 10 different images and got a 90 percent overall success rate, meaning it flagged most of them as having a high probability of being AI-generated. However, it failed to detect the AI qualities of an artificial image of a chipmunk army scaling a rock wall.

The SDXL Detector on Hugging Face takes a few seconds to load, and you might get an error on the first try, but it's completely free. Instead of a yes-or-no answer, it gives a probability percentage. It said 70 percent of our AI-generated images had a high probability of being generative AI.

Illuminarty has a free plan that provides basic AI image detection. Out of the 10 AI-generated images we uploaded, it correctly classified only 50 percent as AI. To the horror of rodent biologists, it gave the infamous rat dick image a low probability of being AI-generated.

As you can see, AI detectors are mostly pretty good, but they're not infallible and shouldn't be used as the only way to authenticate an image. Sometimes they detect deceptive AI-generated images even though they look real, and sometimes they get it wrong with images that are clearly AI creations. This is exactly why a combination of methods is best.

Another way to detect AI-generated images is a simple reverse image search, which is what Bamshad Mobasher, professor of computer science and director of the Center for Web Intelligence at DePaul University's College of Computing and Digital Media in Chicago, recommends. By uploading an image to Google Images or a reverse image search tool, you can trace its provenance. If the photo shows an ostensibly real news event, "you may be able to determine that it's fake or that the actual event didn't happen," said Mobasher.
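For readers comfortable with a little Python, the same kind of pixel-pattern check can be run locally. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and an openly hosted AI-image-detector checkpoint; the model ID shown is an assumption, so substitute whichever detector you trust (the SDXL Detector mentioned above is one such hosted model):

```python
# Minimal sketch: run an open-source AI-image detector locally.
# Requires: pip install transformers pillow torch
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="Organika/sdxl-detector",  # assumed checkpoint name; swap in your preferred detector
)

# The pipeline accepts a local file path, a URL, or a PIL.Image.
results = detector("suspicious_image.jpg")

for result in results:
    # Each entry is a dict like {"label": "artificial", "score": 0.93}.
    print(f"{result['label']}: {result['score']:.2%}")
```

As with the web-based tools above, treat the score as a prompt for further scrutiny, not a verdict.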
Google Search also has an "About this Image" feature that provides contextual information, like when the image was first indexed and where else it appeared online. You can find it by clicking the three-dots icon in the upper right corner of an image.

Speaking of which, while AI-generated images are getting scarily good, it's still worth looking for the telltale signs. As mentioned above, you might still occasionally see an image with warped hands, hair that looks a little too perfect, or text within the image that's garbled or nonsensical. Our sibling site PCMag's breakdown recommends looking in the background for blurred or warped objects, or subjects with flawless -- and we mean no pores, flawless -- skin.

At first glance, the Midjourney image below looks like a Kardashian relative promoting a cookbook that could easily be from Instagram. But upon further inspection, you can see the contorted sugar jar, warped knuckles, and skin that's a little too smooth.

"AI can be good at generating the overall scene, but the devil is in the details," wrote Sasha Luccioni, AI and climate lead at Hugging Face, in an email to Mashable. Look for "mostly small inconsistencies: extra fingers, asymmetrical jewelry or facial features, incongruities in objects (an extra handle on a teapot)."

Mobasher, who is also a fellow at the Institute of Electrical and Electronics Engineers (IEEE), said to zoom in and look for "odd details" like stray pixels and other inconsistencies, such as subtly mismatched earrings. "You may find part of the same image with the same focus being blurry but another part being super detailed," Mobasher said. This is especially true in the backgrounds of images. "If you have signs with text and things like that in the backgrounds, a lot of times they end up being garbled or sometimes not even like an actual language," he added.

This image of a parade of Volkswagen vans driving down a beach was created by Google's Imagen 3. The sand and buses look flawlessly photorealistic. But look closely, and you'll notice the lettering on the third bus, where the VW logo should be, is just a garbled symbol, and there are amorphous splotches on the fourth bus.

None of the above methods will be all that useful if you don't first pause while consuming media -- particularly social media -- to wonder whether what you're seeing is AI-generated in the first place. Much like media literacy, which became a popular concept around the misinformation-rampant 2016 election, AI literacy is the first line of defense for determining what's real.

AI researchers Duri Long and Brian Magerko define AI literacy as "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace." Knowing how generative AI works and what to look for is key.

"It may sound cliche, but taking the time to verify the provenance and source of the content you see on social media is a good start," said Luccioni.

Start by asking yourself about the source of the image in question and the context in which it appears. Who published the image? What does the accompanying text (if any) say about it? Have other people or media outlets published the image? How does the image, or the text accompanying it, make you feel? If it seems designed to enrage or entice you, think about why.

As we've seen, so far the methods by which individuals can discern AI images from real ones are patchy and limited.
To make matters worse, the spread of illicit or harmful AI-generated images is a double whammy: the posts circulate falsehoods, which then breed mistrust in online media. But in the wake of generative AI, several initiatives have sprung up to bolster trust and transparency.

The Coalition for Content Provenance and Authenticity (C2PA) was founded by Adobe and Microsoft, and includes tech companies like OpenAI and Google, as well as media companies like Reuters and the BBC. C2PA provides clickable Content Credentials for identifying the provenance of images and whether they're AI-generated. However, it's up to creators to attach Content Credentials to an image.

On the flip side, the Starling Lab at Stanford University is working hard to authenticate real images. Starling Lab verifies "sensitive digital records, such as the documentation of human rights violations, war crimes, and testimony of genocide," and securely stores verified digital images in decentralized networks so they can't be tampered with. The lab's work isn't user-facing, but its library of projects is a good resource for someone looking to authenticate images of, say, the war in Ukraine, or the presidential transition from Donald Trump to Joe Biden.

Experts often talk about AI images in the context of hoaxes and misinformation, but AI imagery isn't always meant to deceive, per se. AI images are sometimes just jokes or memes removed from their original context, or they're lazy advertising. Or maybe they're just a form of creative expression with an intriguing new technology. But for better or worse, AI images are a fact of life now. And it's up to you to detect them.
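To see what Content Credentials actually look like on disk, here is a minimal heuristic sketch, assuming a JPEG input: it only checks whether the file carries an APP11 segment containing a C2PA-labeled JUMBF box, and it does not validate signatures or edit history the way the official C2PA tools do. Keep in mind that the absence of a manifest proves nothing about whether an image is AI-generated, since attaching credentials is up to the creator.

```python
# Heuristic sketch: detect whether a JPEG embeds a C2PA (Content Credentials) manifest.
# This is not the official verifier; it only looks for the "c2pa" JUMBF label in APP11 segments.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    # JPEG files begin with the SOI marker 0xFFD8.
    if not data.startswith(b"\xff\xd8"):
        raise ValueError("not a JPEG file")

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:
            break  # lost sync with the marker stream; stop scanning
        marker = data[offset + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows, no more headers
            break
        # Segment length covers the two length bytes plus the payload.
        (seg_len,) = struct.unpack(">H", data[offset + 2 : offset + 4])
        segment = data[offset + 2 : offset + 2 + seg_len]
        # C2PA manifests live in APP11 (0xFFEB) segments as JUMBF boxes labeled "c2pa".
        if marker == 0xEB and b"c2pa" in segment:
            return True
        offset += 2 + seg_len
    return False

if __name__ == "__main__":
    image_path = sys.argv[1] if len(sys.argv) > 1 else "photo.jpg"
    found = has_c2pa_manifest(image_path)
    print("Content Credentials manifest found" if found else "No C2PA manifest detected")
```

For a full signature check and edit history, the open-source C2PA tooling or a Content Credentials viewer is the appropriate route; this sketch only answers "is a manifest present at all?"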
[2]
How Big Tech is approaching explicit, nonconsensual deepfakes
From Meta to Snapchat, here's a rundown of each platform's deepfake policies -- and where they fall short.

Credit: Stacey Zhu; Dmitry Morgan, Nana_studio, Yaroslav Astakhov/Shutterstock.com

In pursuit of technological innovation, generative AI's advocates have thrust the tools for highly realistic, nonconsensual, synthetic forgeries -- more commonly known as deepfake porn -- into the hands of the average Joe. Ads for "nudify" undressing apps may appear on the sidebars of popular websites and in between Facebook posts, while manipulated sexual images of public figures spread as trending fodder for the masses. The problem has trickled down through the online sphere into the real lives of users, including young people. Implicated in it all are AI's creators and distributors.

Government leaders are attacking the problem through piecemeal legislative efforts. The tech and social sectors are balancing their responsibility to users with the need for innovation. But deepfakes are a hard concept to fight with the weapon of corporate policy. Solving the deepfake problem is made more difficult by just how hard it is to pinpoint deepfakes, not to mention widespread disagreement over who is responsible for nonconsensual synthetic forgeries.

Advocacy and research organization the Cyber Civil Rights Initiative, which fights against the nonconsensual distribution of intimate images (NDII), defines sexually explicit digital forgeries as any manipulated photos or videos that falsely (and almost indistinguishably) depict an actual person nude or engaged in sexual conduct. NDII doesn't inherently involve AI (think Photoshop), but generative AI tools are now commonly associated with their ability to create deepfakes -- a catchall term coined in 2017 that has come to mean any manipulated visual or auditory likeness.

Broadly, "deepfake" images could refer to minor edits or a completely unreal rendering of a person's likeness. Some may be sexually explicit, but even more are not. They can be consensually made, or used as a form of Image-Based Sexual Abuse (IBSA). They can be regulated or policed from the moment of their creation or earlier, through the policies and imposed limitations of AI tools themselves, or after their creation, as they're spread online. They could even be outlawed completely, or curbed by criminal or civil liability for their makers or distributors, depending on the intent.

Companies, defining the threat of nonconsensual deepfakes independently, have chosen to view sexual synthetic forgeries in several ways: as a crime addressed through direct policing, as a violation of existing terms of service (like those covering "revenge porn" or misinformation), or, simply, as not their responsibility. Here's a list of just some of those companies, how they fit into the picture, and their own stated policies touching on deepfakes.

AI developers like Anthropic and its competitors are answerable for products and systems that can be used to generate synthetic content. To many, that means they also hold more liability for their tools' outputs and users. Advertising itself as a safety-first AI company, Anthropic has maintained a strict anti-NSFW policy, using fairly ironclad terms of service and abuse filters to try to curb bad user behavior from the start. It's also worth noting that Anthropic's Claude chatbot is not allowed to generate images of any kind.
Our Acceptable Use Policy (AUP) prohibits the use of our models to generate deceptive or misleading content, such as engaging in coordinated inauthentic behavior or disinformation campaigns. This also includes a prohibition on using our services to impersonate a person by presenting results as human-generated or using results in a manner intended to convince a natural person that they are communicating with a natural person. Users cannot generate sexually explicit content. This includes the usage of our products or services to depict or request sexual intercourse or sex acts, generate content related to sexual fetishes or fantasies, facilitate, promote, or depict incest or bestiality, or engage in erotic chats. Users cannot create, distribute, or promote child sexual abuse material. We strictly prohibit and will report to relevant authorities and organizations where appropriate any content that exploits or abuses minors.

In contrast to companies like Anthropic, tech conglomerates play the role of host or distributor for synthetic content. Social platforms, for example, provide opportunities for users to swap images and videos. Online marketplaces, like app stores, become avenues for bad actors to sell or access generative AI tools and their building blocks. As companies dive deeper into AI, though, these roles are becoming more blurred.

Recent scrutiny has fallen on Apple's App Store and other marketplaces for allowing explicit deepfake apps. While its App Store policies aren't as direct as those of competitors, notably Google Play, the company has reinforced anti-pornography policies in both its advertising and store rules. But controversy remains across the wide array of Apple products. In recent months, the company has been accused of underreporting the role of its devices and services in the spread of both real and AI-generated child sexual abuse materials. And Apple's recent launch of Apple Intelligence will pose new policing questions.

Apple News does not allow ad content that promotes adult-oriented themes or graphic content -- for example, pornography, Kama Sutra, erotica, or content that promotes "how to" and other sex games. Apple App Store offerings cannot include content that is overtly sexual or pornographic, defined as "explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings." This includes "hookup" apps and other apps that may include pornography or be used to facilitate prostitution, or human trafficking and exploitation. Apps with user-generated content or services that end up being used primarily for pornographic content, Chatroulette-style experiences, objectification of real people (e.g. "hot-or-not" voting), making physical threats, or bullying do not belong on the App Store and may be removed without notice.

GitHub, as a platform for developers to create, store, and share projects, treats the building and advertising of any nonconsensual explicit imagery as a violation of its Acceptable Use Policy -- similar to misinformation. It offers its own generative AI assistant for coding, but doesn't provide any visual or audio outputs. GitHub does not allow any projects that are designed for, encourage, promote, support, or suggest in any way the use of synthetic or manipulated media for the creation of non-consensual intimate imagery or any content that would constitute misinformation or disinformation under this policy.
Google plays a multifaceted role in the creation of synthetic images, as both host and developer. It has announced several policy changes to curb both access to and the dissemination of nonconsensual synthetic content in Search, as well as advertising of "nudify" apps in Google Play. This came after the tech giant was called out for its role in surfacing nonconsensual digital forgeries on Google.com.

AI-generated synthetic porn will be ranked lower in Google Search. Users can ask Google to remove explicit nonconsensual fake imagery. Shopping ads cannot promote services that generate, distribute, or store synthetic sexually explicit content or synthetic content containing nudity, nor can they provide instructions on the creation of such content. Developers on the Google Play Store must ensure generative AI apps do not generate offensive content, including prohibited content, content that may exploit or abuse children, and content that can deceive users or enable dishonest behaviors.

As a host for content, YouTube has prioritized moderating user uploads and providing reporting mechanisms for the subjects of forgeries. Explicit content meant to be sexually gratifying is not allowed on YouTube, and posting pornography may result in content removal or channel termination. Creators are required to disclose [altered or synthetic content] when it's realistic, meaning that a viewer could easily mistake what's being shown for a real person, place, or event. If someone has used AI to alter or create synthetic content that looks or sounds like you, you can ask for it to be removed; to qualify for removal, the content should depict a realistic altered or synthetic version of your likeness.

Microsoft offers its own generative AI tools, including image generators hosted on Bing and Copilot, that also harness external AI models like OpenAI's DALL-E 3. The company applies its larger content policies to users engaging with this AI, and has instituted prompt safeguards and watermarking, but it likely bears the responsibility for anything that falls through the cracks. Microsoft does not allow the sharing or creation of sexually intimate images of someone without their permission -- also called nonconsensual intimate imagery, or NCII. This includes photorealistic NCII content that was created or altered using technology. Bing does not permit the use of Image Creator to create or share adult content, violence or gore, hateful content, terrorism and violent extremist content, glorification of violence, child sexual exploitation or abuse material, or content that is otherwise disturbing or offensive.

OpenAI is one of the biggest names in AI development, and its models and products are incorporated into -- or are the foundations of -- many of the generative AI tools offered by companies worldwide. OpenAI retains strong terms of use to try to protect itself from the ripple effects of such widespread use of its AI models. In May, OpenAI announced it was exploring the possibility of allowing NSFW outputs in age-appropriate contexts on its own ChatGPT and associated API. Up until that point, the company had remained firm in banning any such content. OpenAI told Mashable at the time that despite the potential chatbot uses, the company still prohibited AI-generated pornography and deepfakes. Users can't repurpose or distribute output from OpenAI services to harm others.
Examples include using output to defraud, scam, spam, mislead, bully, harass, defame, discriminate based on protected attributes, sexualize children, or promote violence, hatred, or the suffering of others. Users cannot use OpenAI technologies to impersonate another individual or organization without consent or legal right. Users cannot build tools that may be inappropriate for minors, including sexually explicit or suggestive content.

While parent company Meta continues to explore generative AI integration on its platforms, it has come under intense scrutiny for failing to curb explicit synthetic forgeries and IBSA. Following widespread controversy, Facebook has taken a stricter stance on nudify apps advertising on the site. Meta, meanwhile, has turned toward stronger AI labelling efforts and moderation, as its Oversight Board reviews Meta's power to address sexually explicit and suggestive AI-generated content.

To protect survivors, we remove images that depict incidents of sexual violence and intimate images shared without the consent of the person(s) pictured. We do not allow content that attempts to exploit people by: coercing money, favors, or intimate imagery from people with threats to expose their intimate imagery or intimate information (sextortion); or sharing, threatening, stating an intent to share, offering, or asking for non-consensual intimate imagery (NCII)... We do not allow promoting, threatening to share, or offering to make non-real non-consensual intimate imagery (NCII), either by applications, services, or instructions, even if there is no (near) nude commercial or non-commercial imagery shared in the content.

Instagram similarly moderates visual media posted to its site, bolstered by its community guidelines. We don't allow nudity on Instagram. This includes photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks.

Snapchat's generative AI tools do include limited image generation, so its potential liability stems from its reputation as a site known for sexual content swapping and as a possible creator of synthetic explicit images. We prohibit promoting, distributing, or sharing pornographic content. We also don't allow commercial activities that relate to pornography or sexual interactions (whether online or offline). Don't use My AI to generate political, sexual, harassing, or deceptive content, spam, malware, or content that promotes violence, self-harm, human trafficking, or that would violate our Community Guidelines.

TikTok, which has its own creative AI suite known as TikTok Symphony, has recently waded into murkier generative AI waters after launching AI-generated digital avatars. It appears the company's legal and ethical standing will rest on establishing proof of consent for AI-generated likenesses. TikTok has general community guidelines against nudity, the exposure of young people's bodies, and sexual activity or services. AI-generated content containing the likeness (visual or audio) of a real or fictional person isn't allowed, even if disclosed with the AI-generated content label, and may be removed. This applies to AI-generated content featuring a public figure -- adults (18 years and older) with a significant public role, such as a government official, politician, business leader, or celebrity -- when used for political or commercial endorsements. Content featuring a private figure (any person who isn't a public figure, including people under 18 years old) is also prohibited.
Elon Musk's artificial intelligence venture, xAI, has recently added image generation to its platform chatbot Grok, and the image generator is capable of some eyebrow-raising facsimiles of celebrities. Grok's interface is built right into the X platform, which is in turn a major forum for users to share their own content, moderated haphazardly through the site's community and advertising guidelines. X recently announced new policies that allow consensual adult content on the platform, but did not address the posting of sexual digital forgeries, consensual or otherwise.

You may not post or share intimate photos or videos of someone that were produced or distributed without their consent. We will immediately and permanently suspend any account that we identify as the original poster of intimate media that was created or shared without consent. We will do the same with any account that posts only this type of content, e.g., accounts dedicated to sharing upskirt images. You can't post or share explicit images or videos that were taken, appear to have been taken, or that were shared without the consent of the people involved. This includes images or videos that superimpose or otherwise digitally manipulate an individual's face onto another person's nude body.
As AI-generated images become more prevalent, concerns about their impact on society grow. This story explores methods to identify AI-created images and examines how major tech companies are addressing the issue of explicit deepfakes.
In recent years, the proliferation of artificial intelligence (AI) technology has led to a surge in AI-generated images, raising concerns about their potential impact on society. As these images become increasingly sophisticated and difficult to distinguish from real photographs, the need for effective identification methods and robust policies has become more pressing [1].
Experts have developed several strategies to help identify AI-generated images: running them through AI detection tools such as AI or Not, Hive Moderation, the SDXL Detector on Hugging Face, and Illuminarty; performing a reverse image search or using Google's "About this Image" feature to trace an image's provenance; zooming in for telltale signs like warped hands, garbled text, and unnaturally smooth skin; and practicing basic AI literacy by questioning the source and intent of what appears in your feed [1].
As AI technology advances, a particularly concerning application has emerged: explicit deepfakes. These are AI-generated images or videos that superimpose a person's likeness onto explicit content without their consent. This practice raises serious ethical and legal questions, prompting major tech companies to address the issue [2].
Several prominent social media and technology companies have implemented policies to combat the spread of explicit deepfakes: AI developers such as Anthropic and OpenAI prohibit sexually explicit or impersonating outputs; marketplaces and hosts like Apple, Google Play, and GitHub bar apps and projects built to create nonconsensual intimate imagery; and platforms including Meta, Instagram, Snapchat, TikTok, YouTube, Microsoft, and X remove nonconsensual intimate imagery and offer reporting or removal mechanisms for the people depicted [2].
While these policies are a step in the right direction, experts argue that they may not go far enough. Many platforms rely on user reports to identify problematic content, which can be ineffective for victims who may not be aware of the existence of deepfakes featuring their likeness. Additionally, the policies often focus on explicit content, potentially overlooking other harmful applications of deepfake technology [2].
As AI technology continues to evolve, the challenge of identifying and regulating AI-generated images will likely become more complex. While tools and techniques for detection are improving, they may struggle to keep pace with advancements in AI image generation. This ongoing battle highlights the need for continued research, policy development, and public awareness to mitigate the potential negative impacts of AI-generated content on individuals and society as a whole.