2 Sources
[1]
Real or AI? It's Harder Than Ever to Spot AI Videos. These Tips Can Help
We've all been advised not to believe everything we see on the internet, and that's never been more true in the age of generative AI. AI-generated videos are everywhere, from deepfakes of celebrities and false disaster broadcasts to viral videos of bunnies on a trampoline. Sora, the AI video generator from ChatGPT's parent company, OpenAI, has only made it more difficult to separate truth from fiction. And the Sora 2 model, which powers OpenAI's brand-new social media app, is becoming more sophisticated by the day.

In the last few months, the TikTok-like app has gone viral, with AI enthusiasts determined to hunt down invite codes. But Sora isn't like any other social media platform: everything you see on Sora is fake, and all the videos are AI-generated. I described it as an AI deepfake fever dream, innocuous at first glance, with dangerous risks lurking just beneath the surface.

From a technical standpoint, Sora videos are impressive compared with competitors such as Midjourney's V1 and Google's Veo 3. They have high resolution, synchronized audio and surprising creativity. Sora's most popular feature, dubbed "cameo," lets you use other people's likenesses and insert them into nearly any AI-generated scene. It's an impressive tool, resulting in scarily realistic videos.

That's why so many experts are concerned about Sora. The app makes it easier for anyone to create dangerous deepfakes, spread misinformation and blur the line between what's real and what's not. Public figures and celebrities are especially vulnerable to these deepfakes, and unions like SAG-AFTRA have pushed OpenAI to strengthen its guardrails. Identifying AI content is an ongoing challenge for tech companies, social media platforms and everyone else. But it's not totally hopeless. Here are some things to look out for to determine whether a video was made using Sora.

Every video made on the Sora iOS app includes a watermark when you download it. It's the white Sora logo -- a cloud icon -- that bounces around the edges of the video, similar to the way TikTok videos are watermarked. Watermarking is one of the most visible ways AI companies can help us spot AI-generated content. Google's Gemini "nano banana" model, for example, automatically watermarks its images.

Watermarks are great because they serve as a clear sign that the content was made with the help of AI. But they aren't perfect. If a watermark is static (not moving), it can easily be cropped out. Even for moving watermarks like Sora's, there are apps designed specifically to remove them, so watermarks alone can't be fully trusted.
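To make concrete why a bouncing mark is harder to defeat than a static one, here is a minimal sketch -- not OpenAI's actual pipeline -- that uses ffmpeg's overlay filter to move a logo back and forth across the frame. The file names are placeholders, and ffmpeg must be installed separately.

```python
import subprocess

def add_bouncing_watermark(video: str, logo: str, out: str) -> None:
    """Overlay a logo that bounces between the frame edges, Sora-style.

    Requires ffmpeg on PATH. The triangle-wave expressions below move the
    overlay across the frame as time t advances, so no single static crop
    rectangle can remove it from every frame.
    """
    bounce = (
        "overlay="
        "x='abs(mod(t*80,2*(W-w))-(W-w))':"   # sweep 0..(W-w) and back
        "y='abs(mod(t*60,2*(H-h))-(H-h))'"    # sweep 0..(H-h) and back
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", video, "-i", logo,
         "-filter_complex", bounce, "-c:a", "copy", out],
        check=True,
    )

# Placeholder file names for illustration.
add_bouncing_watermark("input.mp4", "logo.png", "watermarked.mp4")
```

Because the logo's position is a function of time, cropping can't remove it; that's exactly why removal apps have to resort to inpainting, which degrades the video.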
When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, prior to OpenAI's Sora, there wasn't a popular, easily accessible, no-skill-needed way to make those videos. But his argument raises a valid point about the need to rely on other methods to verify authenticity.

I know, you're probably thinking that there's no way you're going to check a video's metadata to determine if it's real. I understand where you're coming from; it's an extra step, and you might not know where to start. But it's a great way to determine whether a video was made with Sora, and it's easier than you think.

Metadata is a collection of information automatically attached to a piece of content when it's created, and it gives you more insight into how an image or video was made. It can include the type of camera used to take a photo, the location, the date and time a video was captured, and the filename. Every photo and video has metadata, whether it was human- or AI-created, and a lot of AI-created content carries content credentials that denote its AI origins, too.

OpenAI is part of the Coalition for Content Provenance and Authenticity (C2PA), which, for you, means that Sora videos include C2PA metadata. You can use the Content Authenticity Initiative's verification tool (the initiative is part of the C2PA effort) to check a video, image or document's metadata. Here's how:

1. Navigate to https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Check the information in the right-side panel. If the file is AI-generated, the content summary section should say so.

When you run a Sora video through this tool, it'll say the video was "issued by OpenAI" and note that it's AI-generated. All Sora videos should contain these credentials, letting you confirm they were created with Sora. (A scripted version of this check appears at the end of this article.)

This tool, like all AI detectors, isn't perfect, and there are ways AI videos can avoid detection. Non-Sora videos may not contain the signals in their metadata that the tool needs to determine whether they're AI-created; AI videos made with Midjourney, for example, don't get flagged, as I confirmed in my testing. And even if a video was created with Sora, running it through a third-party app (like a watermark remover) and redownloading it makes it less likely the tool will flag it as AI.

If you're on one of Meta's social media platforms, like Instagram or Facebook, you may get a little help determining whether something is AI. Meta has internal systems in place to flag AI content and label it as such. These systems aren't perfect, but you can clearly see the label on posts that have been flagged. TikTok and YouTube have similar policies for labeling AI content.

The only truly reliable way to know whether something is AI-generated is for the creator to disclose it. Many social media platforms now offer settings that let users label their posts as AI-generated, and even a simple credit or disclosure in a caption can go a long way toward helping everyone understand how something was created. You know while you're scrolling Sora that nothing is real, but once you leave the app and share AI-generated videos, it's our collective responsibility to disclose how a video was created. As AI models like Sora continue to blur the line between reality and AI, it's up to all of us to make it as clear as possible whether something is real.

There's no single foolproof method to tell at a glance whether a video is real or AI. The best thing you can do to avoid being duped is to not automatically, unquestioningly believe everything you see online. Follow your gut instinct: if something feels unreal, it probably is. In these unprecedented, AI-slop-filled times, your best defense is to inspect the videos you're watching more closely. Don't just glance and scroll away without thinking; check for mangled text, disappearing objects and physics-defying motions. And don't beat yourself up if you get fooled occasionally; even experts get it wrong.
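For those who'd rather script the check above than use the web uploader, here is a minimal sketch that shells out to c2patool, the C2PA project's open-source command-line tool. It assumes c2patool prints the embedded manifest store as JSON and exits with an error when none is present; the marker strings it searches for ("trainedAlgorithmicMedia", the C2PA digital-source type for generative media, and "OpenAI") are assumptions about how Sora's credentials are labeled, so treat the result as a hint, not proof.

```python
import json
import subprocess
import sys

def c2pa_manifest(path: str) -> dict | None:
    """Read a file's C2PA manifest store via c2patool, if one is embedded.

    Assumes c2patool (https://github.com/contentauth/c2patool) is on PATH
    and prints the manifest store as JSON; returns None when the tool is
    missing or the file carries no content credentials.
    """
    try:
        out = subprocess.run(
            ["c2patool", path], capture_output=True, text=True, check=True,
        ).stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None
    return json.loads(out)

if __name__ == "__main__":
    manifest = c2pa_manifest(sys.argv[1])
    if manifest is None:
        print("No C2PA credentials found (that alone proves nothing).")
    else:
        # Crude scan for generative-AI markers anywhere in the manifest;
        # the exact JSON layout varies by tool and credential version.
        blob = json.dumps(manifest)
        if "trainedAlgorithmicMedia" in blob or "OpenAI" in blob:
            print("Content credentials indicate AI generation.")
        else:
            print("Credentials present, but no explicit AI marker found.")
```

Remember the caveat from above: a missing or stripped manifest doesn't mean a video is real, only that this particular signal is absent.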
(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
[2]
What parents need to know about Sora, the generative AI video app blurring the line between real and fake
The app can create realistic AI-generated videos from simple text prompts. A new generative artificial intelligence app called Sora is quickly becoming one of the most talked-about and debated tech launches of the year. Created by OpenAI, the company behind ChatGPT, Sora allows users to turn written prompts into realistic AI-generated videos in seconds. But experts warn that this innovation comes with potentially significant child-safety risks, from misinformation to the misuse of kids' likenesses.

In an October safety report, Common Sense Media, a nonprofit that monitors children's digital well-being, gave Sora an "Unacceptable Risk" rating for use by kids and teens, citing its "relative lack of safety features" and the potential for misuse of AI-generated video.

"The biggest difference is that Sora is simply better than its competitors," said Michael Dobuski, a technology reporter for ABC News Audio. "Its videos look passably real, not uncanny or obviously computer-generated. That's a double-edged sword, because when something looks real, it's easier to spread misinformation or create harmful videos."

Titania Jordan, chief parent officer at Bark Technologies, a parental-control and online-safety app, told ABC News that Sora's capabilities go far beyond a photo filter or animation tool. "It can create mind-blowingly realistic videos that can fool even the most tech-savvy among us," she explained. "The most important thing for parents to understand is that it can create scenes that look 100% real, but are completely fake. It blurs the line between reality and fiction in a way we've never seen before."

Sora is what's known as a text-to-video platform. Users type a prompt -- for example, "a man riding a bike through the park at sunset" -- and the app generates a lifelike AI video that appears to have been filmed in the real world. (A rough developer-side sketch of this workflow appears at the end of this article.) The app is currently available to the public through OpenAI's platform and iOS app, with initial access included for ChatGPT Plus and Pro subscribers. There's also a free introductory tier, which allows users to create a limited number of short, lower-resolution AI-generated videos each month.

OpenAI's terms of use require users to be at least 13 years old, with anyone under 18 needing parental permission. However, experts say the app's teen protections are minimal and that videos created on Sora can easily be shared on other platforms like TikTok or YouTube, where kids of any age can view them.

Users can even add themselves or their friends into these clips using a feature called Cameos, which lets people upload their face or voice and have it animated into new scenes. OpenAI says Cameos are "consent-based," meaning users decide who can access their likeness and can revoke that permission at any time. The company also says it blocks depictions of public figures and applies "extra safety guardrails" to any video featuring a Cameo. Users can see every video that includes their image and delete or report it directly from the app.

Still, Jordan cautions that these protections may not be enough. "Once your likeness is out there, you lose control over how it's used," she said. "Someone could take your child's face or voice to create a fake video about them. That can lead to bullying, humiliation, or worse. And when kids see so many hyper-realistic videos online, it becomes harder for them to tell what's true, which can really affect self-esteem and trust."

Dobuski said that while OpenAI has included several safeguards, enforcement has been spotty.
"When Sora first launched, people were making videos of copyrighted characters like SpongeBob and Pikachu, even fake clips of OpenAI's CEO doing illegal things," he said. "So, it appears a lot of those restrictions are easy to get around." When reached for comment about Sora's safety features, OpenAI pointed ABC News to its website, where its emphasized that Sora 2 and the Sora app were "built with safety from the start." Every AI-generated Sora video includes both visible and invisible provenance signals, a visible watermark, and C2PA metadata, an industry-standard digital signature that identifies AI-generated content, it said. The company also maintains internal reverse-image and audio search tools designed to trace videos back to Sora with high accuracy. OpenAI says it has implemented teen-specific safeguards, including limits on mature content, restrictions on messaging between adults and teens, and feed filters designed to make content appropriate for younger users. Parents can also manage whether teens can send or receive direct messages and limit continuous scrolling. While those measures sound promising, Jordan says parents should be cautious. "Even with filters and watermarks, Sora can generate disturbing or inappropriate content," she said. "And because kids can unlink their accounts from their parents' supervision at any time, those safeguards aren't foolproof." Dobuski agrees that the larger issue isn't just content moderation, it's speed. "Given how easy these videos are to make," he said, "they can spread online before any platform is able to meaningfully crack down." For now, there's no federal law governing AI-generated video content. Some states, including California, have proposed laws requiring AI videos to include clear labeling and banning the creation of "non-consensual intimate imagery" or child sexual-abuse material. But enforcement varies, and many experts say it's not enough. "The Silicon Valley ethos of 'move fast and break things' is still alive," Dobuski said. "Companies are racing to dominate the AI market before regulations are in place, and that strategy isn't without risk, especially when it comes to kids." Jordan recommends parents take a hands-on approach. She suggests starting by explaining what Sora is and why it matters. "Tell your kids, 'What you see online might be fake -- always question it,'" she said. "Teach them not to upload their face or voice anywhere and to come to you if they see something that makes them uncomfortable." Families should review new apps together, establish rules about sharing personal media, and keep devices in shared spaces, Jordan said, recommending parents also discuss with their kids the wider influence of AI-generated media. "Even if your child doesn't use Sora directly, they're going to see its content," she said. "That means you have to talk not just about what they make, but what they're consuming."
OpenAI's Sora app enables users to create realistic AI-generated videos from text prompts, sparking concerns about misinformation, deepfakes, and child safety as experts struggle to develop effective detection methods.
OpenAI's Sora app has emerged as a groundbreaking yet controversial addition to the generative AI landscape, enabling users to create remarkably realistic videos from simple text prompts. The technology represents a significant leap forward in AI-generated content, producing high-resolution videos with synchronized audio that rival traditional video production methods [1].
Unlike previous AI video generators, Sora's output quality has reached a level where distinguishing between real and artificial content has become increasingly challenging for average users. The app functions as a text-to-video platform: users input prompts like "a man riding a bike through the park at sunset" and receive lifelike footage that appears authentically filmed [2].

Sora is currently available through OpenAI's platform and iOS app, with access included for ChatGPT Plus and Pro subscribers. The service also offers a free introductory tier that allows users to create a limited number of short, lower-resolution AI-generated videos each month [2].
One of Sora's most notable features is "Cameos," which enables users to upload their face or voice and have it animated into new scenes. This consent-based system allows users to control who can access their likeness and revoke permission at any time, though experts question the effectiveness of these protections [2].

Common Sense Media, a nonprofit monitoring children's digital well-being, assigned Sora an "Unacceptable Risk" rating for use by kids and teens in its October safety report. The organization cited the app's "relative lack of safety features" and potential for misuse of AI-generated video content [2].

Titania Jordan, chief parent officer at Bark Technologies, emphasized that Sora's capabilities extend far beyond simple photo filters or animation tools. "It can create mind-blowingly realistic videos that can fool even the most tech-savvy among us," Jordan explained, highlighting how the technology blurs the line between reality and fiction in unprecedented ways [2].
OpenAI has implemented several measures to help identify AI-generated content. Every video created through the Sora iOS app includes a visible watermark featuring the white Sora logo that bounces around the video's edges, similar to TikTok's watermarking system [1].

More sophisticated detection involves checking C2PA metadata through the Content Authenticity Initiative's verification tool. As a member of the Coalition for Content Provenance and Authenticity, OpenAI ensures Sora videos include digital signatures that identify their AI origins. Users can verify content by uploading files to verify.contentauthenticity.org, where Sora-generated videos will display as "issued by OpenAI" with AI-generation confirmation [1].
Despite these safeguards, experts warn that protection mechanisms remain insufficient. Watermarks can be removed using specialized applications, and enforcement of content restrictions has proven inconsistent. Technology reporter Michael Dobuski noted that when Sora first launched, users successfully created videos featuring copyrighted characters and even fake clips of OpenAI's CEO engaging in illegal activities [2].

The app's age restrictions require users to be at least 13 years old, with parental permission needed for those under 18. However, experts point out that these protections are minimal, and Sora-generated content can easily be shared on other platforms where children of any age can view it [2].
Summarized by Navi