3 Sources
[1]
What Is Sora? Everything You Need to Know About OpenAI's Video Generator
Barbara is a tech writer specializing in AI and emerging technologies. With a background as a systems librarian and in software development, she brings a unique perspective to her reporting. Having lived in the USA and Ireland, Barbara now resides in Croatia. She covers the latest in artificial intelligence and tech innovations. Her work draws on years of experience in tech and other fields, blending technical know-how with a passion for how technology shapes our world.

If you've been anywhere near social media over the past few weeks, you've seen a wave of AI-generated videos floating around, racking up millions of views. Many of them are produced in Sora, ChatGPT's sister AI tool.

Sora is a generative video model developed by OpenAI that transforms text descriptions, images or video inputs into short video clips. The tool lets you type something like "a plastic bag floating around the air, carried by the wind" and receive a matching video clip. OpenAI first revealed Sora in early 2024 and made it available to ChatGPT Plus and Pro subscribers in December 2024. The model builds on OpenAI's earlier text-to-image systems, such as Dall-E, but uses new architectures designed for more natural motion and visual consistency.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Don't confuse OpenAI's Sora video-generation desktop tool with the new social iOS and Android app of the same name, or with the unrelated Sora reading app. The social app runs on Sora 2, while the desktop version can use either the original model or Sora 2, depending on the region.

Sora is a diffusion model: it starts video creation with a screen of static noise and gradually removes it until shapes, textures and motion form a coherent scene that matches the text prompt. The Sora 2 model, released on Sept. 30, also supports synchronized dialogue and sound effects, while earlier versions produced only silent clips.

Sora breaks images and frames into small chunks of data called patches, which help it understand motion, texture and detail across different formats and lengths. These patches function similarly to tokens in language models, which break text down into smaller units, such as words or punctuation, allowing the AI to process and generate output. You can upload text, still images and short video clips as starting points, and set the length between 5 and 20 seconds at resolutions from 480p to 1080p in the current public version.

Beyond understanding what the prompt describes, Sora also models how those elements behave and interact in the real world. Older models had trouble simulating those actions: a video of someone eating a cookie, for example, might omit the bite mark. Sora now simulates those cause-and-effect details more accurately. Even so, OpenAI acknowledges that Sora 2 "still makes certain mistakes," despite being "better about obeying the laws of physics compared to prior systems." For detailed instructions on how to use Sora to create an AI video, read our guide next.

In its effort to build a closer relationship with professional creators, Sora has introduced features previously reserved for advanced video tools. The new storyboarding option, available to Plus and Pro users on the desktop, allows creators to outline scenes before generating videos, much like filmmakers plan shots. Until now, most Sora clips have been short and casual, but updates such as storyboarding, longer runtimes and higher resolutions suggest that OpenAI aims to make the platform suitable for more polished, professional work.

Some artists, like Arvida Byström, have used AI imagery in imaginative ways, expanding the possibilities creatively.
When the AI tool distorts a body -- say, by adding an extra limb or reshaping it in strange ways -- Byström treats it as part of the art rather than a mistake. She leaves room for the model's interpretation, finding beauty in those accidents and in the unfamiliar forms that emerge from "AI misunderstanding the body."

But for most people, it's about convenience, not artistry. Generative AI becomes a shortcut for churning out quick, trend-driven content made purely for entertainment, with little to no lasting value -- what's come to be called AI slop. "Best case scenario, people just ignore it," says Nathaniel Fast, director of USC Marshall's Neely Center for Ethical Leadership and Decision Making. "Second best case scenario, it ends up being a big distraction ... at worst, it will really erode our sense of trust and our ability to understand what's real."

Byström echoes that concern about the difficulty of telling real from fake. "Maybe one good thing is that we'll finally start questioning what we see," she says. "The visual has always been powerful, but when it becomes so easy to fake, people might return to more trusted sources."

OpenAI has split access to Sora into two components: a desktop web tool designed for professional use and a mobile app intended primarily for social video creation and sharing. If you want high-quality, long-form content creation, the web interface is your best bet, offering advanced features like storyboarding and longer video durations. The free Sora apps on iOS and Android started as invite-only; since late October, people in the US, Canada, Japan and South Korea have been able to log in without a code, and the company intends to expand access to additional countries. The mobile app focuses heavily on creating, remixing and sharing short-form video clips, resembling TikTok in its social-first experience.

The cost to use Sora is integrated into the existing ChatGPT subscription plans.
If you have a free ChatGPT account, you receive a limited daily allowance of around 30 Sora generations as a teaser. Core Sora functionality is available to ChatGPT Plus subscribers for $20 per month, which grants a generous daily allowance of video generations. For professionals needing better output, the Pro subscription costs $200 per month and unlocks superior features, including higher-resolution videos, the longest durations and the ability to download creations without a watermark. As demand skyrocketed, OpenAI introduced a pay-as-you-go option for anyone who hits their daily limit, letting you purchase small bundles of extra video generations for around $4 per pack of 10.

With Sora, OpenAI transitioned from image generation to video, extending the disruption that image models have already brought to the graphics and illustration industries. Video creation, which once required large teams or specialized software, can now be done from a prompt on your phone. This could alter the economics of film, entertainment and media production, as well as the level of trust people place in what they see. When manipulated video spreads misinformation or impersonates public figures, it's a problem we shouldn't ignore.

OpenAI's Likeness Misuse filter is designed to stop you from generating videos that depict real people without consent. If someone tries to prompt Sora with a celebrity name or another recognizable individual, the system either blocks the request or returns an error message. Sora 2 also introduced a Cameo feature that lets you upload your own likeness to create an AI version of yourself and control how it's used: you decide who can include your cameo in videos, and you can remove access or delete clips that feature you at any time.

Soon after launch, celebrity video platform Cameo filed a lawsuit against OpenAI, alleging the feature could create brand confusion and mislead the public into thinking it was associated with or endorsed by the company.
Initially, Sora 2 used an opt-out policy for copyrighted characters, meaning rights holders had to request exclusion if they didn't want their material used. In response to backlash, however, OpenAI announced it's giving rights holders "more granular control," moving closer to an opt-in model in which content creators must grant permission rather than exclude content after the fact.

William Schultz, a partner at Merchant and Gould focusing on internet law and emerging technology, tells CNET that while Sora's safeguards are improving, they're still imperfect. You can sometimes work around likeness filters, and the system occasionally flags harmless content. He says it ultimately "comes down to transparency and responsible use."

"Companies that are relying on AI systems to generate ads and content may not have the ability to obtain a copyright registration, which is required to enforce a copyright," he says, adding that a potential solution could be to "add human-generated content to the output."

Aside from legal concerns, there are also ethical ones. "I would like to see OpenAI put out products that are aimed at serving, like either solving problems or helping us meet these aspirational goals that we have of making ourselves better. It's hard for me to understand what Sora 2 is doing other than just trying to make money," Fast tells CNET.

If video generation becomes widespread, the economics of creation, distribution and authenticity will change dramatically. This signals a pivot in generative AI from the silly images of its early days to motion pictures in the near future. For some creators, that means new potential. For everyone else, it means new caution. Fast says that new tools are always exciting and unlock new potential, but warns that "the overall mission is to shift the paradigm in the tech ecosystem away from a profit-first-purpose-later kind of mentality to a purpose-first AI mentality."
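The article above likens Sora's patches to tokens in a language model. As a rough illustration of that idea only -- nothing here reflects Sora's actual internals, and `patchify` is a hypothetical helper -- here is how a single video frame could be split into fixed-size patches with NumPy:

```python
import numpy as np

def patchify(frame: np.ndarray, patch_size: int = 16) -> np.ndarray:
    """Split an H x W x C frame into non-overlapping square patches.

    Returns an array of shape (num_patches, patch_size, patch_size, C),
    loosely analogous to a sequence of tokens in a language model.
    """
    h, w, c = frame.shape
    # Crop to a multiple of the patch size so patches tile evenly.
    h, w = h - h % patch_size, w - w % patch_size
    frame = frame[:h, :w]
    return (
        frame.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
        .transpose(0, 2, 1, 3, 4)   # group the two patch-grid axes together
        .reshape(-1, patch_size, patch_size, c)
    )

# A 480 x 848 frame split into 16 x 16 patches yields 30 * 53 = 1590 patches.
frame = np.zeros((480, 848, 3), dtype=np.uint8)
print(patchify(frame).shape)  # (1590, 16, 16, 3)
```

A video model operates on many such patch sequences across frames at once, which is what lets it track motion and texture over time; this sketch only shows the spatial slicing step.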
[2]
Real or AI? It's Harder Than Ever to Spot AI Videos. These Tips Can Help
We've all been advised not to believe everything we see on the internet, and that's never been more true in the age of generative AI. AI-generated videos are everywhere, from deepfakes of celebrities and false disaster broadcasts to viral videos of bunnies on a trampoline. Sora, the AI video generator from ChatGPT's parent company, OpenAI, has only made it more difficult to separate truth from fiction. And Sora 2, the model behind OpenAI's brand-new social media app, is becoming more sophisticated by the day.

In the last few months, the TikTok-like app has gone viral, with AI enthusiasts determined to hunt down invite codes. But Sora isn't like any other social media platform: everything you see on it is fake, and all the videos are AI-generated. I described it as an AI deepfake fever dream, innocuous at first glance, with dangerous risks lurking just beneath the surface.

From a technical standpoint, Sora videos are impressive compared with competitors such as Midjourney's V1 and Google's Veo 3. They have high resolution, synchronized audio and surprising creativity. Sora's most popular feature, dubbed "cameo," lets you use other people's likenesses and insert them into nearly any AI-generated scene. It's an impressive tool, resulting in scarily realistic videos.

That's why so many experts are concerned about Sora. The app makes it easier for anyone to create dangerous deepfakes, spread misinformation and blur the line between what's real and what's not. Public figures and celebrities are especially vulnerable to these deepfakes, and unions like SAG-AFTRA have pushed OpenAI to strengthen its guardrails. Identifying AI content is an ongoing challenge for tech companies, social media platforms and everyone else. But it's not totally hopeless. Here are some things to look for to determine whether a video was made with Sora.
Every video made on the Sora iOS app includes a watermark when you download it: the white Sora logo -- a cloud icon -- that bounces around the edges of the video, similar to the way TikTok videos are watermarked. Watermarking is one of the biggest ways AI companies can visually help us spot AI-generated content. Google's Gemini "nano banana" model, for example, automatically watermarks its images.

Watermarks are great because they serve as a clear sign that the content was made with the help of AI. But they aren't perfect. If a watermark is static (not moving), it can easily be cropped out. Even for moving watermarks like Sora's, there are apps designed specifically to remove them, so watermarks alone can't be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, before Sora, there wasn't a popular, easily accessible, no-skill-needed way to make those videos. But his argument raises a valid point about the need to rely on other methods to verify authenticity.

I know, you're probably thinking there's no way you're going to check a video's metadata to determine if it's real. I understand where you're coming from; it's an extra step, and you might not know where to start. But it's a great way to determine whether a video was made with Sora, and it's easier to do than you think.

Metadata is a collection of information automatically attached to a piece of content when it's created. It gives you more insight into how an image or video was made: the type of camera used to take a photo, the location, the date and time a video was captured, and the filename. Every photo and video has metadata, whether it was created by a human or an AI. And a lot of AI-created content carries content credentials that denote its AI origins, too.
OpenAI is part of the Coalition for Content Provenance and Authenticity (C2PA), which means Sora videos include C2PA metadata. You can use the Content Authenticity Initiative's verification tool to check a video, image or document's metadata. (The Content Authenticity Initiative is part of C2PA.) Here's how:

1. Navigate to https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Check the information in the right-side panel. If the file is AI-generated, that should appear in the content summary section.

When you run a Sora video through this tool, it'll say the video was "issued by OpenAI" and will note that it's AI-generated. All Sora videos should contain these credentials, which allow you to confirm they were created with Sora.

This tool, like all AI detectors, isn't perfect. There are a lot of ways AI videos can avoid detection. Non-Sora videos may not contain the necessary signals in their metadata for the tool to determine whether they're AI-created; AI videos made with Midjourney, for example, don't get flagged, as I confirmed in my testing. And even if a video was created with Sora, running it through a third-party app (such as a watermark remover) and redownloading it makes it less likely the tool will flag it as AI.

If you're on one of Meta's social media platforms, like Instagram or Facebook, you may get a little help determining whether something is AI. Meta has internal systems in place to help flag AI content and label it as such. These systems aren't perfect, but you can clearly see the label for posts that have been flagged. TikTok and YouTube have similar policies for labeling AI content.

The only truly reliable way to know if something is AI-generated is if the creator discloses it. Many social media platforms now offer settings that let users label their posts as AI-generated.
Even a simple credit or disclosure in your caption can go a long way toward helping everyone understand how something was created. You know while you're scrolling Sora that nothing is real. But once AI-generated videos leave the app and get shared elsewhere, it's our collective responsibility to disclose how they were made. As AI models like Sora continue to blur the line between reality and AI, it's up to all of us to make it as clear as possible whether something is real or AI-generated.

There's no one foolproof method to tell at a glance whether a video is real or AI. The best thing you can do to avoid being duped is not to automatically, unquestioningly believe everything you see online. Follow your gut instinct: if something feels unreal, it probably is. In these unprecedented, AI-slop-filled times, your best defense is to inspect the videos you're watching more closely. Don't just glance and scroll away without thinking. Check for mangled text, disappearing objects and physics-defying motions. And don't beat yourself up if you get fooled occasionally; even experts get it wrong.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
[3]
What parents need to know about Sora, the generative AI video app blurring the line between real and fake
The app can create realistic AI-generated videos from simple text prompts.

A new generative artificial intelligence app called Sora is quickly becoming one of the most talked about and debated tech launches of the year. Created by OpenAI, the company behind ChatGPT, Sora allows users to turn written prompts into realistic AI-generated videos in seconds. But experts warn that this innovation comes with potentially significant child-safety risks, from misinformation to the misuse of kids' likenesses.

In an October safety report, Common Sense Media, a nonprofit that monitors children's digital well-being, gave Sora an "Unacceptable Risk" rating for use by kids and teens, citing its "relative lack of safety features" and the potential for misuse of AI-generated video.

"The biggest difference is that Sora is simply better than its competitors," said Michael Dobuski, a technology reporter for ABC News Audio. "Its videos look passably real, not uncanny or obviously computer-generated. That's a double-edged sword, because when something looks real, it's easier to spread misinformation or create harmful videos."

Titania Jordan, chief parent officer at Bark Technologies, which makes a parental-control and online-safety app, told ABC News that Sora's capabilities go far beyond a photo filter or animation tool. "It can create mind-blowingly realistic videos that can fool even the most tech-savvy among us," she explained. "The most important thing for parents to understand is that it can create scenes that look 100% real, but are completely fake. It blurs the line between reality and fiction in a way we've never seen before."

Sora is what's known as a text-to-video platform. Users type a prompt -- for example, "a man riding a bike through the park at sunset" -- and the app generates a lifelike AI video that appears to have been filmed in the real world.
The app is currently available to the public through OpenAI's platform and iOS app, with initial access included for ChatGPT Plus and Pro subscribers. There's also a free introductory tier, which allows users to create a limited number of short, lower-resolution AI-generated videos each month.

OpenAI's terms of use require users to be at least 13 years old, with anyone under 18 needing parental permission. However, experts say the app's teen protections are minimal and that videos created on Sora can easily be shared on other platforms like TikTok or YouTube, where kids of any age can view them.

Users can even add themselves or their friends into these clips using a feature called Cameos, which lets people upload their face or voice and have it animated into new scenes. OpenAI says Cameos are "consent-based," meaning users decide who can access their likeness and can revoke that permission at any time. The company also says it blocks depictions of public figures and applies "extra safety guardrails" to any video featuring a Cameo. Users can see every video that includes their image and delete or report it directly from the app.

Still, Jordan cautions that these protections may not be enough. "Once your likeness is out there, you lose control over how it's used," she said. "Someone could take your child's face or voice to create a fake video about them. That can lead to bullying, humiliation, or worse. And when kids see so many hyper-realistic videos online, it becomes harder for them to tell what's true, which can really affect self-esteem and trust."

Dobuski said that while OpenAI has included several safeguards, enforcement has been spotty. "When Sora first launched, people were making videos of copyrighted characters like SpongeBob and Pikachu, even fake clips of OpenAI's CEO doing illegal things," he said. "So, it appears a lot of those restrictions are easy to get around."
When reached for comment about Sora's safety features, OpenAI pointed ABC News to its website, where it emphasized that Sora 2 and the Sora app were "built with safety from the start." Every AI-generated Sora video includes both visible and invisible provenance signals: a visible watermark and C2PA metadata, an industry-standard digital signature that identifies AI-generated content, it said. The company also maintains internal reverse-image and audio search tools designed to trace videos back to Sora with high accuracy.

OpenAI says it has implemented teen-specific safeguards, including limits on mature content, restrictions on messaging between adults and teens, and feed filters designed to make content appropriate for younger users. Parents can also manage whether teens can send or receive direct messages and limit continuous scrolling.

While those measures sound promising, Jordan says parents should be cautious. "Even with filters and watermarks, Sora can generate disturbing or inappropriate content," she said. "And because kids can unlink their accounts from their parents' supervision at any time, those safeguards aren't foolproof."

Dobuski agrees that the larger issue isn't just content moderation; it's speed. "Given how easy these videos are to make," he said, "they can spread online before any platform is able to meaningfully crack down."

For now, there's no federal law governing AI-generated video content. Some states, including California, have proposed laws requiring AI videos to include clear labeling and banning the creation of "non-consensual intimate imagery" or child sexual-abuse material. But enforcement varies, and many experts say it's not enough. "The Silicon Valley ethos of 'move fast and break things' is still alive," Dobuski said. "Companies are racing to dominate the AI market before regulations are in place, and that strategy isn't without risk, especially when it comes to kids."

Jordan recommends parents take a hands-on approach.
She suggests starting by explaining what Sora is and why it matters. "Tell your kids, 'What you see online might be fake -- always question it,'" she said. "Teach them not to upload their face or voice anywhere and to come to you if they see something that makes them uncomfortable."

Families should review new apps together, establish rules about sharing personal media, and keep devices in shared spaces, Jordan said, recommending parents also discuss the wider influence of AI-generated media with their kids. "Even if your child doesn't use Sora directly, they're going to see its content," she said. "That means you have to talk not just about what they make, but about what they're consuming."
OpenAI's Sora AI video generator transforms text prompts into realistic video clips, sparking concerns about misinformation, deepfakes, and child safety as the technology makes it increasingly difficult to distinguish between real and AI-generated content.
OpenAI's Sora has emerged as a groundbreaking AI video generator that transforms simple text descriptions into remarkably realistic video clips. The tool, available to ChatGPT Plus and Pro subscribers since December 2024, represents a significant leap forward in generative AI technology [1]. Users can input prompts like "a plastic bag floating around the air, carried by the wind" and receive matching video content within seconds.

The technology operates as a diffusion model, beginning with static noise and gradually removing it until coherent scenes emerge that match the text prompt. Sora breaks images and frames into small data chunks called patches, similar to how language models process text into tokens [1]. The latest Sora 2 model, released in September 2025, supports synchronized dialogue and sound effects, while maintaining video lengths between 5 and 20 seconds at resolutions from 480p to 1080p.

What sets Sora apart from competitors like Midjourney's V1 and Google's Veo 3 is its impressive technical capabilities and surprising creativity. The platform now includes advanced features previously reserved for professional video tools, including a storyboarding option that allows creators to outline scenes before generating videos [1].

Sora's "Cameos" feature represents one of its most controversial capabilities, allowing users to upload faces or voices and animate them into new scenes. While OpenAI describes these as "consent-based," with users controlling who can access their likeness, experts remain concerned about potential misuse [3].

The platform's sophisticated output has raised significant concerns among safety experts and child advocacy groups. Common Sense Media awarded Sora an "Unacceptable Risk" rating for use by children and teens, citing its "relative lack of safety features" and potential for misuse [3].

"The biggest difference is that Sora is simply better than its competitors," explained Michael Dobuski, a technology reporter for ABC News Audio. "Its videos look passably real, not uncanny or obviously computer-generated. That's a double-edged sword, because when something looks real, it's easier to spread misinformation or create harmful videos" [3].

As AI-generated videos become increasingly sophisticated, identifying fake content has become more challenging. Sora videos include several detection mechanisms, including a bouncing white cloud watermark and C2PA metadata that identifies AI-generated content [2]. Users can verify content authenticity through the Content Authenticity Initiative's verification tool at verify.contentauthenticity.org.

However, these safeguards have proven insufficient. Watermarks can be removed using specialized apps, and enforcement of content restrictions has been inconsistent. When Sora first launched, users successfully created videos featuring copyrighted characters and even fake clips of OpenAI's CEO engaging in illegal activities [3].

The proliferation of AI-generated content through platforms like Sora raises fundamental questions about truth and authenticity in digital media. Nathaniel Fast, director of USC Marshall's Neely Center for Ethical Leadership and Decision Making, warns of potential consequences: "Best case scenario, people just ignore it. Second best case scenario, it ends up being a big distraction ... at worst, it will really erode our sense of trust and our ability to understand what's real" [1].

Summarized by Navi
[1] Business and Economy
[2] Technology
[3] Policy and Regulation