4 Sources
[1]
ByteDance's new AI video generation model, Dreamina Seedance 2.0, comes to CapCut | TechCrunch
OpenAI may be dialing back its efforts in the video generation market with the shutdown of its Sora app, but ByteDance on Thursday confirmed that its new audio and video model, Dreamina Seedance 2.0, is now rolling out in its editing platform, CapCut. ByteDance says the model allows creators to draft, edit, and sync video and audio content using prompts, images, or reference videos.

The phased rollout will begin with CapCut users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam, with more markets added over time. The news of the launch in CapCut follows a recent report that said the model's global rollout would be paused while ByteDance worked to address intellectual property issues that drew criticism from Hollywood over alleged copyright infringement. That likely explains the limited number of markets where the model is currently available within CapCut. In China, the model is available to users of ByteDance's Jianying app.

The video generation model works without reference images, even if the creator uses only a few words to describe the scene they have in mind, ByteDance says in its announcement. The model is also good at rendering realistic textures, movement, and lighting across a range of visual perspectives and angles, which the company notes could be used to edit, enhance, or correct creators' own footage. Another use case would be allowing creators to test potential ideas based on early concepts or sketches before filming the real video. In addition, Dreamina Seedance 2.0 can be used for a wide range of content, including cooking recipes, fitness tutorials, business or product overviews, and videos with motion- or action-focused content, an area where AI video models have historically faced challenges, the company explains. At launch, the model supports clips of up to 15 seconds across six aspect ratios.
In CapCut, the model will roll out across different areas, including editing features such as AI Video and generation tools like Video Studio. It will also come to ByteDance's AI generation platform, Dreamina, and its marketing platform, Pippit.

Given its ability to create realistic content, ByteDance says it has added safety restrictions: the model won't be able to make videos from images or videos that contain real faces, and CapCut will block the unauthorized generation of intellectual property. (If those restrictions were fully effective, however, the model would presumably already be available in the United States; more adjustments are likely still underway.) The content produced by Dreamina Seedance 2.0 will also include an invisible watermark, which will help identify content made with the model when it's shared off-platform, ByteDance added. This could aid takedown requests from rights holders in the event that the model lets copyrighted content through. ByteDance says it will partner with experts and creative communities as the model rolls out to iterate on and improve its capabilities.
[2]
ByteDance adds watermarking and IP guardrails to Seedance 2.0 ahead of global rollout
Six weeks ago, a video of Tom Cruise fighting Brad Pitt on a rooftop went viral. It was, of course, not real. It was generated by Seedance 2.0, ByteDance's AI video model, and it set off a firestorm that drew cease-and-desist letters from six major Hollywood studios, a formal denunciation from the Motion Picture Association, and a pointed rebuke from SAG-AFTRA over the unauthorised use of its members' likenesses. Rhett Reese, the screenwriter behind the Deadpool films, watched the clip and offered a blunt assessment of the technology's implications for his profession.

Now ByteDance is attempting something delicate: relaunching the very tool that provoked that backlash, but with enough safeguards to make the case that it has heard the criticism. On Wednesday, the TikTok parent company said its global safety and intellectual property teams had worked with a third-party red-teaming partner to bolster Seedance 2.0 ahead of its international release through CapCut, ByteDance's video editing platform, which reports more than 400 million monthly active users.

The new safeguards are substantive, at least on paper. Seedance 2.0 now blocks video generation from images or videos containing real faces, a direct response to the deepfake controversy that engulfed the model in February. CapCut will also block the unauthorised generation of copyrighted characters, addressing the parade of AI-rendered Shreks, SpongeBobs, Darth Vaders, and Deadpools that the MPA had cited in its complaint. On the transparency front, all output will carry both visible watermarks and embedded C2PA Content Credentials, the industry-standard protocol for identifying AI-generated content across platforms. ByteDance is also introducing what it calls an "advanced invisible watermarking" technology designed to identify content made with the model even after it has been shared or altered off-platform, and the company says it will conduct proactive monitoring for IP violations.
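ByteDance has not disclosed how its invisible watermarking works, and production systems use robust frequency-domain or learned marks that survive re-encoding. As a purely illustrative sketch of the general idea, the toy Python example below hides a short identifier in the least significant bits of raw pixel bytes; the `embed_id`/`extract_id` helpers and the `model_id` value are hypothetical, not ByteDance's scheme:

```python
# Toy least-significant-bit (LSB) watermark: hide a short integer ID in
# the low bit of the first N pixel bytes. Illustrative only -- a real
# invisible watermark must survive compression, cropping, and re-encoding,
# which plain LSB embedding does not.

def embed_id(pixels: bytes, model_id: int, bits: int = 32) -> bytes:
    """Write `bits` bits of `model_id` into the LSBs of the first `bits` bytes."""
    out = bytearray(pixels)
    for i in range(bits):
        bit = (model_id >> i) & 1
        out[i] = (out[i] & 0xFE) | bit  # clear the low bit, then set it
    return bytes(out)

def extract_id(pixels: bytes, bits: int = 32) -> int:
    """Recover the embedded ID by reading back the LSBs."""
    value = 0
    for i in range(bits):
        value |= (pixels[i] & 1) << i
    return value

frame = bytes(range(256)) * 4                # stand-in for raw frame data
marked = embed_id(frame, model_id=0x5EEDA2)  # hypothetical model identifier
assert extract_id(marked) == 0x5EEDA2        # ID survives in the marked frame
```

Because only the lowest bit of each affected byte changes, the marked frame is visually indistinguishable from the original, which is the core trade-off any invisible watermark makes between imperceptibility and robustness.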
The rollout itself reflects a calculated caution. CapCut will initially make Seedance 2.0 available to paid users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam. Conspicuously absent from the list are the United States and India, ByteDance's two most complex regulatory markets. Europe, Africa, South America, and Southeast Asia are expected to follow, according to the company, though no firm timeline has been offered for the US.

The timing of the relaunch is notable. Just days earlier, OpenAI announced it was shutting down Sora, its own AI video generation tool, after downloads fell 45 per cent by January and a licensing deal with Disney collapsed. Where OpenAI retreated, ByteDance is advancing, though into a market now acutely sensitised to the regulatory questions that AI-generated content raises. The EU AI Act's transparency requirements, which take effect in August 2026, will mandate that providers of generative AI systems mark their output in machine-readable formats and disclose the artificial origin of deepfakes. ByteDance's adoption of C2PA watermarking and invisible marking appears to anticipate these obligations, though whether its safeguards will satisfy European regulators remains to be seen.

Red-teaming reports suggest the guardrails are not impenetrable. According to testing documented by industry observers, creative prompting can still bypass the filters to produce what have been described as "likeness-adjacent" characters, content that evokes a real person or copyrighted figure without technically reproducing them. It is a familiar challenge in AI governance: the gap between what a policy forbids and what a model can be coaxed into producing. ByteDance's vertical integration gives it a unique position in this contest. It builds the AI model, owns the editing platform where it is deployed, and controls TikTok, the dominant short-form video distribution channel.
That control means it can, in theory, enforce IP protections across the entire pipeline from generation to distribution. Whether it will do so with sufficient rigour to satisfy Hollywood and its lawyers is another matter entirely.

The AI boom of 2025 produced a generation of tools that could generate text, images, and code at scale. Video was always the next frontier, and the hardest to govern. ByteDance's bet is that it can be the company to commercialise AI video generation globally without drowning in litigation. The safeguards it has added to Seedance 2.0 are a necessary first step. Whether they are sufficient is a question that Hollywood, regulators, and policymakers across multiple jurisdictions will be answering for months to come.
[3]
ByteDance quietly rolls out Seedance 2.0 globally
Beijing (AFP) - Chinese artificial intelligence powerhouse and TikTok owner ByteDance has quietly rolled out its latest video generator Seedance 2.0 worldwide, while its US rival OpenAI called time on a similar product. The Seedance 2.0 model was launched in China last month, both stunning and spooking the entertainment industry with its ability to produce near-Hollywood-quality clips from simple text prompts. However, it has also sparked concerns over copyright infringement.

"We have further expanded Dreamina Seedance 2.0 in more markets in CapCut today, across Africa, South America, the Middle East and Southeast Asia, with more regions coming soon," CapCut, ByteDance's popular video editing tool, posted on X on Thursday. It said the Seedance 2.0 model would initially be available to some paid users. The rollout includes "firm safeguards" to prevent violations of its safety policies, including the unauthorised use of individuals' likenesses or intellectual property, CapCut said.

Major Hollywood production studios including Disney, Paramount, Warner Bros and Netflix have threatened legal action against Beijing-based ByteDance over accusations of copyright infringement. Reports this month suggested that backlash had prompted ByteDance to pause Seedance 2.0's global launch. It was not immediately clear if ByteDance had resolved those legal issues. The United States is not among the current rollout markets.

ByteDance, which runs the popular short video platforms TikTok and Douyin, has invested heavily in AI in recent years against a backdrop of increasing global regulatory scrutiny of such platforms. ByteDance announced on Friday the sale of Moonton, an important gaming asset, to a subsidiary of Saudi Arabia's sovereign fund for more than $6 billion. Moonton runs Mobile Legends: Bang Bang, one of Southeast Asia's most popular gaming titles.
ByteDance's move coincides with a broader shift in the AI industry towards more "agentic" tools that focus on performing practical, real-life tasks. US AI giant OpenAI said on Tuesday it was shutting down its popular consumer-facing video-generating service Sora, a move widely understood to focus more on providing business users with agentic AI capacities.
[4]
ByteDance's Dreamina Seedance 2.0 comes to CapCut
ByteDance is integrating its new audio and video generation model, Dreamina Seedance 2.0, into its CapCut editing platform. The artificial intelligence model allows users to create, edit, and synchronize video and audio content using prompts, images, or reference videos, positioning it as a direct competitor to similar technologies.

The phased rollout of Dreamina Seedance 2.0 will begin for CapCut users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam. This deployment follows a report indicating a temporary halt to the model's global release due to intellectual property concerns, which may explain the limited initial market availability. In China, the model is already accessible through ByteDance's Jianying application.

The video generation model can function without reference images, creating scenes based on text descriptions alone, ByteDance stated. The model can render realistic textures, movement, and lighting, which allows creators to edit, enhance, or correct their own footage. Creators can also use the model to test conceptual ideas before filming final videos. Dreamina Seedance 2.0 supports various content types, including cooking recipes, fitness tutorials, and product overviews, areas where previous AI video models have faced challenges. At launch, the model supports video clips up to 15 seconds in length across six aspect ratios.

The model will integrate into CapCut's AI Video and Video Studio features and will also be available on ByteDance's AI generation platform, Dreamina, and its marketing platform, Pippit. ByteDance has implemented safety restrictions to prevent the model from generating videos from images or videos containing real faces, and CapCut will block unauthorized intellectual property generation. Content produced by Dreamina Seedance 2.0 will include an invisible watermark to identify AI-generated material when shared off-platform, aiding intellectual property enforcement.
ByteDance plans to collaborate with experts and creative communities to refine the model's capabilities during its rollout.
ByteDance is rolling out its Dreamina Seedance 2.0 AI video generation model through CapCut, starting in seven markets across Asia and Latin America. The launch includes invisible watermarking and blocks on generating real faces, addressing earlier Hollywood copyright concerns. It comes as OpenAI shuts down Sora, while ByteDance presses ahead with CapCut's more than 400 million monthly users.
ByteDance confirmed Thursday that its new audio and video model, Dreamina Seedance 2.0, is now rolling out in its editing platform CapCut, marking a bold move in AI video generation just as OpenAI shuttered its competing Sora app [1]. The TikTok parent company is integrating the AI video generation model into CapCut, which reports more than 400 million monthly active users [2]. The phased global rollout begins with CapCut users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam, with more markets added over time [1]. Conspicuously absent from the initial launch are the United States and India, ByteDance's two most complex regulatory markets [2].
Source: France 24
The limited CapCut integration follows intense scrutiny after Seedance 2.0 sparked controversy six weeks ago, when a viral deepfake video of Tom Cruise fighting Brad Pitt drew cease-and-desist letters from six major Hollywood studios, a formal denunciation from the Motion Picture Association, and a pointed rebuke from SAG-AFTRA over unauthorized use of its members' likenesses [2]. Major Hollywood production studios including Disney, Paramount, Warner Bros and Netflix threatened legal action against Beijing-based ByteDance over accusations of copyright infringement. Reports this month suggested that backlash had prompted ByteDance to pause the model's global launch while it worked to address intellectual property concerns [1].

ByteDance's global safety and intellectual property teams worked with a third-party red-teaming partner to bolster Seedance 2.0 ahead of its international release [2]. The company implemented safety restrictions so the model won't generate videos from images or videos that contain real faces, a direct response to the deepfake controversy [1]. CapCut will also block unauthorized generation of copyrighted characters, addressing the parade of AI-rendered Shreks, SpongeBobs, Darth Vaders, and Deadpools that the MPA had cited in its complaint [2]. All output will carry both visible watermarks and embedded C2PA Content Credentials, the industry-standard protocol for identifying AI-generated content across platforms [2]. Content produced by Dreamina Seedance 2.0 will include an invisible watermark to identify AI-generated material when shared off-platform, aiding in intellectual property enforcement and takedown requests from rights holders [1].
The AI video generation model works without reference images, even when creators use only a few words or text prompts to describe the scene they have in mind [1]. The model can render realistic textures, movement, and lighting across a range of visual perspectives and angles, which could be used to edit, enhance, or correct creators' own footage [1]. The model supports various content types, including cooking recipes, fitness tutorials, business or product overviews, and videos with motion- or action-focused content, where AI video models have historically faced challenges [4]. At launch, the model supports clips of up to 15 seconds across six aspect ratios [1].
Source: TechCrunch
ByteDance's vertical integration gives it a unique position in this contest, as it builds the AI model, owns the video editing platform where it is deployed, and controls TikTok, the dominant short-form video distribution channel [2]. The timing proves notable as OpenAI announced it was shutting down Sora after downloads fell 45 percent by January and a licensing deal with Disney collapsed [2]. ByteDance's adoption of C2PA watermarking and invisible marking appears to anticipate the EU AI Act's transparency requirements, which take effect in August 2026 and will mandate that providers of generative AI systems mark their output in machine-readable formats and disclose the artificial origin of deepfakes [2]. Red-teaming reports suggest the safeguards are not impenetrable, as creative prompting can still bypass the filters to produce "likeness-adjacent" characters that evoke a real person or copyrighted figure without technically reproducing them [2]. Whether ByteDance's safeguards will satisfy Hollywood, regulators, and policymakers across multiple jurisdictions remains a question that will be answered for months to come [2].

Summarized by Navi