Curated by THEOUTPOST
On Wed, 11 Sept, 4:06 PM UTC
22 Sources
[1]
Adobe Firefly gets AI-generated video, and you're going to love the Sora AI rival
Miniature adorable felt monsters made in Adobe Firefly Video Model. (Image credit: Adobe) Adobe has added generative video to Firefly's skill set, and the results certainly look promising. You'll be able to use the Adobe Firefly Video Model to create AI-generated video just as easily as you currently can create AI-generated images with Firefly, and the initial videos Adobe has shared look impressive. Rather than seeing the Adobe Firefly Video Model as a way to replace human-created content, Adobe is pitching it as a way to augment your existing video content, giving you the ability to seamlessly fill in gaps in your project by generating B-roll, adding intros, or extending scenes. The ability to extend a scene is integrated into Premiere Pro. By generating extra frames, you can hold a shot for a moment or two longer and it should look as if you simply carried on filming. The same can be done with audio, so you can generate audio to help you transition from one scene to another. If you want to generate new video using the Adobe Firefly Video Model, there's a text-to-video mode: you simply enter text prompts to describe the sort of video you want to create and Firefly will produce it. This means you can enter a prompt like "Close up of a dog on a skateboard on a sunny day", and it will generate the video. You can also mention camera controls in your prompt, like angle, motion, or zoom. Another thing you can do with the Adobe Firefly Video Model is feed it images and get it to generate video from them. Used in this way, Firefly becomes one part of a larger set of apps working together to help you create content. A big advantage of the Adobe Firefly Video Model, and something that sets it apart from competitors like Sora, is that it is commercially safe. Because Firefly is trained on Adobe's own library of video and images, there should be no copyright issues, which have so far plagued many AI image and video generators. The Adobe Firefly Video Model will be coming to Creative Cloud, Experience Cloud, and Adobe Express later this year.
[2]
Adobe is entering the AI video space with new Firefly Video model
Adobe has unveiled its new artificial intelligence video generation model: Firefly Video, offering the ability to create short clips from text, image, or video. The new model will be integrated into a future version of Premiere Pro and be available as a standalone tool. This is similar to the way Adobe rolled out the Firefly AI image generator into Photoshop and as a standalone app. Whether or not it becomes one of the best AI video generators remains to be seen. Adobe worked with creative professionals and the video editing community when creating the video model and only trained it on "commercially safe" videos where it had permission. This is the same approach Adobe took with the Firefly image model. Unlike other video models, Adobe has focused on adding to human-created content, including clip extensions and alternative perspective generations. It will also be able to generate standalone clips from images or text. Firefly Video will be available later this year on the Adobe website, as well as within Adobe Premiere Pro for the generative clip extension feature. With the text-to-video model, you'll be able to use descriptive prompts the same way you can now with Runway or Luma Labs Dream Machine. From the example clips, it seems videos will be about five seconds initially, shorter than Runway Gen-3. Adobe is pitching it as a way to generate b-roll that "seamlessly fills gaps in your timeline" or to enhance real footage with effects. The model will also include camera controls like angle, motion and zoom, as we've seen recently in Dream Machine. Other use cases include creating atmospheric elements within a video, including fire, smoke and dust particles. Using AI, these could be "layered over existing content using blend modes or keying inside Adobe's tools like Premiere Pro and After Effects." Adobe says the image-to-video model will allow users to take an existing image and make it move. This includes animating text or objects. Adobe plans to embed generative AI, through the Firefly Video model, into Adobe Premiere Pro and do for video what it did for images in Adobe Photoshop. It will let users turn to AI to extend clips to cover gaps in footage, smooth out transitions and even hold a shot longer than you managed to film, for smoother and "perfectly timed" edits. "Building upon our foundational Firefly models for imaging, design and vector creation, our Firefly foundation video model is designed to help the professional video community unlock new possibilities, streamline workflows and support their creative ideation," said Ashley Still, senior VP, Creative Product Group at Adobe. Still added: "We are excited to bring new levels of creative control and efficiency to video editing with Firefly-powered Generative Extend in Premiere Pro." It is hard to tell how this will stack up against third-generation AI video tools like Kling, Runway Gen-3, Luma Labs Dream Machine and OpenAI's Sora until I get my hands on it to try it, but it has two significant features the other models don't. It is 'commercially safe' and verifiably so, and it is integrated into Adobe products.
[3]
Adobe Has Joined the AI Video Generation Party
Adobe Firefly, Adobe's generative AI, is getting video generation capabilities. Adobe just announced that the Firefly Video Model will be available for beta testing before the year ends, and shared some impressive results generated with the model. So far, Adobe Firefly (accessible through Photoshop, Lightroom, Illustrator, or the web app) has generative fill and generative remove features. It can also create images from scratch using text prompts. The Firefly Video Model is supposed to work just the same way. You can generate videos from text or image prompts, or you can use it to fill in footage gaps in Adobe Premiere Pro. Adobe also showed off the generative video dashboard, and a few video generations created with it. Other than a big preview window and a text input field (where you can also drop images), the tool also has menu options for configuring the output. You can decide the aspect ratio and the frame rate. You can make the shot a closeup, long shot, or medium shot. It also lets you choose the camera angle and how it moves. Set it to "aerial" or "top down" if you want to simulate a drone shot, and "eye level" if you want to create a more handheld feel. In the upcoming beta versions of Premiere Pro, editors will be able to drag and extend clip layers to "cover gaps in footage, smooth out transitions, or hold on shots longer for perfectly timed edits." Adobe's presentation includes a sample sequence where an original wide shot is seamlessly mixed in with an AI closeup generated from a specific frame. Most of the clips are under 3-4 seconds, and were generated in less than 2 minutes, according to Adobe. The samples show natural landscapes, fireworks, animals, CGI dolls, highly-detailed portraits, motion graphics, claymation, line art, 3D models, and custom text effects. Adobe claims you don't have to worry about copyright issues with Firefly generations. The company said, "Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use -- never on Adobe users' content." Source: Adobe
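Adobe has not published a programmatic interface for the Firefly Video Model, so the following is purely an illustrative sketch of how the dashboard controls described above (prompt, aspect ratio, frame rate, shot size, camera angle, camera motion, optional reference image) might be captured as structured settings. Every name here is an assumption made for illustration, not Adobe's API.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

# Hypothetical illustration only -- Adobe has not published a Firefly Video API.
# Field names simply mirror the dashboard controls described in the article.
@dataclass
class FireflyVideoSettings:
    prompt: str                             # text description of the desired clip
    aspect_ratio: str = "16:9"              # e.g. "16:9", "9:16", "1:1"
    frame_rate: int = 24                    # frames per second
    shot_size: str = "medium shot"          # "closeup", "medium shot", "long shot"
    camera_angle: str = "eye level"         # "aerial", "top down", "eye level"
    camera_motion: str = "static"           # e.g. "static", "handheld", "dolly zoom"
    reference_image: Optional[str] = None   # optional still image used as a prompt

# Example: the kind of drone-style establishing shot the article describes.
settings = FireflyVideoSettings(
    prompt="A rocky coastline at sunrise with waves crashing against cliffs",
    shot_size="long shot",
    camera_angle="aerial",
    camera_motion="slow forward push",
)

# A front end might serialize settings like these before handing them to a generator.
print(json.dumps(asdict(settings), indent=2))
```

The point of the sketch is simply that the controls Adobe shows in the dashboard map naturally onto a small, fixed set of parameters alongside the free-form prompt.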
[4]
Adobe Firefly Turns Text or Images Into Realistic AI-Generated Video
Adobe previewed its generative video tools earlier this year, but details were relatively scarce. Today, Adobe shared much more information, including actual videos created in its Adobe Firefly Video Model. Adobe says its Adobe Firefly Video Model is designed to "empower film editors and video professionals" with a wide variety of tools to inspire creative vision, fill gaps in a video's editing timeline, or add new elements to existing clips. "The Firefly Video Model will extend Adobe's family of generative AI models, which already includes an Image Model, Vector Model and Design Model, making Firefly the most comprehensive model offering for creative teams," says Adobe. "To date, Adobe Firefly has been used to generate over 12 billion images globally." Arriving in public beta later this year, new Firefly-powered Text to Video and Image to Video capabilities will come to Adobe Firefly on the web, and some AI features will be implemented natively in Adobe Premiere Pro, which was updated yesterday with a suite of new color grading tools. Text to Video enables users to generate video clips through simple text prompts. These prompts respond to specific camera-related text, including things like angle, motion, and zoom. With Image to Video, users can feed Firefly reference still frames to generate motion clips. Adobe published numerous AI-generated clips, all of which were created in "under two minutes" using the Adobe Firefly Video Model. "Building upon our foundational Firefly models for imaging, design and vector creation, our Firefly foundation video model is designed to help the professional video community unlock new possibilities, streamline workflows and support their creative ideation," says Ashley Still, senior vice president, Creative Product Group at Adobe. "We are excited to bring new levels of creative control and efficiency to video editing with Firefly-powered Generative Extend in Premiere Pro." Adobe notes that the camera control prompts, like angle and motion, can be combined with real video to further augment the look, flow, and feel of content without needing to reshoot something. Adobe also shared clips that it generated to augment existing real-world footage. The first clip below is original, human-captured footage, while the second was generated using Firefly. The final clip is the combined, edited footage put together into a single sequence. "Over the past several months, we've worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators' rights in mind, we're developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage," Still writes. Adobe believes video creators and editors can use Adobe's AI technology to address gaps in footage, remove unwanted objects from a scene, smooth out transitions, and create the perfect B-roll clips. As with Adobe Firefly's other tools and functions, the Firefly Video Model is designed to be commercially safe and has been trained exclusively using content Adobe has permission to use. The Adobe Firefly Video Model beta will be released later this year.
[5]
Adobe says video generation is coming to Firefly this year
Users will get their first chance to try out Adobe's AI model for video generation in just a couple of months. The company says features powered by Adobe's Firefly Video model will become available before the end of 2024 on the Premiere Pro beta app and on a free website. Adobe says three features - Generative Extend, Text to Video, and Image to Video - are currently in a private beta, but will be public soon. Generative Extend, which lets you extend any input video by two seconds, will be embedded into the Premiere Pro beta app later this year. Firefly's Text to Video and Image to Video models, which create five second videos from prompts or input images, will be available on Firefly's dedicated website later this year as well. (The time limit may increase, Adobe noted.) Adobe's software has been a favorite among creatives for decades, but generative AI tools like these may upend the very industry the company serves, for better or worse. Firefly is Adobe's answer to the recent wave of generative AI models, including OpenAI's Sora and Runway's Gen-3 Alpha. The tools have captivated audiences, making clips in minutes that would have taken hours for a human to create. However, these early attempts at tools are generally considered too unpredictable to use in professional settings. But controllability is where Adobe thinks it can set itself apart. Adobe's CTO of digital media, Ely Greenfield, tells TechCrunch there is a "huge appetite" for Firefly's AI tools where they can complement or accelerate existing workflows. For instance, Greenfield says Firefly's generative fill feature, added to Adobe Photoshop last year, is "one of the most frequently used features we've introduced in the past decade." Adobe would not disclose the price of these AI video features. For other Firefly tools, Adobe allots Creative Cloud customers a certain number of "generative credits," where one credit typically yields one generation result. More expensive plans, obviously, provide more credits. In a demo with TechCrunch, Greenfield showcased the Firefly-powered features coming later this year. Generative Extend can pick up where the original video stops, adding on an extra two seconds of footage in a relatively seamless way. The feature takes the last few frames in a scene, running them through Firefly's Video model to predict the next couple of seconds. For the scene's audio, Generative Extend will recreate background noise, such as traffic or the sounds of nature, but not people's voices or music. Greenfield says that's to comply with licensing requirements from the music industry. In one example, Greenfield showed a video clip of an astronaut looking out into space that had been modified with the feature. I was able to tell the moment it had been extended, just after an unusual lens flare appeared on screen, but the camera pan and objects in the scene stayed consistent. I could see it being useful when your scene ends a moment too soon, and you need to draw it out just a bit longer to transition or fade out. Firefly's Text to Video and Image to Video features are more familiar. They allow you to input a text or image prompt and get up to five seconds of video out. Users will be able to access these AI video generators on firefly.adobe.com, likely with rate limits (though Adobe did not specify). Adobe also says Firefly's Text to Video features are quite good at spelling words correctly, something AI video models tend to struggle with. In terms of safeguards, Adobe is erring on the side of caution to start out.
Greenfield says Firefly's video models have blocks around generating videos including nudity, drugs and alcohol. Further, he added, Adobe's video generation models are not trained on public figures, like politicians and celebrities. The same certainly can't be said for some of the competition.
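To picture the Generative Extend behavior Greenfield describes (conditioning on the last few frames of a shot and predicting roughly two more seconds of footage), here is a minimal conceptual sketch. It assumes a stand-in `predict_next_frames` function and is only an illustration of the described workflow, not Adobe's implementation.

```python
from typing import Callable, List, Sequence

Frame = bytes  # stand-in for a decoded video frame

def generative_extend(
    frames: Sequence[Frame],
    predict_next_frames: Callable[[Sequence[Frame], int], List[Frame]],
    seconds: float = 2.0,
    fps: int = 24,
    context: int = 12,
) -> List[Frame]:
    """Conceptual sketch of the extension flow described above.

    The last `context` frames are handed to a video model (stubbed here as
    `predict_next_frames`), which returns roughly `seconds * fps` new frames.
    The generated frames are appended so the shot holds a little longer.
    """
    tail = list(frames[-context:])      # last few frames used as conditioning
    n_new = int(seconds * fps)          # about two seconds of extra footage
    generated = predict_next_frames(tail, n_new)
    return list(frames) + generated

if __name__ == "__main__":
    clip = [f"frame-{i}".encode() for i in range(240)]   # 10 s of footage at 24 fps
    freeze = lambda tail, n: [tail[-1]] * n              # trivial placeholder "model"
    extended = generative_extend(clip, freeze)
    print(len(clip), "->", len(extended), "frames")      # 240 -> 288 frames
```

The placeholder model simply repeats the final frame; the real model would synthesize new, consistent frames, which is what makes the extension hard to spot in the astronaut example above.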
[6]
Adobe previews Firefly Video AI model offering high-quality generations
Adobe has announced a new addition to its suite of creative tools: the Adobe Firefly Video Model, its own foray into the increasingly competitive AI video generation space. It is built atop Adobe's existing and ever-expanding family of Firefly generative AI still-image models, which the company describes as ethically trained and commercially safe because they use only data it owns or has licensed, uploaded by contributors to its Adobe Stock service. However, as VentureBeat reported last year, some Adobe Stock creators dispute this: even though they had to agree to terms of service that allowed Adobe broad usage of their works in order to upload them to the service, they never imagined that the generative AI era would arrive and use those works to create new ones, potentially in their styles and in competition with their efforts. Adobe's Firefly Video Model will be available in beta later this year, and the company is offering early access through a waitlist -- interested parties can apply here. It supports text-to-video, image-to-video, and even video editing features all in the same model.

Impressive high-quality examples

Early examples of clips generated by Firefly Video posted to Adobe's blog show off impressive quality and adherence to text prompts, generated in less than 2 minutes. One example prompt reads: "Cinematic closeup and detailed portrait of a reindeer in a snowy forest at sunset. The lighting is cinematic and gorgeous and soft and sun-kissed, with golden backlight and dreamy bokeh and lens flares. The color grade is cinematic and magical." Another reads: "Slow-motion fiery volcanic landscape, with lava spewing out of craters. the camera flies through the lava and lava splatters onto the lens. The lighting is cinematic and moody. The color grade is cinematic, dramatic, and high-contrast." For enterprise decision makers looking to use AI to craft internal videos for employees and training or just "vibes," or external videos for customers and marketing efforts, even full advertisements -- Adobe's new commercially safe Firefly Video may be a very compelling option. After all, Adobe offers indemnification for users, that is, an agreement to defend users from infringement lawsuits and legal actions taken against them when using its AI models, though it has not yet explicitly stated whether Firefly Video will be covered. But enterprise decision makers will also need to weigh the cost of waiting for access to Adobe Firefly Video against jumping in and using one of the many other high-quality AI video generators publicly available now, such as Runway Gen-3 Alpha Turbo or Luma AI's Dream Machine.

The next stage of Firefly's advancement

Since launching Adobe Firefly in March 2023, Adobe has used the model as the basis of new AI features sprinkled throughout its widely used Creative Cloud software suite. Firefly tools are already embedded in popular applications such as Photoshop, Illustrator, and Lightroom, empowering users with features like Generative Fill, Generative Remove, and Text-to-Template. According to Adobe, over 12 billion images and vectors have been generated using Firefly since its launch, making it one of the company's most rapidly adopted innovations.
Ashley Still, an Adobe executive leading the project and author of Adobe's announcement blog post, stated that the firm "worked closely with the video editing community to advance the Firefly Video Model." "Guided by their feedback and built with creators' rights in mind, we're developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage."

More than just AI video generation

As such, Firefly Video does far more than just generate new videos from text prompts. It also offers AI-powered editing features, including removing unwanted objects and perfecting transitions. The rise of short-form video content, combined with ever-tighter production deadlines, has pushed editors and filmmakers to work across multiple disciplines. Adobe's Firefly Video Model helps address this by offering tools that enable editors to not only cut footage but also manage color correction, animation, visual effects, and more. Editors can now leverage AI to speed up these processes while maintaining high-quality output, freeing them to focus on creativity. The model also supports a broad range of creative effects, including generating b-roll, macro shots, and atmospheric elements like fire, smoke, or water that users can layer over prerecorded or animated footage. One standout feature coming to Premiere Pro later this year is Generative Extend. This function will allow editors to extend video clips, fill gaps in footage, or smooth transitions -- helping to match cuts to the pacing of audio and other visual elements. Adobe demonstrated this capability by showcasing how Generative Extend could hold on a shot longer to perfectly align with a musical crescendo.

A look ahead at Firefly Video and Adobe's plans

Adobe sees the Firefly Video Model as part of a broader push to integrate AI into creative workflows. The model is designed to handle various use cases, from generating 2D and 3D animations to creating atmospheric elements like smoke and fire. Still's blog post did not mention Adobe's earlier stated and previewed plans to add rival AI video models from other companies -- such as OpenAI's Sora and Runway's Gen-3 Alpha -- to its Premiere Pro video editor software. This makes me wonder if Adobe is rethinking the approach given the increased competition in the space. Adobe envisions that these tools will give creators more time to explore new ideas, enhance their projects, and ultimately deliver better results for their clients. By incorporating generative AI into its suite of video editing tools, Adobe is positioning Firefly Video as an essential part of the modern editor's toolkit -- empowering creators to elevate their projects with the help of cutting-edge technology.
[7]
Adobe to launch generative AI video creation tool later this year
Adobe will unveil a new generative AI-powered video creation and editing tool in a limited release later this year, the software maker said on Wednesday, as it looks to beef up its suite of applications catering to creative professionals. Dubbed Adobe Firefly Video Model, the artificial intelligence tool will be released in beta and will join the Photoshop maker's existing line of Firefly image-generating applications that allow users to produce still images, designs and vector graphics. The model will establish Adobe in the growing market for AI-based video generation tools, a space already targeted by OpenAI's Sora, Stability AI's Stable Video Diffusion and other AI video apps from smaller startups. The tool can generate a five-second clip for a single prompt and can interpret both text and image prompts, said Alexandru Costin, vice president of generative AI at Adobe. Users can also specify the required camera angle, panning, motion and zoom. "We've invested in making this model reach the level of quality and prompt understanding that videographers expect. We've invested in making sure we really pay attention to the prompt ... respecting guidance from videographers much better than other (AI video) models," Costin told Reuters in an interview. Adobe said the video model is trained on public domain or licensed content that it has permission to use, and not on any Adobe customer content. "We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters," Costin said. Adobe is also rolling out Generative Extend, a tool that will be available in its Premiere Pro video editing software, which can extend any existing clip by two seconds by generating an appropriate insert to fill gaps in the footage. First previewed in April, the tool has seen "a huge positive reaction from all of our customers", Costin said.
[8]
Adobe to launch generative AI video creation tool later this year
(Reuters) - Adobe will unveil a new generative AI-powered video creation and editing tool in a limited release later this year, the software maker said on Wednesday, as it looks to beef up its suite of applications catering to creative professionals. Dubbed Adobe Firefly Video Model, the artificial intelligence tool will be released in beta and will join the Photoshop maker's existing line of Firefly image-generating applications that allow users to produce still images, designs and vector graphics. The model will establish Adobe in the growing market for AI-based video generation tools, a space already targeted by OpenAI's Sora, Stability AI's Stable Video Diffusion and other AI video apps from smaller startups. The tool can generate a five-second clip for a single prompt and can interpret both text and image prompts, said Alexandru Costin, vice president of generative AI at Adobe. Users can also specify the required camera angle, panning, motion and zoom. "We've invested in making this model reach the level of quality and prompt understanding that videographers expect. We've invested in making sure we really pay attention to the prompt ... respecting guidance from videographers much better than other (AI video) models," Costin told Reuters in an interview. Adobe said the video model is trained on public domain or licensed content that it has permission to use, and not on any Adobe customer content. "We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters," Costin said. Adobe is also rolling out Generative Extend, a tool that will be available in its Premiere Pro video editing software, which can extend any existing clip by two seconds by generating an appropriate insert to fill gaps in the footage. First previewed in April, the tool has seen "a huge positive reaction from all of our customers", Costin said. (Reporting by Deborah Sophia in Bengaluru; Editing by Vijay Kishore)
[9]
Adobe finally gives us a glimpse of its Firefly AI video model
And it looks genuinely useful for Premiere Pro and After Effects users. When Adobe announced a Premiere Pro update earlier this week, I initially assumed it would involve text-to-video generative AI. After all, Adobe teased AI video generation in Premiere Pro earlier in the year, and most of its recent updates across Creative Cloud have focused heavily on AI. Instead, we got a new colour management system and UI improvements. But, perhaps sensing that an update on its AI video plans was due, the software giant has now, rather quietly, revealed a first glimpse of its Firefly AI video model in action. In a blog post, Adobe says Firefly AI video will come to Premiere Pro in beta this year. It says it's been working "with the video editing community" to advance the model and that, based on that feedback, it's now developing workflows that use the model to "help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage". The AI tools will aim to "take the tedium out of post-production" and give editors more time to explore ideas. We'll be able to use text prompts in Firefly Text-to-Video and use reference images to generate B-Roll footage. The demonstration shows that the Firefly Video Model will allow the use of camera controls, like angle, motion and zoom, to create the desired perspective for the generated video. How the Firefly Video Model compares to the likes of Sora and Runway remains to be seen, but Adobe says it does particularly well at generating videos of the natural world, including landscapes, plants or animals. As with many of the best AI image generators, the more detailed the prompt, the more convincing the results tend to be. The focus so far seems very much on ideation (including in claymation) and helping production teams that might be missing a key establishing shot to set the scene or complementary footage to fill a space in a timeline, rather than on using AI to generate entire films or sequences, which could be the healthiest and most useful approach for Adobe users. The model will also be able to create atmospheric elements like fire, smoke, dust particles and water against a black or green background for layering over existing content using blend modes or keying in Premiere Pro and After Effects. And the model will also allow Image-to-Video, adding motion to stills. Later in the year, Premiere Pro (beta) will get Generative Extend, allowing clips to be extended to cover gaps in footage, smooth out transitions or hold on shots longer, which sounds like a potential game-changer for editors.
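To make the "blend modes or keying" idea concrete: an element generated against black (fire, smoke, dust) can be composited over real footage with a standard screen blend, where dark pixels leave the base frame untouched and bright pixels lighten it. This is generic compositing math shown for illustration only, not Adobe's implementation.

```python
import numpy as np

def screen_blend(base: np.ndarray, overlay: np.ndarray) -> np.ndarray:
    """Standard 'screen' blend mode for float images in [0, 1], shape (H, W, 3).

    Black overlay pixels (0.0) leave the base unchanged; bright pixels lighten it,
    which is why elements rendered on a black background layer in cleanly.
    """
    return 1.0 - (1.0 - base) * (1.0 - overlay)

base = np.random.rand(4, 4, 3)           # stand-in for a frame of real footage
element = np.zeros((4, 4, 3))            # mostly-black overlay, as if shot on black
element[1:3, 1:3] = [1.0, 0.6, 0.1]      # a small bright orange "ember"
composite = screen_blend(base, element)
print(composite.shape, float(composite.max()) <= 1.0)
```

Keying works the other way around: instead of relying on darkness, it builds a transparency matte from a green or blue background and then layers the extracted element over the footage.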
[10]
Adobe Firefly Will Introduce AI-Generated Videos This Year
Adobe is diving deeper into generative AI as it announced this week that it's upgrading its AI model Firefly to include video generation capabilities. Adobe teased the new feature in a video where we can see someone inserting a lengthy prompt in a query box to create different kinds of videos. We also see a sidebar menu that lets you adjust the motion of a shot and insert a reference image in your prompt. It also showed how you can use it to create new footage to extend existing clips and fill in potential gaps. This video is all we have for now, as the feature won't be available with Firefly until later this year. You can join the waitlist here, but make sure to use the email that's associated with your Adobe account. Earlier this summer, Adobe upgraded its AI tools in Photoshop and Illustrator, introducing an AI image generator in Photoshop and a generative shape-fill tool in Illustrator. Firefly's current privacy policy states that it does not train its models on Creative Cloud subscribers' personal content; Adobe says its models are trained on licensed databases like Adobe Stock (excluding editorial content) and public domain images with expired copyright. Adobe's foray into generative AI has certainly had its bumps in the road. Adobe users were angered over a vague update to its terms of service that had them concerned the company could scan and read their content. Adobe later clarified that it can only do its content moderation on files stored on the Cloud, not users' devices. It cited generative AI's ability to be used to create illegal content as part of the motivation behind the changes. Adobe is also facing a lawsuit from the US Department of Justice that alleges its cancellation process is convoluted with hidden cancellation fees.
[11]
Adobe unveils Firefly Text-to-Video AI model, launching later this year
Adobe has announced its AI-powered Firefly Video Model, a tool designed to enhance video editing workflows in Adobe Premiere Pro and other video tools. Highlighting the importance of video as a "currency of engagement," Adobe provided a glimpse into the upcoming Firefly Video Model and the innovative professional workflows it aims to support. Adobe first launched the Firefly AI in March 2023, rapidly advancing with new models in imaging, design, and vectors. These models have been integrated into key features across Adobe's Creative Cloud and Express tools, such as Generative Fill in Photoshop, Generative Remove in Lightroom, Generative Shape Fill in Illustrator, and Text-to-Template in Express. Since its launch, the Firefly AI has been widely adopted, with users generating over 12 billion images and vectors. Based on feedback from the creative community and enterprise customers, Adobe has developed the Firefly Video Model to advance video editing capabilities. The new model is designed with creator rights in mind, providing tools to help editors explore creative ideas, fill timeline gaps, and add new elements to footage, all while ensuring content is "commercially safe" and only trained on materials Adobe has permission to use. The increasing demand for short-form video content has resulted in editors, filmmakers, and content creators needing to do more in less time. To address these needs, Adobe's Firefly Video Model incorporates AI to enhance various aspects of video production, from color correction and visual effects to animation and audio mixing. Key tasks such as removing unwanted objects, smoothing jump cuts, and finding the perfect b-roll can now be completed more efficiently, allowing editors to focus on creating a compelling narrative. Adobe's AI tools also streamline collaboration, integrating with tools like Frame.io for a smooth review and approval process. The Firefly Video Model includes a "Text-to-Video" feature, allowing users to generate B-roll and other footage using text prompts, camera controls, and reference images. Additionally, a feature called "Generative Extend" will be available in Premiere Pro (beta) later this year. This will allow users to extend clips, smooth transitions, or hold shots longer for precise edits. The Adobe Firefly Video Model will enter beta later this year. Interested users can sign up for the waitlist on Adobe's official website.
[12]
Get Ready: Adobe's AI Video Tool is Coming Soon
On Tuesday, Adobe Inc. revealed its plans to launch a new generative AI-powered video creation tool, Adobe Firefly Video Model, in a limited beta release later this year. With the new tool, the company is expanding its suite of Firefly AI-powered applications into video, aimed at making life easier for professionals in design, photography, and video. The Firefly Video Model, in particular, will enable users to generate video clips based on text or image prompts and is engineered to react to specific directions like camera angle, panning, zoom, and motion. According to an interview with Adobe's Vice President of Generative AI, Alexandru Costin, the model has been fine-tuned to the quality expected by videographers. "We've invested in making sure we pay attention to the prompts and respect the guidance from professionals much better than other AI video models on the market," said Costin.
[13]
Adobe to launch generative AI video creation tool later this year
Adobe will unveil a new generative AI-powered video creation and editing tool in a limited release later this year, the software maker said on Wednesday, as it looks to beef up its suite of applications catering to creative professionals. Dubbed Adobe Firefly Video Model, the artificial intelligence tool will be released in beta and will join the Photoshop maker's existing line of Firefly image-generating applications that allow users to produce still images, designs and vector graphics. The model will establish Adobe in the growing market for AI-based video generation tools, a space already targeted by OpenAI's Sora, Stability AI's Stable Video Diffusion and other AI video apps from smaller startups. The tool can generate a five-second clip for a single prompt and can interpret both text and image prompts, said Alexandru Costin, vice president of generative AI at Adobe. Users can also specify the required camera angle, panning, motion and zoom. "We've invested in making this model reach the level of quality and prompt understanding that videographers expect. We've invested in making sure we really pay attention to the prompt ... respecting guidance from videographers much better than other (AI video) models," Costin told Reuters in an interview. Adobe said the video model is trained on public domain or licensed content that it has permission to use, and not on any Adobe customer content. "We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters," Costin said. Adobe is also rolling out Generative Extend, a tool that will be available in its Premiere Pro video editing software, which can extend any existing clip by two seconds by generating an appropriate insert to fill gaps in the footage. First previewed in April, the tool has seen "a huge positive reaction from all of our customers", Costin said.
[14]
Adobe previews AI video tools that arrive later this year
Powerful shortcuts will let film editors create AI clips from text, still images and existing video. On Wednesday, Adobe unveiled Firefly AI video generation tools that will arrive in beta later this year. Like many things related to AI, the examples are equal parts mesmerizing and terrifying as the company slowly integrates tools built to automate much of the creative work its prized user base is paid for today. Echoing AI salesmanship found elsewhere in the tech industry, Adobe frames it all as supplementary tech that "helps take the tedium out of post-production." Adobe describes its new Firefly-powered text-to-video, Generative Extend (which will be available in Premiere Pro) and image-to-video AI tools as helping editors with tasks like "navigating gaps in footage, removing unwanted objects from a scene, smoothing jump cut transitions, and searching for the perfect b-roll." The company says the tools will give video editors "more time to explore new creative ideas, the part of the job they love." (To take Adobe at face value, you'd have to believe employers won't simply increase their output demands from editors once the industry has fully adopted these AI tools. Or pay less. Or employ fewer people. But I digress.) Firefly Text-to-Video lets you -- you guessed it -- create AI-generated videos from text prompts. But it also includes tools to control camera angle, motion and zoom. It can take a shot with gaps in its timeline and fill in the blanks. It can even use a still reference image and turn it into a convincing AI video. Adobe says its video models excel with "videos of the natural world," helping to create establishing shots or b-rolls on the fly without much of a budget. For an example of how convincing the tech appears to be, check out Adobe's examples in the promo video. Although these are samples curated by a company trying to sell you on its products, their quality is undeniable. Detailed text prompts for an establishing shot of a fiery volcano, a dog chilling in a field of wildflowers or (demonstrating it can handle the fantastical as well) miniature wool monsters having a dance party produce just that. If these results are emblematic of the tools' typical output (hardly a guarantee), then TV, film and commercial production will soon have some powerful shortcuts at its disposal -- for better or worse. Meanwhile, Adobe's example of image-to-video begins with an uploaded galaxy image. A text prompt prods it to transform it into a video that zooms out from the star system to reveal the inside of a human eye. The company's demo of Generative Extend shows a pair of people walking across a forest stream; an AI-generated segment fills in a gap in the footage. (It was convincing enough that I couldn't tell which part of the output was AI-generated.) Reuters reports that the tool will only generate five-second clips, at least at first. To Adobe's credit, it says its Firefly Video Model is designed to be commercially safe and only trains on content the company has permission to use. "We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters," Adobe's VP of Generative AI, Alexandru Costin, told Reuters. The company also stressed that it never trains on users' work. However, whether or not it puts its users out of work is another matter altogether. Adobe says its new video models will be available in beta later this year.
You can sign up for a waitlist to try them.
[15]
Adobe to launch generative AI video creation tool later this year
Dubbed Adobe Firefly Video Model, the artificial intelligence tool will be released in beta and will join the Photoshop maker's existing line of Firefly image-generating applications that allow users to produce still images, designs and vector graphics. The model will establish Adobe in the growing market for AI-based video generation tools, a space already targeted by OpenAI's Sora, Stability AI's Stable Video Diffusion and other AI video apps from smaller startups. The tool can generate a five-second clip for a single prompt and can interpret both text and image prompts, said Alexandru Costin, vice president of generative AI at Adobe. Users can also specify the required camera angle, panning, motion and zoom. "We've invested in making this model reach the level of quality and prompt understanding that videographers expect. We've invested in making sure we really pay attention to the prompt ... respecting guidance from videographers much better than other (AI video) models," Costin told Reuters in an interview. Adobe said the video model is trained on public domain or licensed content that it has permission to use, and not on any Adobe customer content. "We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters," Costin said. Adobe is also rolling out Generative Extend, a tool that will be available in its Premiere Pro video editing software, which can extend any existing clip by two seconds by generating an appropriate insert to fill gaps in the footage. First previewed in April, the tool has seen "a huge positive reaction from all of our customers", Costin said. (Reporting by Deborah Sophia in Bengaluru; Editing by Vijay Kishore)
[16]
A first look at Adobe's new text-to-video AI tools | Digital Trends
Adobe previewed its upcoming video AI tools, part of the Firefly video model the company announced in April, in a newly released YouTube post. The features (and model) are expected to arrive by the end of the year and be available on both the Premiere Pro beta app and a free website.

Adobe Firefly Video Model Coming Soon | Adobe Video

The company highlighted three new features that are currently in private beta but will be ready for public release later this year: Generative Extend, Text to Video, and Image to Video. Generative Extend will lengthen any input video by up to two seconds, while the Text and Image to Video functions allow users to generate high-definition, five-second-long clips using word and picture prompts. You can then edit and modify those videos, adjusting camera controls to change the camera angles, their motion, and the shooting distance. And unlike Grok, Firefly's guardrails will block the generation of content that includes nudity, drugs, and alcohol. Generative Extend will arrive later this year as part of Premiere Pro beta, while Text and Image to Video will be available on Firefly's website. The latter two should be available to free-tier users, though likely with usage limits. The company reportedly plans to eventually integrate the video generation features into the rest of its Creative Cloud, Experience Cloud, and Adobe Express applications. Adobe bills Firefly's image and video generations as "commercially safe," the model having been trained exclusively on licensed, public domain, and Adobe Stock content. From the teaser video above, Firefly's upcoming video capabilities appear equivalent to OpenAI's Sora (itself still unreleased), Kuaishou Technology's Kling, and Runway's Gen-3 Alpha model. It certainly looks significantly better, and far less hallucinatory, than what the bevy of free video generators currently available on the internet can produce. Midjourney is also reportedly working on a text-to-video model. CEO and Founder David Holz, in an "Office Hour" Discord session in December 2023, announced that the model would be released in "a few months," though no updates have been released since then.
[17]
Adobe's AI Video Generator Might Be as Good as OpenAI's
AI-generated videos aren't just the future: They're here, and they're scary. AI companies are rolling out tech that can produce realistic videos from simple text prompts. Adobe is just the latest, and their AI-generated videos are impressive -- even if the demos are brief.

Adobe Firefly Video Model

Firefly Video Model is a little different than others we've seen before. Most AI video generators work like AI image generators: You write out a prompt of what you'd like the model to make, then the model produces an output based on its training set. That's still happening here, as you can ask the model to produce a specific video. But Adobe is incorporating more AI video editing tools into the mix than something like OpenAI's Sora. For instance, Adobe says you'll be able to use camera controls, like angle, motion, and zoom, to "fine-tune" videos. In one of the demonstrated prompts, Adobe tells the AI to produce a video with "dramatic dolly zoom camera effect," while the sidebar shows multiple camera controls, including shot size, camera angle, and motion. You could, in theory, generate a video, click the "Handheld" motion option to add a shaky-cam look to the product, and control the intensity of that shake via a slider that appears in this menu. The company also shows off examples of how this technology can be added to real video content: Adobe says you will be able to extend existing clips in your timeline using AI-generated video, through the Premiere Pro beta. The goal, according to the company, is to fill gaps in your timeline, so if you have a shot that isn't long enough, AI can lengthen it artificially. The model is also reportedly capable of turning images into videos. If you have a picture or a drawing you want to use as reference for an AI-generated video, you can use that in place of a text prompt. You can also use the tool to add 2D and 3D animated effects to your videos. The demo video shows off a 2D motion effect applied to a real video of a person dancing, while another example shows the word "TIGER" made of fur over a field, blowing in the wind. Adobe makes a point to say the video model is trained on works in the public domain, and is designed to be "commercially safe." That is, of course, in stark contrast with other players in the AI space, like OpenAI, Midjourney, and Stability AI, many of whom are being sued for allegedly using copyrighted materials to train their AI models. But any goodwill Adobe may have won from this decision may be cancelled out by the outrage over its policies, which seem to suggest the company can access any work users save to Creative Cloud for the purpose of training non-generative AI programs. Sure, it's great that Firefly doesn't steal from artists and isn't going to get creatives in commercial trouble, but if you need to give up your own creative privacy to use it, is it worth it? These tools will be available in Creative Cloud, Experience Cloud and Adobe Express, as well as via firefly.adobe.com. Adobe has a waitlist to be notified when Firefly Video Model is available in beta, which you can sign up for here.

The bottom line

Here's the thing: The products in Adobe's video look good. If you were watching the video out of context, you might not realize that most -- if any -- of the demo shots presented were, in fact, totally artificially generated. But Adobe cleverly only shows most clips for a second or two at most, which makes it difficult to get a sense of how well the generator really works.
The quality of the subjects is solid and convincing, but without seeing how well the model replicated motion, or how realistic the outputs remain over the course of, say, a minute, it's tough to say how this model will stack up against others. The longest clip I've seen from Adobe is this four- or five-second video of a reindeer. It's pretty darn realistic, and the wide-angle lens with a handheld feel probably helps sell the effect. It's possible Adobe has made some breakthroughs in the quality of AI-generated video. It's also possible these videos will be subject to the same flaws existing generators have, and will fall apart under time and scrutiny. Once Adobe shares longer demo videos, or rolls out the beta, we'll have a better idea.
[18]
Adobe previews its upcoming text-to-video generative AI tools
Adobe has teased some of its upcoming generative AI video tools, including a new feature that can produce video clips from still images. This latest preview builds on the in-development Firefly video model that the software giant demonstrated in April, which is set to power AI video and audio editing features across Adobe's Creative Cloud applications. The new promotional teaser shows footage produced by Firefly's text-to-video capabilities that Adobe announced (but didn't demonstrate) earlier this year. The tool allows users to generate video clips using text descriptions and adjust the results using a variety of "camera controls" that simulate camera angles, motion, and shooting distance. An image-to-video feature was also demonstrated for the Firefly video model that can generate clips using specific reference images. Adobe suggests this could be useful for making additional B-roll footage or to patch gaps in production timelines.
[19]
Adobe to launch generative AI video creation tool later this year
The model will establish Adobe in the growing market for AI-based video generation tools, a space already targeted by OpenAI's Sora, Stability AI's Stable Video Diffusion and other AI video apps from smaller startups. The tool can generate a five-second clip for a single prompt and can interpret both text and image prompts, said Alexandru Costin, vice president of generative AI at Adobe. Users can also specify the required camera angle, panning, motion and zoom. "We've invested in making this model reach the level of quality and prompt understanding that videographers expect. We've invested in making sure we really pay attention to the prompt ... respecting guidance from videographers much better than other (AI video) models," Costin told Reuters in an interview. Adobe said the video model is trained on public domain or licensed content that it has permission to use, and not on any Adobe customer content. "We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters," Costin said. Adobe is also rolling out Generative Extend, a tool that will be available in its Premiere Pro video editing software, which can extend any existing clip by two seconds by generating an appropriate insert to fill gaps in the footage. First previewed in April, the tool has seen "a huge positive reaction from all of our customers", Costin said. (Reporting by Deborah Sophia in Bengaluru; Editing by Vijay Kishore)
[20]
Adobe Shakes Up Video Editing With AI-powered Creation Tools - What's On The Cards? - Adobe (NASDAQ:ADBE)
Firefly has generated over 12 billion images, with new video AI tools set to launch on Firefly.com and Premiere Pro later this year. On Wednesday, Adobe Inc. ADBE unveiled its advancements in generative AI video capabilities powered by the Adobe Firefly Video Model. Notably, Firefly has already generated over 12 billion images globally. This new model, which extends Adobe's Firefly suite that includes Image, Vector, and Design Models, is set to enhance video editing by helping professionals fill timeline gaps and add new elements to footage. The upcoming features, available later this year, will include Text to Video and Image to Video capabilities on Firefly.Adobe.com and within Premiere Pro. Text to Video will allow users to generate video from text prompts and adjust elements like camera angle and zoom, while Image to Video will animate still shots into live-action clips. Ashley Still, senior vice president, Creative Product Group at Adobe, said, "Building upon our foundational Firefly models for imaging, design and vector creation, our Firefly foundation video model is designed to help the professional video community unlock new possibilities, streamline workflows and support their creative ideation." In August, Adobe launched Adobe Journey Optimizer (AJO) B2B Edition, using generative AI to boost customer engagement and drive growth for B2B companies. The company is set to release third-quarter FY24 results on September 12. Investors can gain exposure to the stock via REX FANG & Innovation Equity Premium Income ETF FEPI and IShares Expanded Tech-Software Sector ETF IGV. Price Action: ADBE shares are down 0.41% at $572.13 at the last check Wednesday.
[21]
Adobe to launch generative AI video creation tool later this year
Adobe will unveil a new generative AI-powered video creation and editing tool in a limited release later this year, the software maker said on Wednesday, as it looks to beef up its suite of applications catering to creative professionals. Dubbed Adobe Firefly Video Model, the artificial intelligence tool will be released in beta and will join the Photoshop maker's existing line of Firefly image-generating applications that allow users to produce still images, designs and vector graphics. The model will establish Adobe in the growing market for AI-based video generation tools, a space already targeted by OpenAI's Sora, Stability AI's Stable Video Diffusion and other AI video apps from smaller startups. The tool can generate a five-second clip for a single prompt and can interpret both text and image prompts, said Alexandru Costin, vice president of generative AI at Adobe. Users can also specify the required camera angle, panning, motion and zoom. First previewed in April, the tool has seen "a huge positive reaction from all of our customers", Costin said. (Reporting by Deborah Sophia in Bengaluru; Editing by Vijay Kishore)
[22]
Adobe introduces AI-powered video editing enhancements By Investing.com
SAN JOSE, Calif. - Adobe (NASDAQ:ADBE) announced the upcoming release of new AI-driven video editing tools within its Creative Cloud suite. The company revealed that later this year, users will gain access to new Text to Video and Image to Video capabilities, as well as Generative Extend in Premiere Pro, all powered by the Adobe Firefly Video Model. This model is part of Adobe's suite of generative AI models, which include tools for imaging, design, and vector creation, and has been utilized to generate over 12 billion images worldwide. The new features aim to streamline the video editing process, offering professionals the ability to generate video content from text prompts and still images. Adobe's Text to Video tool will allow editors to control various camera settings, such as angle, motion, and zoom, to fine-tune videos and create B-Roll to complete their timelines. Similarly, the Image to Video capability will enable the transformation of still shots or illustrations into live-action clips. Adobe's senior vice president of the Creative Product Group, Ashley Still, emphasized that the Firefly Video Model is designed to support the professional video community by unlocking new creative possibilities and enhancing workflow efficiency. The model offers fine-grained controls for producing animations and effects, as well as the ability to pair generated video with professional footage for storytelling. The company assures that the Firefly Video Model is commercially safe, as it is trained only on content with proper usage permissions, never incorporating Adobe customer content. Adobe invites interested parties to join a waitlist for beta access to the Firefly Video Model. This announcement is based on a press release statement from Adobe. The company continues to innovate in the digital experience space, with more information available on its website. In other recent news, Adobe Inc. reported a record second-quarter revenue of $5.31 billion, an 11% year-over-year increase, primarily driven by the Acrobat AI Assistant and the Firefly platform. JPMorgan (NYSE:JPM) reaffirmed their Overweight rating and $580.00 price target on Adobe shares, citing a positive outlook for the second half of the year. The firm highlighted pricing, GenAI traction, and product vision as key drivers for Adobe's expected performance. Adobe's Creative Cloud Net New ARR is forecasted to grow year-over-year in the third and fourth quarters, marking a shift from the previous three quarters of decline. Mizuho Securities and TD Cowen both maintained their positive ratings for Adobe, highlighting the company's strategic position to capitalize on the ongoing digital transformation trend. Significant executive changes include the resignation of Adobe's Senior Vice President and Chief Accounting Officer, Mark Garfield, and the appointment of Adobe executive Scott Belsky to the Board of Directors of Atlassian (NASDAQ:TEAM) Corporation. In other recent developments, top executives from tech companies including Adobe, Google (NASDAQ:GOOGL), Microsoft (NASDAQ:MSFT), and Meta Platforms (NASDAQ:META) are scheduled to appear before the U.S. Senate Intelligence Committee to discuss threats to election security. This testimony is part of ongoing efforts to safeguard U.S. elections from both domestic and foreign threats.
As Adobe (NASDAQ: ADBE) continues to push the boundaries of creative technology with its AI-driven video editing tools, the company's financial health and market performance provide a broader context for its innovative capacity. Adobe's impressive gross profit margin of 88.24% in the last twelve months as of Q2 2024, as reported by InvestingPro, underscores the company's efficiency in generating revenue from its sales, a key factor in its ability to invest in new technologies like the Firefly Video Model. Investors are keeping a keen eye on Adobe's valuation metrics, with the company trading at a high earnings multiple, reflected in a P/E ratio of 51.26. This suggests that the market has high expectations for future earnings growth, despite the company trading at a high P/E ratio relative to near-term earnings growth with a PEG ratio of 8.21. Adobe's strong return over the last three months, with a 24.16% price total return, indicates investor confidence in the company's direction and growth prospects. For those considering an investment in Adobe, there are 16 additional InvestingPro Tips available, offering deeper insights into the company's financial health and market position. These tips, along with Adobe's latest data such as its market cap of 254.72 billion USD and revenue growth of 10.85% in the last twelve months as of Q2 2024, can be found on the InvestingPro platform at https://www.investing.com/pro/ADBE.
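For readers parsing the valuation figures above: assuming the conventional definition of the PEG ratio as the P/E ratio divided by the expected annual earnings-growth rate (in percent), the quoted numbers imply the growth rate being priced in.

```latex
\text{PEG} = \frac{P/E}{g_{\%}}
\quad\Rightarrow\quad
g_{\%} = \frac{P/E}{\text{PEG}} = \frac{51.26}{8.21} \approx 6.2\%
```

In other words, a P/E of 51.26 paired with a PEG of 8.21 corresponds to roughly 6.2% expected annual earnings growth under that standard definition, which is why the article characterizes the multiple as high relative to near-term growth.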
Adobe announces the addition of AI-generated video capabilities to its Firefly platform, positioning itself as a competitor to OpenAI's Sora. The new feature is set to revolutionize video creation for both professionals and casual users.
Adobe has announced a significant expansion of its Firefly AI platform, introducing AI-generated video capabilities. This move positions Adobe as a direct competitor to OpenAI's Sora in the rapidly evolving field of artificial intelligence-driven content creation [1].
The new Firefly video model allows users to generate videos from text prompts or still images. Users can create videos up to five seconds long in various aspect ratios, including square, vertical, and horizontal formats [2]. The AI can also extend existing videos, potentially revolutionizing how creators work with video content.
One of the standout features is the ability to generate videos that seamlessly loop, making them ideal for use on social media platforms [3]. This functionality caters to the growing demand for engaging, short-form video content across various digital platforms.
Adobe plans to integrate the new video generation capabilities into its existing suite of creative tools. This integration will allow users to enhance their AI-generated videos with Adobe's powerful editing software, providing a comprehensive solution for video content creation [4].
Adobe emphasizes its commitment to ethical AI development. The company states that Firefly is trained on Adobe Stock images, openly licensed content, and public domain material where copyright has expired. This approach aims to address concerns about copyright infringement and ensure that the generated content is safe for commercial use [5].
While an exact release date has not been announced, Adobe has confirmed that the video generation feature will be available to Firefly users later this year. The company is currently refining the technology and gathering feedback from beta testers to ensure a robust and user-friendly experience upon launch [5].
The introduction of AI-generated video to Firefly is expected to have a significant impact on the creative industry. It has the potential to democratize video creation, allowing individuals and businesses with limited resources to produce high-quality video content. However, it also raises questions about the future role of human videographers and editors in the content creation process [1].
Adobe's entry into AI-generated video puts it in direct competition with OpenAI's Sora and other emerging players in this space. As the technology continues to evolve, we can expect to see increased competition and innovation in AI-driven video creation tools, potentially leading to more advanced and accessible options for creators worldwide [2].