10 Sources
[1]
Adobe's AI Videos Get Audio: Is It Better Than Google's Veo 3?
New Adobe Firefly video updates are here to bring your AI-generated videos to the next level and to potentially keep the company competitive in the crowded AI market. You'll be able to add sound effects to your Firefly AI videos, thanks to a beta release of a new AI tool rolling out now, the company announced Thursday. Adobe is the second major tech company to release such capabilities, following Google's Veo 3 launch in May.
There are also a couple of other changes coming to Firefly video, including new outside AI models you can use through Firefly and new prompting features and presets. A new composition reference tool lets you upload content, and Firefly will use your references, mimicking a video's motion or an image's layout, so you don't have to describe every detail in your prompt. Keyframe cropping will help you resize content, and there are new style presets, including anime, vector art, and black and white.
It wasn't until recently that AI-generated videos could also have sound. Adobe first teased this audio feature when it was just a research concept in development at last year's Adobe Max. After a warm reception from the live audience, the company continued work on the tool, which led to this week's beta release. But Adobe's AI audio abilities are not the same as Veo 3's, which made a splash at this spring's I/O conference. There are some key differences between the programs that will affect the content you generate. Here's what you need to know about Firefly's new audible AI videos and how they stack up against Google's. For more, check out how to use Photoshop AI and the new Firefly mobile apps.
To create AI videos with sound in Firefly, start in your normal Firefly prompting window. Once you have a video you like, hover over the video and click the icon in the upper right corner that says Generate sound effects. This will open a new browser tab where you can generate AI audio.
One of the biggest differences between Firefly and Veo 3 is the type of AI audio you're able to create. The AI audio you can generate through Firefly is the kind of audio that could be created by a Foley artist, like sound effects and impact noises. That doesn't include dialogue, though. Each prompt generates four variations of audio clips, each usually 8 seconds long. You can record yourself making the sound effects and upload that as part of your prompt, a unique option. For example, if you wanted to give your AI monster a menacing roar, you could record yourself growling, and Firefly would give you four variations of AI-ified audio that match your general cadence. But you can't record yourself saying, "You will never escape my evil lair!" and have those words generated by Firefly. That's different from Veo 3, which can generate audio on its own or from a script you provide in your prompt. Adobe has a new, separate AI avatar tool in beta for dialogue creation.
Another difference is that in Firefly, you have to manually synchronize your AI audio with your video clips. People familiar with creating in Premiere Pro will recognize the similar setup, where you can drag and drop audio clips to wherever you want them to play in the timeline. But for folks who don't need or want that kind of manual, hands-on control, Veo 3's automatic matching will take a lot of the work out of creating AI videos. Adobe's AI video generator might not have the same level of audio capabilities as Veo 3, but it does have other things going for it.
Firefly lets you access 15 different AI models from popular AI companies like Runway, Luma, Pika and, yes, even Google's Veo 3. Adobe is adding a couple of new outside models to the Firefly app's roster for you to choose from, including Moonvalley's "clean AI" model Marey, Pika 2.2 and Luma AI's Ray 2. This matters because Adobe's AI policy says it doesn't train its AI models on customer content, and it requires all its partner models to agree to this "do not train" commitment. So if you're worried about maintaining some level of data privacy while using AI tools, generating videos in Firefly comes with a stronger policy than many other AI programs offer.
There's also a small price difference between the two. Adobe Firefly plans begin at $10 per month, with more expensive plans offering additional generation credits. You might have some Firefly credits already, depending on which Adobe plan you're currently subscribed to. Google recently added access to a fast version of Veo 3 to its $20 per month Google AI Pro plan -- a chance for significant savings, since the regular Veo 3 is paywalled behind the $250 per month Google AI Ultra plan.
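To make the Generate Sound Effects workflow described in the article above more concrete, here is a small, purely illustrative Python sketch of its inputs and outputs. The class and function names are hypothetical and are not part of any Adobe API; the defaults simply mirror the article's description (an optional voice cue for cadence and intensity, four variations of roughly 8 seconds each, and manual placement on a timeline afterward).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VoiceCue:
    """A rough vocal recording used only for timing and intensity, not for its content."""
    audio_path: str           # e.g. a quick phone recording of you growling or saying "zzzztttt"
    duration_seconds: float

@dataclass
class SoundEffectRequest:
    """Hypothetical model of a Generate Sound Effects prompt (not an Adobe API)."""
    text_prompt: str                       # e.g. "menacing monster roar"
    voice_cue: Optional[VoiceCue] = None   # optional cue that guides cadence and intensity
    num_variations: int = 4                # Firefly returns four options per prompt
    clip_length_seconds: float = 8.0       # clips are typically about 8 seconds long

def describe(request: SoundEffectRequest) -> List[str]:
    """Summarize what a request like this would produce (illustration only)."""
    source = "text prompt only" if request.voice_cue is None else "text prompt + voice cue"
    return [
        f"variation {i + 1}: ~{request.clip_length_seconds:.0f}s effect from {source}: {request.text_prompt!r}"
        for i in range(request.num_variations)
    ]

if __name__ == "__main__":
    req = SoundEffectRequest(
        text_prompt="menacing monster roar",
        voice_cue=VoiceCue(audio_path="my_growl.wav", duration_seconds=3.2),
    )
    for line in describe(req):
        print(line)
    # In Firefly, the chosen clip is then dragged onto a Premiere Pro-style timeline
    # and synchronized with the video manually.
```

The key contrast the articles draw with Veo 3 is that last step: in Firefly, synchronization is manual, while Veo 3 generates matched audio automatically.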
[2]
Adobe Firefly can now generate AI sound effects for videos - and I'm seriously impressed
Just a year and a half ago, the latest and greatest of Adobe's Firefly generative AI offerings involved producing high-quality images from text with customization options, such as reference images. Since then, Adobe has pivoted into text-to-video generation and is now adding a slew of features to make it even more competitive.
On Thursday, Adobe released a series of upgrades to its video capabilities that give users more control over the final generation, more options to create the video, and even more modalities to create in. Even though creating realistic AI-generated videos is an impressive feat that shows how far AI generation has come, one crucial aspect of video generation has been missing: sound. Adobe's new release seeks to give creative professionals the ability to use AI to create audio, too.
The new Generate Sound Effects (beta) allows users to create custom sounds by entering a text description of what they'd like generated. If users want even more control over what is generated, they can also use their voice to demonstrate the cadence, timing, and intensity they'd like the generated sound to follow. For example, if you want to generate the sound of a lion's roar, but want it to match when the subject of your video is opening and closing its mouth, you can watch the video, record a clip of yourself making the noise to match the character's movement, and then accompany it with a text prompt that describes the sound you'd like created. You'll then be given multiple options to choose from and can pick the one that best matches the vibe you're going for.
While other video-generating models like Veo 3 can generate video with audio from text, what really stood out about this feature is the amount of control users have when inputting their own audio. Before launch, I had the opportunity to watch a live demo of the feature in action. It was truly impressive to see how well the generated audio matched the input audio's flow, while also incorporating the text prompt to create a sound that actually sounded like the intended output -- no shade to the lovely demoer who did his best to sound like a lion roaring into the mic.
Another feature launching in beta is Text to Avatar, which, as the name implies, allows users to turn scripts into avatar-led videos, or videos that look like a live person reading the script. When picking an avatar, you can browse through the library of avatars, pick a custom background and accents, and then Firefly creates the final output. Adobe says some potential use cases for this feature include creating engaging video lessons with a virtual presenter, transforming text content into video articles for social media, or giving any materials a "human touch" -- oh, the irony.
Adobe also unveiled some practical, simple features that will improve users' video-generating experience. For example, users will now be able to use Composition Reference for Video to upload a reference video and apply that composition to the new generation. This is a huge win for creators who rely on generative video because no matter how good you get at writing prompts, a description can often capture only a portion of the visual you are imagining.
Now, you can spend less time explaining and still have the model understand your goal. In the live demo I watched, the final output closely resembled the reference.
A new Style Presets option also allows users to customize their videos more easily by applying a visual style with a tap of a preset. These styles include claymation, anime, line art, vector art, black and white, and more. The new Enhance Prompt feature within the Generate Video module on the Firefly web app helps users get the result they want by adding language to the original prompt so that Firefly can better understand their intent.
Adobe also added a Keyframe Cropping feature that lets users upload their first and last frames, specify how the image will be cropped, and add a scene description; Firefly will then generate a video that fits, according to the release. Lastly, Adobe made improvements to its Firefly Video Model that improve motion fidelity. This means the generated video will move more smoothly and naturally, better mimicking real-life physics. This is especially important when generating videos of animals, humans, nature, and more.
Adobe has also been progressively adding more models to its video generator, giving users the opportunity to try different styles from across the market in one place. Now, Adobe is adding Topaz's Image and Video Upscalers and Moonvalley's Marey to Firefly Boards. It is also adding Luma AI's Ray 2 and Pika 2.2 to Generate Video.
[3]
Adobe Firefly can now generate sound effects for videos - and I'm seriously impressed
[4]
Adobe's new AI tool turns silly noises into realistic audio effects
Adobe is launching new generative AI filmmaking tools that provide fun ways to create sound effects and control generated video outputs. Alongside the familiar text prompts that typically allow you to describe what Adobe's Firefly AI models should make or edit, users can now use onomatopoeia-like voice recordings to generate custom sounds, and use reference footage to guide the movements in Firefly-generated videos.
The Generate Sound Effects tool that's launching in beta on the Firefly app can be used with recorded and generated footage, and provides greater control over audio generation than Google's Veo 3 video tool. The interface resembles a video editing timeline and allows users to match the effects they create in time with uploaded footage. For example, users can play a video of a horse walking along a road and simultaneously record "clip clop" noises in time with its hoof steps, alongside a text description that says "hooves on concrete." The tool will then generate four sound effect options to choose from. This builds on the Project Super Sonic experiment that Adobe showed off at its Max event in October. It doesn't work for speech, but does support the creation of impact sounds like twigs snapping, footsteps, zipper effects, and more, as well as atmospheric noises like nature sounds and city ambience.
New advanced controls are also coming to the Firefly Text-to-Video generator. Composition Reference allows users to upload a video alongside their text prompt to mirror the composition of that footage in the generated video, which should make it easier to achieve specific results compared to repeatedly inputting text descriptions alone. Keyframe cropping will let users crop and upload images of the first and last frames that Firefly can use to generate video between, and new style presets provide a selection of visual styles that users can quickly select, including anime, vector art, claymation, and more.
These style presets are only available to use with Adobe's own Firefly video AI model. The results leave something to be desired if the live demo I saw was any indication -- the "claymation" option just looked like early 2000s 3D animation. But Adobe is continuing to add support for rival AI models within its own tools, and Adobe's Generative AI lead Alexandru Costin told The Verge that similar controls and presets may be available to use with third-party AI models in the future. That suggests that Adobe is vying to keep its place at the top of the creative software food chain as AI tools grow in popularity, even if it lags behind the likes of OpenAI and Google in the generative models themselves.
[5]
Adobe Firefly can now generate sound effects from your audio cues
Since rolling out the redesign of its Firefly app in April, Adobe has been releasing major updates for the generative AI hub at a near-monthly clip. Today, the company is introducing a handful of new features to assist those who use Firefly's video capabilities.
To start, Adobe is making it easier to add sound effects to AI-generated clips. Right now, the majority of video models create footage without any accompanying audio. Adobe is addressing this with a nifty little feature that allows users to first describe the sound effect they want to generate and then record themselves making it. The second part isn't so Adobe's model can mimic the sound. Rather, it's so the system can get a better idea of the intensity and timing the user wants from the effect. In the demo Adobe showed me, one of the company's employees used the feature to add the sound of a zipper being unzipped. They made a "zzzztttt" sound, which Adobe's model faithfully used to reproduce the effect at the intended volume. The translation was less convincing when the employee used the tool to add the sound of footsteps on concrete, though if you're using the feature for ideation as Adobe intended, that may not matter. When adding sound effects, there's a timeline editor along the bottom of the interface to make it easy to time the audio properly.
The other new features Adobe is adding today are called Composition Reference, Keyframe Cropping and Video Presets. The first of those allows you to upload a video or image you captured to guide the generation process. In combination with Video Presets, you can define the style of the final output. Some of the options Adobe is offering at launch allow you to create clips with anime, black and white or vector art styles. Lastly, with Keyframe Cropping you can upload the first and final frame of a video and select an aspect ratio. Firefly will then generate a video that stays within your desired format.
In June, Adobe added support for third-party AI models to Firefly, and this month it's doing the same. Most notable is the inclusion of Veo 3, which Google premiered at its I/O developer conference in May. At the moment, Veo 3 is one of the only AI models that can generate video with sound. As with all the other partner models Adobe offers in Firefly, Google has agreed not to use data from Adobe users for training future models. Every image and video people create through Firefly is digitally signed with the model that was used to create it. That is one of the safeguards Adobe includes so that Firefly customers don't accidentally ship an asset that infringes on copyrighted material.
According to Zeke Koch, vice president of product management for Adobe Firefly, users can expect the fast pace of updates to continue. "We're relentlessly shipping stuff almost as quickly as we can," he said. Koch adds that Adobe will continue to integrate more third-party models, as long as their providers agree to the company's data privacy terms.
[6]
Start generating free AI art on iPhone with Adobe Firefly
Adobe Firefly makes generating commercially safe images and video as easy as tapping a couple of buttons. Adobe's Firefly app removes the hassle of finding images and video to illustrate your latest social post, flyer, presentation, article or website, and adds peace of mind over copyright concerns. With a simple text prompt on your iPhone, you can output unique and engaging media in any required format -- for free.
Content creation can be time-consuming, demands a ton of resources, and often requires more technical experience than you actually have. Adobe is making things easier for creators, influencers, and everyday users by offering a suite of AI-powered tools that can generate images, video, and audio. Firefly is a fantastic tool for anyone who wants to push the creative experience to the next level.
One of Adobe Firefly's defining features is its commitment to providing a commercially safe environment for AI-generated content. Whether you're creating marketing materials for a business, building your portfolio, or generating visuals for use on social platforms, Firefly ensures the media is free from legal complications. Firefly addresses copyright and trademark issues by training its model only on Adobe Stock images, openly licensed content, and public domain content where the copyright has expired.
The Firefly mobile app makes creating AI media ultra-convenient. Whether you're on your morning commute, on vacation, or have already shut down your computer for the day, you can power up the app on your iPhone and generate artwork whenever inspiration strikes. Integration with Adobe's Creative Cloud even lets you begin a project on your phone and pick it up later on your Mac. It's easy to go back and forth between the mobile app, web app, and desktop apps such as Adobe Photoshop, Premiere and Lightroom.
So, how do you make the magic happen? Firefly uses a simple text-prompting feature that helps you articulate exactly what you need, removing the steep learning curve often demanded by image- and video-editing tools. Write a detailed description, describe a scene, or just try your luck with a single keyword, and Firefly will generate visuals that meet those requirements. You can continue to refine the prompt, and Firefly will adjust the artwork accordingly, offering you multiple options until you find the right one for your project. Adobe Firefly can also edit a still image or turn it into a video clip using a text prompt. You can adjust the filming angles, motion and style, and even translate the audio into over 20 languages. With these simple commands and time-saving tools, you can use Adobe Firefly to quickly and easily drum up visuals for anything from memes, posters and presentations to b-roll video.
Social media managers will know that finding the right image is only half the battle -- then you need to output it in a variety of aspect ratios and resolutions for upload to various online platforms. But they can now breathe a sigh of relief: Adobe Firefly can adapt content to your requirements on demand. Not only will it take care of cropping and repositioning images for you, Firefly can also fill in any gaps using Generative Expand. This feature intelligently fills in any empty spaces on the canvas to allow for seamless transitions between frames and platforms for perfectly polished visuals. With plans starting at $0/month, Adobe Firefly is accessible to everyone -- from casual creators to professionals looking for premium features.
Firefly Standard costs $9.99/month and includes 2,000 generative credits you can use for video, audio, and images, while Firefly Pro costs $29.99/month and includes 7,000 generative credits that you can also use for storyboards and preview scenes. Creativity is just one text prompt away: download Adobe Firefly and prepare to be amazed!
[7]
Adobe Firefly is about to make its biggest leap in AI video yet with a new model and Veo 3 integration
A new Generate Sound Effects beta app makes it easy to add sound effects to your videos.
Adobe Firefly has given its AI video generation capabilities a timely upgrade. It has upgraded its video model to version 1.9, which brings more realism and precision in storytelling, and it's available in the Firefly Web App right now. Adobe cites one of the strengths of its new Firefly Video Model as "generating dynamic landscapes from natural vistas to urban environments. The model also demonstrates remarkable capability with animal motion and behavior, atmospheric elements like weather patterns and particle effects, and mastering both 2D and 3D animation." One example video shows a cinematic drone shot gliding between the trees of a snowy forest at sunset during golden hour.
Adobe has also partnered with other generative video models, so you can now select Veo 3, Luma, Runway, and Topaz all from within the Firefly Web app.
As part of the new Firefly, there's also a new beta version of Generate Sound Effects, for creating custom, high-quality audio from text prompts or voice cues. It involves you making voice sounds that are close to the sort of sound effect you want in your video, and the AI then works out what sort of sound you really want to add. So, in a beach scene, if you start making "Kaaw! Kaaw!" noises, the AI works out that you want seagull sounds. I've had a go at using the voice cues method, and it certainly made everybody in the office think I had gone mad!
Using Firefly, you can create AI-generated video from either a text prompt or a reference image, but you can now also upload a reference video, and Firefly will generate a new video that transfers the original composition to your generation. The new video model offers a new level of precision control when you're directing video content. There are also several style presets available, allowing you to apply a distinct visual style with a single click. Presets available include claymation, anime, line art, and 2D. Finally, there's keyframe cropping: you can upload your first and last frames, select how your image will be cropped, and describe the scene, and Firefly will generate a video that fits the format.
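Keyframe cropping, as described just above, comes down to fitting a frame to a chosen aspect ratio. As a generic illustration of the underlying arithmetic (ordinary image math, not Adobe's implementation), a centered crop box for a target format can be computed like this:

```python
def center_crop_box(width: int, height: int, target_w: int, target_h: int) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) for a centered crop of a
    width x height frame to the target_w:target_h aspect ratio."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:
        # Frame is too wide for the target format: keep full height, trim the sides.
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return left, 0, left + new_w, height
    # Frame is too tall (or already matches): keep full width, trim top and bottom.
    new_h = round(width / target_ratio)
    top = (height - new_h) // 2
    return 0, top, width, top + new_h

# Example: crop a 1920x1080 first frame to a 9:16 vertical format.
print(center_crop_box(1920, 1080, 9, 16))  # -> (656, 0, 1264, 1080)
```

The same box would be applied to both the first and last frames so the generated video stays within one consistent format.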
[8]
Unleash your creativity on the fly
Bringing powerful AI-assisted video and image generation to mobile, the new Adobe Firefly app grants you the freedom to explore your ideas on the move. A commercially safe creative AI solution, Adobe Firefly is an all-in-one destination for AI-assisted ideation, creation and production. Its content creation tools make it easy for anyone to produce amazing images, video, audio and vector graphics. This means you can use Firefly to create everything from b-roll and video backgrounds to hero images for use in memes, blogs, presentations, posters or anything that could use a little visual pizzazz.
New features include the ability to turn a still image into a video, or to create a clip with a simple text prompt. Shots are easily refined by selecting camera angles, motion and style, while improved video controls give creative pros more frame-by-frame precision to shape composition and pacing. Thanks to Generative Expand and Generative Fill, Firefly can bring your favourite snapshots to life, even if they're not quite perfect. If an unwanted passerby spoils the scene, Generative Remove can instantly replace them with whatever new objects you can dream up.
Once you're happy with your creation, you'll want to share it with the world. Adobe Firefly is designed for a global audience, with the ability to translate clips and make anyone sound like a native speaker, all while maintaining the same voice, tone and cadence across more than 20 languages. Now available for iOS and Android, the Adobe Firefly app ensures creators can get creative anywhere, anytime. The app uses the same technology underlying many of the most powerful features of Photoshop, Premiere Pro and Lightroom. Struck by inspiration when you're out and about? Just enter a prompt into the Firefly app and then iterate with tweaks and adjustments until you've generated an image or video that's exactly what you envisioned. The Adobe Firefly app is available as a standalone experience or as part of Adobe's Creative Cloud. With Creative Cloud syncing, it's easy to jump between your phone, the web app (where you can experiment with more models and media types) and desktop apps like Photoshop.
Firefly Boards is a new space within Adobe Firefly where creative teams can ideate, mood-board and storyboard. Available in public beta on the web, it's a collaborative, intuitive way to brainstorm, offering the ability to explore creative concepts and iterate on ideas before jumping into production. Boards transforms the collaborative process by taking a generative AI-first approach to mood boarding. The addition of video now allows creative professionals to collaboratively explore and iterate across more media types. Once you've fleshed out your ideas, Adobe Firefly can easily bring them to life by generating text, images and now video as well. To help with this, the Adobe Firefly app expands its AI model integrations: Ideogram, Luma AI, Pika and Runway now join OpenAI's image generation, Black Forest Labs' Flux and Google's Imagen and Veo, offering creators even more choice of aesthetic styles.
Whether you work for a major corporation or you're running your own side hustle, you'll want to avoid legal issues -- especially if you'll be using content commercially. That's why it's essential to be sure every image or video has the appropriate legal permission. With AI-generated content, it can be more difficult to identify the sources used in its creation than it is to identify the copyright or trademark of an existing image.
Even so, you need to be absolutely confident that your AI-generated content does not include any protected elements without the necessary releases. That's where Adobe Firefly has you covered. As part of Adobe's initiatives to ensure Firefly is commercially safe, it only trains its AI models on content for which it has permission or rights. This means the initial commercial Firefly model is trained only on Adobe Stock images as well as openly licensed and public domain content. Adobe does not train Firefly on customer content, or on content mined from the web. This prevents Firefly from creating content which infringes copyright or intellectual property rights.
As a founding member of the international Content Authenticity Initiative (CAI), Adobe sets the industry standard for responsible generative AI. The CAI is a community of media and tech companies, NGOs, academics and others working to promote adoption of an open industry standard for content authenticity and provenance. This is in conjunction with the Coalition for Content Provenance and Authenticity (C2PA), which has developed an open technical standard that offers publishers, creators and consumers the ability to understand the origin of different types of media. This includes the option to add a Content Credential that allows a creator to indicate that generative AI was used in a work's creation. Combining the power of Firefly with Adobe's responsible approach to generative AI ensures that you can unleash your creativity with confidence.
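Those Content Credentials can be inspected with open tooling. The C2PA project publishes a command-line utility, c2patool, that reads a file's manifest; the sketch below simply wraps it from Python. It assumes c2patool is installed and on your PATH and that its default output is JSON -- details can vary between versions, so treat this as a starting point rather than a definitive recipe.

```python
import json
import subprocess
from typing import Optional

def read_content_credentials(media_path: str) -> Optional[dict]:
    """Return the C2PA manifest store for a file as a dict, or None if unavailable.

    Relies on the open-source c2patool CLI (assumed installed), which by default
    prints manifest data for the given file as JSON.
    """
    try:
        result = subprocess.run(
            ["c2patool", media_path],
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(result.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        return None

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    manifest = read_content_credentials("firefly_clip.mp4")
    if manifest:
        print("Content Credentials found; the manifest lists the generator and ingredients.")
    else:
        print("No readable Content Credentials found.")
```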
[9]
Adobe's Firefly Video Model Will Now Let You Better Control the Output
Videos generated using Firefly models will also offer Style Presets.
Adobe announced new upgrades for its Firefly video model on Thursday and introduced new third-party artificial intelligence (AI) models that will be available on its platform. The California-based software giant stated that it is now improving the motion generation of its Firefly video model to make it more natural and smoother. Additionally, the company is adding advanced video controls to let users generate more consistent video outputs. Further, Adobe also introduced four new third-party AI models that are being added to Firefly Boards. In a blog post, the software giant detailed the new features and tools Adobe Firefly users will soon receive. These features will only be accessible to paid subscribers, with some of them being exclusive to the web app for now.
Adobe's Firefly video model already produces videos with realistic, physics-based motion. Now, the company is enhancing its motion generation capabilities to deliver smoother, more natural transitions. These improvements apply to both 2D and 3D content, enhancing motion fidelity not just for characters but also for elements like floating bubbles, rustling leaves, and drifting clouds.
The recently released Firefly app is also getting support for new third-party AI models. Adobe is introducing Topaz Labs' Image and Video Upscalers and Moonvalley's Marey, which will be added to Firefly Boards soon. Meanwhile, Luma AI's Ray 2 and Pika 2.2, which are already available in Boards, will soon support video generation (currently, they can only be used to generate images).
Coming to the new video controls, Adobe has added extra tools to make prompting less exasperating and reduce the need to make inline edits. The first tool allows users to upload a video as a reference, and Firefly will follow its original composition in the generated output. Another new inclusion is the Style Preset tool: users generating AI videos can now choose a style such as claymation, anime, line art, or 2D along with their prompt, and Firefly will adhere to the style instruction in the final output. Keyframe cropping is also now possible at the prompting stage. Users can upload the first and last frames of a video, and Firefly will generate a video that matches the format and aspect ratio.
Apart from this, Adobe is also introducing a new tool, dubbed Generate Sound Effects, in beta. The tool allows users to create custom audio using a voice or text prompt and layer it on an AI-generated video. When using their voice, users can also dictate the timing and intensity of the sound, as Firefly will generate custom audio matching the energy and rhythm of the voice. Finally, the company is also introducing a Text to Avatar feature that converts scripts into avatar-led videos. Users will be able to select their preferred avatar from Adobe's pre-listed library, customise the background, and even select the accent of the generated speech.
[10]
Adobe AI Tools Launches Firefly for Voice-Generated Sound Effects
Adobe AI Tools Introduce Firefly's Voice-to-Sound Feature to Generate Effects Without Audio Recording
Adobe is unveiling a new set of Firefly features for film producers and content creators. The company's AI tools are evolving rapidly, and among the most renowned toolkits is Adobe Firefly, whose latest addition is the Generate Sound Effects tool. This AI feature can generate high-quality sound effects from voice recordings. The new tool will help creators translate raw vocal cues, combined with text prompts, into polished, immersive audio for their projects.
Adobe has launched new AI-powered features for its Firefly platform, including the ability to generate sound effects for videos and improved video creation tools, positioning itself as a strong competitor in the AI-generated content market.
Adobe has introduced a groundbreaking feature to its Firefly platform: AI-generated sound effects for videos. This new tool, currently in beta, allows users to create custom audio by combining text descriptions with voice recordings, providing unprecedented control over the generated sound [1][2]. The feature stands out from competitors like Google's Veo 3 by offering more granular control over audio generation, particularly in synchronizing effects with video content [1].
Alongside the sound effects tool, Adobe has rolled out several improvements to its video generation capabilities:
Composition Reference: Users can now upload reference videos to guide the AI in creating new content, making it easier to achieve specific visual outcomes [2][3].
Style Presets: A new feature that allows quick application of visual styles such as anime, vector art, and black and white, among others [2][4].
Keyframe Cropping: This tool enables users to specify the first and last frames of a video, along with desired aspect ratios, guiding the AI in generating fitting content [4][5].
Enhance Prompt: A feature that helps refine user prompts to better communicate intent to the AI [2].
Adobe continues to expand its offerings by integrating various third-party AI models into Firefly. Notable additions include Google's Veo 3, Luma AI's Ray 2, and Pika 2.2 [1][2]. This integration comes with Adobe's commitment to data privacy, ensuring that partner models agree not to use Adobe user data for training future models [5].
The company has also launched a Text-to-Avatar feature in beta, allowing users to create avatar-led videos from scripts. This tool has potential applications in creating video lessons, transforming text content into video articles, and adding a "human touch" to various materials [2].
Additionally, Adobe has made improvements to its Firefly Video Model, enhancing motion fidelity for more natural and smooth movements in generated videos [2].
With these updates, Adobe is positioning Firefly as a comprehensive platform for AI-generated content creation. The company's approach of integrating third-party models while maintaining strict data privacy policies sets it apart in the competitive AI market [1][5]. Adobe's Vice President of Product Management for Firefly, Zeke Koch, has indicated that users can expect continued rapid updates and integrations of more third-party models, subject to agreement with Adobe's data privacy terms [5].
Adobe Firefly plans start at $10 per month, with more expensive tiers offering additional generation credits. Some users may already have Firefly credits depending on their current Adobe subscription [1]. The new features are being rolled out to users, with some still in beta testing phases.
Summarized by Navi