8 Sources
[1]
Adobe's AI Videos Get Audio: Is It Better Than Google's Veo 3?
New Adobe Firefly video updates are here to bring your AI-generated videos to the next level and to potentially keep the company competitive in the crowded AI market. You'll be able to add sound effects to your Firefly AI videos, thanks to a beta release of a new AI tool rolling out now, the company announced Thursday. Adobe is the second major tech company to release such capabilities, following Google's Veo 3 launch in May.

There are also a couple of other changes coming to Firefly video, including new outside AI models you can use through Firefly and new prompting features and presets. A new composition reference tool lets you upload content, and Firefly will use your references to mimic a video's motion or an image's layout without needing to describe every detail in your prompt. Keyframe cropping will help you resize content, and there are new style presets, including anime, vector art, and black and white.

It wasn't until recently that AI-generated videos could also have sound. Adobe first teased this audio feature as a research concept in development at last year's Adobe Max. After a warm reception from the live audience, the company continued work on the tool, which led to this week's beta release. But Adobe's AI audio abilities are not the same as Veo 3's, which made a splash at this spring's I/O conference. There are some key differences between the programs that will affect the content you generate. Here's what you need to know about Firefly's new audible AI videos and how they stack up against Google's.

To create AI videos with sound in Firefly, start in your normal Firefly prompting window. Once you have a video you like, hover over it and click the icon in the upper-right corner that says Generate sound effects. This will open a new browser tab where you can generate AI audio.

One of the biggest differences between Firefly and Veo 3 is the type of AI audio you're able to create. The audio you can generate through Firefly is the kind that could be created by a foley artist, like sound effects and impact noises. That doesn't include dialogue, though. Each prompt generates four variations of audio clips, usually 8 seconds long each. You can record yourself making the sound effects and upload that as part of your prompt, a unique option. For example, if you wanted to give your AI monster a menacing roar, you could record yourself growling, and Firefly will give you four variations of AI-ified audio that match your general cadence. But you can't record yourself saying, "You will never escape my evil lair!" and have those words generated by Firefly. That's different from Veo 3, which can generate audio on its own or from a script you provide in your prompt. Adobe has a new, separate AI avatar tool in beta for dialogue creation.

Another difference is that in Firefly, you have to manually synchronize your AI audio to your video clips. People familiar with creating in Premiere Pro will recognize the similar setup, where you can drag and drop audio clips to wherever you want them to play in the timeline. But for folks who don't need or want that kind of manual, hands-on control, Veo 3's automatic synchronization will take a lot of the work out of creating AI videos. Adobe's AI video generator might not have the same level of audio capabilities as Veo 3, but it does have other things going for it.
Firefly lets you access 15 different AI models from popular AI companies like Runway, Luma, Pika and, yes, even Google's Veo 3. Adobe is adding a couple of new outside models to the Firefly app's roster for you to choose from, including Moonvalley's "clean AI" model Marey, Pika 2.2 and Luma AI's Ray 2. This is important because Adobe's AI policy says it doesn't train its AI models on customer content, and it requires all its partner models to agree to this "do not train" commitment. So if you're worried about maintaining some level of data privacy while using AI tools, generating videos in Adobe comes with a stronger policy than many other AI programs.

There's also a price difference between the two. Adobe Firefly plans begin at $10 per month, with more expensive plans offering additional generation credits. You might have some Firefly credits already, depending on which Adobe plan you're currently subscribed to. Google recently added access to a fast version of Veo 3 to its $20-per-month Google AI Pro plan -- a chance for significant savings, since the regular Veo 3 is paywalled behind the $250 Google AI Ultra plan.
[2]
Adobe Firefly can now generate AI sound effects for videos - and I'm seriously impressed
Just a year and a half ago, the latest and greatest of Adobe's Firefly generative AI offerings involved producing high-quality images from text with customization options, such as reference images. Since then, Adobe has pivoted into text-to-video generation and is now adding a slew of features to make it even more competitive.

On Thursday, Adobe released a series of upgrades to its video capabilities that give users more control over the final generation, more options to create the video, and even more modalities to create in. Even though creating realistic AI-generated videos is an impressive feat that shows how far AI generation has come, one crucial aspect of video generation has been missing: sound. Adobe's new release seeks to give creative professionals the ability to use AI to create audio, too.

The new Generate Sound Effects (beta) allows users to create custom sounds by inserting a text description of what they'd like generated. If users want even more control over what is generated, they can also use their voice to demonstrate the cadence, timing, and intensity they'd like the generated sound to follow. For example, if you want to generate the sound of a lion roar but want it to match when the subject of your video is opening and closing its mouth, you can watch the video, record a clip of you making the noise to match the character's movement, and then accompany it with a text prompt that describes the sound you'd like created. You'll then be given multiple options to choose from and can pick the one that best matches the vibe you were going for.

While other video-generating models like Veo 3 can generate video with audio from text, what really stood out about this feature is the amount of control users have when inputting their own audio. Before launch, I had the opportunity to watch a live demo of the feature in action. It was truly impressive to see how well the generated audio matched the input audio's flow, while also incorporating the text prompt to create a sound that actually sounded like the intended output -- no shade to the lovely demoer who did his best to sound like a lion roaring into the mic.

Another feature launching in beta is Text to Avatar, which, as the name implies, allows users to turn scripts into avatar-led videos, or videos that look like a live person reading the script. When picking an avatar, you can browse through the library of avatars, pick a custom background and accents, and then Firefly creates the final output. Adobe says some potential use cases for this feature include creating engaging video lessons with a virtual presenter, transforming text content into video articles for social media, or giving any materials a "human touch" -- oh, the irony.

Adobe also unveiled some practical, simple features that will improve users' video-generating experience. For example, users will now be able to use Composition Reference for Video to upload a reference video and then apply that composition to the new generation. This is a huge win for creators who rely on generative video, because no matter how good you get at writing prompts, descriptions can often capture only a portion of the visual you are imagining. Now, you can spend less time explaining and still have the model understand your goal. In the live demo I watched, the final output closely resembled the reference.

A new Style Presets option also allows users to customize their videos more easily by applying a visual style with a tap of a preset. These styles include claymation, anime, line art, vector art, black and white, and more. The new Enhance Prompt feature within the Generate Video module on the Firefly web app helps users get the result they want by adding language to the original prompt so that Firefly can better understand intent. Adobe also added a Keyframe Cropping feature, which allows users to upload their first and last frames, specify how the image will be cropped, and add a scene description; Firefly will then generate a video that fits, according to the release.

Lastly, Adobe made improvements to its Firefly Video Model that boost motion fidelity. This means the generated video will move more smoothly and naturally, better mimicking real-life physics. This is especially important when generating videos of animals, humans, nature, and more.

Adobe has also been progressively adding more models to its video generator, giving users the opportunity to try different styles from the market in one place. Now, Adobe is adding Topaz's Image and Video Upscalers and Moonvalley's Marey to Firefly Boards. It is also adding Luma AI's Ray 2 and Pika 2.2 to Generate Video.
[3]
Adobe's new AI tool turns silly noises into realistic audio effects
Adobe is launching new generative AI filmmaking tools that provide fun ways to create sound effects and control generated video outputs. Alongside the familiar text prompts that typically allow you to describe what Adobe's Firefly AI models should make or edit, users can now use onomatopoeia-like voice recordings to generate custom sounds, and use reference footage to guide the movements in Firefly-generated videos.

The Generate Sound Effects tool that's launching in beta on the Firefly app can be used with recorded and generated footage, and provides greater control over audio generation than Google's Veo 3 video tool. The interface resembles a video editing timeline and allows users to match the effects they create in time with uploaded footage. For example, users can play a video of a horse walking along a road and simultaneously record "clip clop" noises in time with its hoof steps, alongside a text description that says "hooves on concrete." The tool will then generate four sound effect options to choose from. This builds on the Project Super Sonic experiment that Adobe showed off at its Max event in October. It doesn't work for speech, but it does support the creation of impact sounds like twigs snapping, footsteps, zipper effects, and more, as well as atmospheric noises like nature sounds and city ambience.

New advanced controls are also coming to the Firefly Text-to-Video generator. Composition Reference allows users to upload a video alongside their text prompt to mirror the composition of that footage in the generated video, which should make it easier to achieve specific results compared with repeatedly inputting text descriptions alone. Keyframe cropping will let users crop and upload images of the first and last frames that Firefly can use to generate video between, and new style presets provide a selection of visual styles that users can quickly select, including anime, vector art, claymation, and more.

These style presets are only available to use with Adobe's own Firefly video AI model. The results leave something to be desired if the live demo I saw was any indication -- the "claymation" option just looked like early 2000s 3D animation. But Adobe is continuing to add support for rival AI models within its own tools, and Adobe's Generative AI lead Alexandru Costin told The Verge that similar controls and presets may be available to use with third-party AI models in the future. That suggests Adobe is vying to keep its place at the top of the creative software food chain as AI tools grow in popularity, even if it lags behind the likes of OpenAI and Google in the generative models themselves.
[4]
Adobe Firefly can now generate sound effects for videos - and I'm seriously impressed
[5]
Adobe Firefly can now generate sound effects from your audio cues
Since rolling out the redesign of its Firefly app in April, Adobe has been releasing major updates for the generative AI hub at a near monthly clip. Today, the company is introducing a handful of new features to assist those who use Firefly's video capabilities.

To start, Adobe is making it easier to add sound effects to AI-generated clips. Right now, the majority of video models create footage without any accompanying audio. Adobe is addressing this with a nifty little feature that allows users to first describe the sound effect they want to generate and then record themselves making it. The second part isn't so Adobe's model can mimic the sound. Rather, it's so the system can get a better idea of the intensity and timing the user wants from the effect. In the demo Adobe showed me, one of the company's employees used the feature to add the sound of a zipper being unzipped. They made a "zzzztttt" sound, which Adobe's model faithfully used to reproduce the effect at the intended volume. The translation was less convincing when the employee used the tool to add the sound of footsteps on concrete, though if you're using the feature for ideation as Adobe intended, that may not matter. When adding sound effects, there's a timeline editor along the bottom of the interface to make it easy to time the audio properly.

The other new features Adobe is adding today are called Composition Reference, Keyframe Cropping and Video Presets. The first of those allows you to upload a video or image you captured to guide the generation process. In combination with Video Presets, you can define the style of the final output. Some of the options Adobe is offering at launch allow you to create clips with anime, black and white or vector art styles. Lastly, with Keyframe Cropping you can upload the first and final frame of a video and select an aspect ratio. Firefly will then generate a video that stays within your desired format.

In June, Adobe added support for third-party AI models in Firefly, and this month it's doing the same. Most notable is the inclusion of Google's Veo 3, which Google premiered at its I/O conference in May. At the moment, Veo 3 is one of the only AI models that can generate video with sound. As with all the other partner models Adobe offers in Firefly, Google has agreed not to use data from Adobe users for training future models. Every image and video people create through Firefly is digitally signed with the model that was used to create it. That is one of the safeguards Adobe includes so that Firefly customers don't accidentally ship an asset that infringes on copyrighted material.

According to Zeke Koch, vice president of product management for Adobe Firefly, users can expect the fast pace of updates to continue. "We're relentlessly shipping stuff almost as quickly as we can," he said. Koch adds that Adobe will continue to integrate more third-party models, as long as their providers agree to the company's data privacy terms.
[6]
Adobe Firefly is about to make its biggest leap in AI video yet with a new model and Veo 3 integration
A new Generate Sound Effects beta app makes it easy to add sound effects to your videos.

Adobe Firefly has given its AI video generation capabilities a timely upgrade, bumping its video model to version 1.9, which brings more realism and precision in storytelling. It's available in the Firefly Web App right now. Adobe cites one of the strengths of its new Firefly Video Model as "generating dynamic landscapes from natural vistas to urban environments. The model also demonstrates remarkable capability with animal motion and behavior, atmospheric elements like weather patterns and particle effects, and mastering both 2D and 3D animation." One example video shows a cinematic drone shot gliding between the trees of a snowy forest at golden hour.

Adobe has also partnered with other generative video models, so you can now select Veo 3, Luma, Runway, and Topaz all from within the Firefly Web app.

As part of the new release, there's also a new beta version of Generate Sound Effects, for creating custom, high-quality audio from text prompts or voice cues. It involves you making voice sounds that are close to the sort of sound effect you want in your video, and the AI then works out what sort of sound you really want to add. So, in a beach scene, if you start making "Kaaw! Kaaw!" noises, the AI works out that you want seagull sounds. I've had a go at using the voice cues method, and it certainly made everybody in the office think I had gone mad!

Using Firefly, you can create AI-generated video from either a text prompt or a reference image, but you can now also upload a reference video, and Firefly will generate a new video that transfers the original composition to your generation. The new video model offers a new level of precision control when you're directing video content. There are also several style presets available, allowing you to apply a distinct visual style with a single click. Presets available include claymation, anime, line art, and 2D. Finally, there's keyframe cropping: you can upload your first and last frames, select how your image will be cropped, and describe the scene, and Firefly will generate a video that fits the format.
[7]
Adobe's Firefly Video Model Will Now Let You Better Control the Output
Videos generated using Firefly models will also offer Style Presets.

Adobe announced new upgrades for its Firefly video model on Thursday and introduced new third-party artificial intelligence (AI) models that will be available on its platform. The California-based software giant stated that it is improving the motion generation of its Firefly video model to make it more natural and smoother. Additionally, the company is adding advanced video controls to let users generate more consistent video outputs. Further, Adobe also introduced four new third-party AI models that are being added to Firefly Boards. In a blog post, the software giant detailed the new features and tools Adobe Firefly users will soon receive. These features will only be accessible to paid subscribers, with some of them being exclusive to the web app for now.

Adobe's Firefly video model already produces videos with realistic, physics-based motion. Now, the company is enhancing its motion generation capabilities to deliver smoother, more natural transitions. These improvements apply to both 2D and 3D content, enhancing motion fidelity not just for characters but also for elements like floating bubbles, rustling leaves, and drifting clouds.

The recently released Firefly app is also getting support for new third-party AI models. Adobe is introducing Topaz Labs' Image and Video Upscalers and Moonvalley's Marey, which will be added to Firefly Boards soon. On the other hand, Luma AI's Ray 2 and Pika 2.2 models, which are already available in Boards, will soon support video generation (currently, they can only be used to generate images).

As for the new video controls, Adobe has added extra tools to make prompting less exasperating and reduce the need to make inline edits. The first allows users to upload a video as a reference, and Firefly will follow its original composition in the generated output. Another new inclusion is the Style Presets tool. Users generating AI videos can now choose a style such as claymation, anime, line art, or 2D along with their prompt, and Firefly will adhere to the style instruction in the final output. Keyframe cropping is also now possible at the prompting stage: users can upload the first and last frames of a video, and Firefly will generate a video that matches the format and aspect ratio.

Apart from this, Adobe is also introducing a new tool, dubbed Generate Sound Effects, in beta. The tool allows users to create custom audio using a voice or text prompt and layer it on an AI-generated video. When using their voice, users can also dictate the timing and intensity of the sound, as Firefly will generate custom audio matching the energy and rhythm of the voice. Finally, the company is introducing a Text to Avatar feature that converts scripts into avatar-led videos. Users will be able to select their preferred avatar from Adobe's pre-listed library, customise the background, and even select the accent of the generated speech.
[8]
Adobe AI Tools Launches Firefly for Voice-Generated Sound Effects
Adobe AI Tools Introduce Firefly's Voice-to-Sound Feature to Generate Effects Without Audio Recording

Adobe is unveiling a new set of Firefly features for film producers and content creators. The company's AI tools are evolving rapidly, and among the most renowned is Adobe Firefly, with its latest addition, the Generate Sound Effects tool. This AI feature can generate high-quality sound effects from voice recordings, helping creators translate raw vocal cues, combined with text prompts, into polished, immersive audio for their projects.
Adobe has launched new AI-powered features for its Firefly platform, including custom sound effect generation and improved video creation tools, positioning itself as a strong competitor in the AI-generated content market.
Adobe has introduced a notable addition to its Firefly platform: AI-generated sound effects for videos. The tool, currently in beta, allows users to create custom audio by providing text descriptions and voice recordings as input [1][2]. The Generate Sound Effects feature gives creators more control over audio generation than competitors like Google's Veo 3, though unlike Veo 3 it does not generate dialogue [3].
Users can now add sound effects to their AI-generated videos by following these steps:
1. Generate or open a video in the Firefly app, then hover over it and click the Generate sound effects icon in the upper-right corner, which opens the audio tool in a new tab.
2. Describe the desired sound in a text prompt and, optionally, record yourself performing it to convey the timing and intensity you want.
3. Review the four generated variations, each roughly 8 seconds long, and pick the best match.
4. Drag the chosen clip onto the timeline editor to synchronize it with the footage.
This innovative approach enables creators to match sound effects precisely with video content, such as synchronizing footsteps or animal noises with on-screen actions [2][4].
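The sources stress that the voice recording is used only for timing and intensity, not as audio to be copied. As a purely illustrative sketch of that idea -- this is not Adobe's implementation, and the file name and the choice of the open-source librosa library are assumptions made for the example -- the timing and loudness profile of a recorded cue could be extracted like this:

```python
# Illustrative only: pull the timing and intensity profile out of a recorded
# voice cue, the kind of information the Firefly feature is described as
# using to place and scale a generated sound effect. Not Adobe's code.
import numpy as np
import librosa

# "voice_cue.wav" is a hypothetical recording of the user vocalizing the effect.
y, sr = librosa.load("voice_cue.wav", sr=None, mono=True)

# When should each sound event land? Onset detection gives timestamps in seconds.
onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")

# How hard should each event hit? Short-time RMS energy serves as a loudness proxy.
rms = librosa.feature.rms(y=y)[0]
frame_times = librosa.times_like(rms, sr=sr)

for t in onset_times:
    # Relative intensity (0-1) of the cue at the frame nearest each onset.
    level = rms[np.argmin(np.abs(frame_times - t))] / (rms.max() + 1e-9)
    print(f"cue event at {t:.2f}s, relative intensity {level:.2f}")
```

A generation system could then align sound-effect onsets to those timestamps and scale their gain to the measured intensity, which matches how the sources describe the voice cue guiding, rather than becoming, the final audio.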
Alongside the audio generation feature, Adobe has rolled out several improvements to Firefly's video capabilities:
- Composition Reference, which uses an uploaded video or image to guide the layout and motion of a generation
- Keyframe Cropping, which generates a clip between uploaded first and last frames at a chosen aspect ratio
- Style Presets such as claymation, anime, line art, vector art, and black and white
- Enhance Prompt, which expands the original prompt so Firefly better understands intent
- Improved motion fidelity in the Firefly Video Model for smoother, more natural movement
- Text to Avatar (beta), which turns scripts into avatar-led videos
These updates aim to provide creators with more control and flexibility in generating AI videos.
Adobe is positioning Firefly as a hub for AI-generated content by incorporating models from other companies: Google's Veo 3, Runway, Luma AI's Ray 2, and Pika 2.2 are available in the Firefly app's Generate Video tool, while Topaz Labs' Image and Video Upscalers and Moonvalley's Marey are coming to Firefly Boards.
This integration strategy allows users to access various AI styles and capabilities within a single platform, while adhering to Adobe's strict data privacy policies [1][3].
The introduction of these features signifies Adobe's commitment to maintaining its position as a leader in creative software. By combining AI-generated content with traditional editing tools, Adobe is catering to both professional and amateur creators [3][5].
However, some concerns remain about the quality of certain AI-generated outputs. For instance, the "claymation" style preset was described as resembling early 2000s 3D animation rather than true claymation [3].
Adobe Firefly plans start at $10 per month, with additional generation credits available in higher-tier plans. Some users may already have access to Firefly credits through existing Adobe subscriptions [1].
As AI-generated content becomes increasingly prevalent in the creative industry, Adobe's latest updates to Firefly demonstrate the company's commitment to innovation and user empowerment. By providing tools that blend AI capabilities with human creativity, Adobe is shaping the future of digital content creation.
Summarized by Navi