Curated by THEOUTPOST
On Mon, 14 Oct, 4:04 PM UTC
29 Sources
[1]
Adobe's New AI Video Creator Tackles Copyright Concerns | PYMNTS.com
Adobe is rolling out a new artificial intelligence (AI)-powered video creation tool designed to help businesses generate custom content while sidestepping potential legal issues surrounding copyright infringement. As companies adopt AI in their content strategies, Adobe's latest offering stands out for its use of licensed content, a feature intended to give users peace of mind about intellectual property (IP) concerns. This move could further accelerate AI's integration into corporate media production, opening up new revenue opportunities for Adobe and empowering companies of all sizes to leverage the efficiency of AI-generated video. "This tool enables faster content creation and experimentation, all while ensuring that what is being produced is safe for commercial use," Robert Petrocelli, chief product and technology officer at the video company Vimeo, told PYMNTS. The potential of Adobe's Firefly AI video creator lies in its functionality and its promise of legal safety. Businesses, especially those operating in highly regulated industries, have long been wary of using third-party content due to fears of copyright infringement. This is where Adobe's tool makes a difference. It taps into a library of licensed content, giving users access to various images, video clips, and music preapproved for commercial use. "It all comes down to copyright," Dmytro Tymoshenko, CEO of Eightify, an AI-powered YouTube video summarization app, told PYMNTS. "With these AI video tools, companies can create high-quality videos without worrying about licensing. Therefore, it removes one of the major obstacles from their production process." Traditional video production can be labor-intensive, often requiring weeks or even months to go from concept to finished product. AI, on the other hand, can drastically reduce that timeline. "AI is about speed and automation," Tymoshenko said. "It allows businesses to create visually appealing videos without wasting human effort and time." This could be particularly attractive for smaller businesses or startups that may not have the resources to hire large video production teams but still need to produce high-quality content to compete in their industries. Petrocelli said that AI-driven tools like Adobe's also offer a layer of security for content creators. "Our team at Vimeo, for example, leverages ethical AI solutions to help our business partners and our roster of creators develop content that aligns with their respective brands while balancing efficiency and authenticity," he added. "This approach allows our partners to leverage the benefits of AI-generated video development while knowing their creative content is protected and safe." While Adobe's AI video creator clearly benefits businesses, its impact on traditional video professionals is still being debated. Generative AI has long been a topic of contention in creative industries, with some professionals viewing it as a threat to their livelihoods. Video production is no exception. "Generative AI in video production continues to be a taboo subject for many who are cautious about integrating it into their work," Petrocelli said. There's a concern that as AI becomes more sophisticated, it could replace many of the technical tasks that human video editors and animators currently perform. However, Tymoshenko offered a more balanced perspective.
While he acknowledged that AI could take over some of the more technical aspects of video production, such as editing and animation, he emphasized that creativity remains a critical component of the process. "There's much more to video production than just technical work," he said. "Much of it has to do with conceptualization and creativity, which AI has no deep understanding of yet." One of the most exciting aspects of AI video creation is its potential to democratize video production. Historically, high-quality video content has been expensive and challenging to produce, making it the domain of large companies with significant marketing budgets. "This new technology could democratize video marketing, allowing smaller businesses to produce quality content at a much lower cost," Petrocelli explained. For startups and small businesses, in particular, the ability to produce professional-grade videos without breaking the bank could be a game-changer. The potential for AI to transform content production is clear, yet the balance between automation and creativity remains crucial. "AI might handle the technical work," Tymoshenko said, "but the creativity, the ideas -- that's still where people come in."
[2]
Adobe AI Video Tools and Others Aim at Streamlining Corporate Media Content Creation | PYMNTS.com
Artificial intelligence (AI) video tools, including Adobe's latest release, are transforming corporate media by enabling rapid, low-cost custom content creation. The Firefly Video Model, part of Adobe's Creative Cloud suite, leverages licensed footage and AI to enable rapid video production while navigating potential legal and ethical pitfalls. It joins a growing field of AI-powered video tools, competing with recent offerings from companies like Runway, Synthesia and DeepBrain AI. "Tools like Firefly Video Model are going to shake up the video production world big time," Christine M. Haas, CEO at Christine Haas Media, told PYMNTS. "This AI will automate much of the grunt work -- editing, color correction, simple animations. But here's the thing -- AI is a tool, not a replacement." Adobe's launch of a public beta of Firefly, an AI tool that generates videos from text prompts, expands the company's suite of AI-powered creative tools. The model supports over 100 languages and is integrated into several Adobe applications, positioning it as a competitor to other AI video generation offerings from companies like OpenAI and Meta. According to Michelle Berryman, executive creative director at Hero Digital, an Adobe solutions partner, this new technology offers real efficiency gains. "When you think of a marketer sifting through b-roll or interstitials, it requires hours of searching for the perfect stock video from a library with a poor search experience," she told PYMNTS. "But with AI, you can describe what you want to create, and then it's generated. It's so much more efficient." This streamlined process could reduce the time and resources required for video content creation. "It also allows the creative person to bring his or her vision to life much more accurately than using a library. Instead of finding assets that sort of fit their vision, this gives them a lot more control of the narrative and output," Berryman said. The tool's impact extends beyond efficiency. Automating time-consuming tasks allows creative professionals to focus on higher-value aspects of content creation. "Like a design system, it doesn't negate the need to be creative or think big, but it does eliminate the need to take your best talent and have them do small, uninteresting things like animate a button," Berryman said. AI video tools are set to reshape marketing budgets and strategies across businesses. Berryman suggested it will "democratize video creation skills positively, which will help marketers reduce the cost of traditional campaigns." She said AI can automate time-consuming initial stages of creative development, allowing teams to "bring concepts to a higher level of fidelity faster, enabling them to refine or pivot ideas when needed." Not every business will use AI the same way. "Small businesses can now create professional-looking videos without breaking the bank," Haas said. Medium-sized businesses? "They'll probably use a hybrid approach -- AI for quick, day-to-day content and professionals for high-stakes projects. Big enterprises will integrate AI tools into their existing workflows to boost efficiency, but don't expect them to ditch their video teams entirely." Another advantage of this tool is its integration into Adobe's existing products. "It's a suite of tools that creatives are already using and trust. It provides copyright protection and creates assets in a walled garden, which allows marketers to feel good about using the software," Berryman said. 
Adobe's AI-powered video creation is not alone in this space. Runway, a startup, has gained attention for its text-to-video capabilities. Synthesia offers AI-powered video creation with virtual presenters, while DeepBrain AI specializes in creating digital humans for video content. Each of these competitors brings unique strengths to the market, but Adobe's established user base and creative suite give it a strong position. According to Haas, these tools' potential lies in cost-cutting and reshaping business strategies. "The smart move isn't just to use AI to cut costs. Imagine a company giving away a simplified version of Firefly for free. They hook users with this awesome free tool and then upsell them on advanced features, training, or complementary services. It's the classic 'Give away the razor, sell the blades' strategy, but for the AI video age." This approach could lead to new business models in the creative software industry. As AI-powered tools become more prevalent, the differentiator may shift from the tools themselves to how companies use them to create value for their customers. "The companies that figure out how to leverage AI tools like Firefly not just for cost-cutting, but as part of a larger strategy to attract and retain customers -- those are the ones who will win big in this new landscape," Haas said. "That's how you stand out when everyone else is using AI and is making the same chess moves as everyone else."
[3]
Here's How Adobe's New AI Video Tool Could Upend the Ad Business
Like other generative AI models, Firefly simply takes in text-based descriptions of what the user wants it to make, and then activates complex algorithms that "dream up" a response based on vast amounts of training data -- in this case video and still imagery. At the top of Adobe's post, a short video clip shows a small robot holding a red heart up against glinting sunlight. It's cute, and if you know even a little about how computer animations are made, it looks like it would've taken a human a couple of hours to produce -- the sunlight effect seems complicated, the plants in the foreground move convincingly. But the clip is of course made by Firefly AI, and under the clip, Adobe has shared the AI prompt used to craft it. "Cinematic realistic detailed shot of a cute robot holding up a red glowing heart," it begins, then it describes more detail including "the lighting is gorgeous and sun-kissed, with dappled lighting on the robot's face and a strong sunny backlight," and adds technical info like "Slow-motion, gentle motion. Camera is static and locked-off." Adobe promises the AI's training database was all properly licensed content, including its own stock image archive and "public domain content," but no Adobe customer material -- so that means videos made using the tool don't accidentally violate someone else's intellectual property rights (your own, as the user, will be protected too). Even in the simple demos shown on the company's blog post announcing the tool, it's clearly very impressive.
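The anatomy of that prompt (subject, then lighting, then motion, then camera directives) is easy to reproduce programmatically. Below is a minimal, hypothetical Python sketch for assembling such a structured prompt; the field names and the build_prompt helper are illustrative only and are not part of any Adobe API -- the output is just a text string you could paste into the Firefly web app.

```python
# Hypothetical helper for assembling a structured text-to-video prompt.
# The fields mirror the prompt anatomy described above (subject, lighting,
# motion, camera); they are illustrative, not an Adobe API.

def build_prompt(subject: str, lighting: str, motion: str, camera: str) -> str:
    """Join the prompt components into a single descriptive string."""
    return (
        f"Cinematic realistic detailed shot of {subject}. "
        f"The lighting is {lighting}. "
        f"{motion}. Camera is {camera}."
    )

if __name__ == "__main__":
    prompt = build_prompt(
        subject="a cute robot holding up a red glowing heart",
        lighting="gorgeous and sun-kissed, with dappled lighting on the "
                 "robot's face and a strong sunny backlight",
        motion="Slow-motion, gentle motion",
        camera="static and locked-off",
    )
    print(prompt)  # paste the result into the prompt box of a video generator
```

Keeping subject, lighting, motion, and camera as separate fields makes it easier to iterate on one dimension at a time, which is essentially how Adobe's own example prompt is organized.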
[4]
Adobe invites you to 'embrace the tech' with Firefly's new video generator | TechCrunch
Adobe launched video generation capabilities for its Firefly AI platform ahead of its Adobe MAX event on Monday. Starting today, users can test out Firefly's video generator for the first time on Adobe's website, or try out its new AI-powered video feature, Generative Extend, in the Premiere Pro beta app. On the Firefly website, users can try out a text-to-video model or an image-to-video model, both producing up to five seconds of AI-generated video. (The web beta is free to use, but likely has rate limits.) Adobe says it trained Firefly to create both animated content and photo-realistic media, depending on the specifications of a prompt. Firefly is also capable of producing videos containing legible text, in theory at least, something AI image generators have historically struggled with. The Firefly video web app includes settings to toggle camera pans, the intensity of the camera's movement, angle, and shot size. In the Premiere Pro beta app, users can try out Firefly's Generative Extend feature to extend video clips by up to two seconds. The feature is designed to generate an extra beat in a scene, continuing camera motion and the subject's movements. The background audio will also be extended -- the public's first taste of the AI audio model Adobe has been quietly working on. The background audio extender will not recreate voices or music, however, to avoid copyright lawsuits from record labels. In demos shared with TechCrunch ahead of the launch, Firefly's Generative Extend feature produced more impressive videos than its text-to-video model, and seemed more practical. The text-to-video and image-to-video models don't quite have the same polish or wow factor as Adobe's competitors in AI video, such as Runway's Gen-3 Alpha or OpenAI's Sora (though admittedly, the latter has yet to ship). Adobe says it put more focus on AI editing features than generating AI videos, likely to please its user base. Adobe's AI features have to strike a delicate balance with its creative audience. On one hand, it's trying to lead in a crowded space of AI startups and tech companies demoing impressive AI models. On the other, lots of creatives aren't happy that AI features may soon replace the work they've done with their mouse, keyboard, and stylus for decades. That's why Adobe's first Firefly video feature, Generative Extend, uses AI to solve an existing problem for video editors - your clip isn't long enough - instead of generating new video from scratch. "Our audience is the most pixel perfect audience on Earth," said Adobe's VP of generative AI, Alexandru Costin, in an interview with TechCrunch. "They want AI to help them extend the assets they have, create variations of them, or edit them, versus generating new assets. So for us, it's very important to do generative editing first, and then generative creation." Production-grade video models that make editing easier: that's the recipe Adobe found early success with for Firefly's image model in Photoshop. Adobe executives previously said Photoshop's Generative Fill feature is one of the most used new features of the last decade, largely because it complements and speeds up existing workflows. The company hopes it can replicate that success with video. Adobe is trying to be mindful of creatives, reportedly paying photographers and artists $3 for every minute of video they submit to train its Firefly AI model. That said, many creatives are still wary of using AI tools, or fear that they will make them obsolete.
(Adobe also announced AI tools for advertisers to automatically generate content on Monday.) Costin tells these concerned creatives that generative AI tools will create more demand for their work, not less: "If you think about the needs of companies wanting to create individualized and hyper personalized content for any user interacting with them, it's infinite demand." Adobe's AI lead says people should consider how other technological revolutions have benefited creatives, comparing the onset of AI tools to digital publishing and digital photography. He notes how these breakthroughs were originally seen as a threat, and says if creatives reject AI, they're going to have a difficult time. "Take advantage of generative capabilities to uplevel, upskill, and become a creative professional that can create 100 times more content using these tools," said Costin. "The need of content is there, now you can do it without sacrificing your life. Embrace the tech. This is the new digital literacy." Firefly will also automatically insert "AI-generated" watermarks in the metadata of videos created this way. Meta uses identification tools on Instagram and Facebook to label media carrying these metadata markers as AI-generated. The idea is that platforms or individuals can use AI identification tools like this, as long as content contains the appropriate metadata watermarks, to determine what is and isn't authentic. However, Adobe's videos will not, by default, carry visible labels clarifying they are AI-generated in a way that's easily read by humans. Adobe specifically designed Firefly to generate "commercially safe" media. The company says it did not train Firefly on images and videos including drugs, nudity, violence, political figures, or copyrighted materials. In theory, this should mean that Firefly's video generator will not create "unsafe" videos. Now that the internet has free access to Firefly's video model, we'll see if that's true.
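Those metadata watermarks are Content Credentials: machine-readable provenance data embedded in the file rather than a visible overlay. The Python sketch below is a crude, hypothetical heuristic, not real verification -- it simply scans a file's raw bytes for a C2PA-style marker to illustrate the difference between an embedded label and one a human can see. Proper verification requires dedicated Content Credentials tooling that parses and cryptographically validates the embedded manifest.

```python
# Crude illustration only: scan a file's raw bytes for a C2PA-style marker.
# This does NOT verify Content Credentials; it merely shows that the
# "AI-generated" label lives in metadata, not in the visible frames.
from pathlib import Path

MARKER = b"c2pa"  # byte pattern commonly present in embedded C2PA manifests

def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Return True if the marker appears anywhere in the file's bytes."""
    tail = b""
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            # Prepend the previous chunk's tail so a marker that straddles
            # a chunk boundary is still detected.
            if MARKER in tail + chunk:
                return True
            tail = chunk[-len(MARKER):]
    return False

if __name__ == "__main__":
    print(has_c2pa_marker("firefly_clip.mp4"))  # hypothetical file name
```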
[5]
Adobe launches its AI video generator and it looks spectacular - Softonic
Adobe claims that their model has a different approach compared to other video generators on the market: it is trained exclusively with licensed content, which allows it to avoid the ethical and copyright issues that other AIs have had to deal with. The company states that its model is "the first publicly available video model designed to be commercially safe." Although the general release does not yet have a specific date, Adobe has opened a beta testing period in which only users on the waiting list will be able to access the Firefly Video Model. The development of this tool began in April 2023 and is based on the techniques that Adobe has perfected with Firefly, its image synthesis model, which is already integrated into Photoshop. The main target audience for Adobe is entertainment industry professionals, such as video creators and editors, as Firefly Video Model allows for the creation of sequences that naturally integrate with traditional audiovisual content. The company has not yet disclosed which clients are currently using these tools, although according to Reuters, major brands like Gatorade and Mattel are already using Adobe's technology to customize product designs and packaging, such as Gatorade bottles or Barbie doll packaging. Despite the enthusiasm of some brands, AI video generation tools, like Adobe's new one, might encounter some resistance among the creative community. A recent case was the creepy Toys "R" Us ad, generated by AI with the Sora model from OpenAI, which was heavily criticized on social media for the quality of the results and ethical issues.
[6]
Adobe Introduces Video Generation Capabilities for Firefly AI Model | PYMNTS.com
Adobe is rolling out an artificial intelligence (AI) model that can use text prompts to generate video. The company will begin opening the tool to people on its waitlist but has not given a wider release date. Adobe has not announced any customers using its video tools, but its image generation clients include PepsiCo, IBM, Mattel, IPG Health and Deloitte, which use the technology to "optimize workflows and scale content creation so creatives can spend more time exploring their creative visions," the company said in its announcement. The launch comes 10 days after Meta introduced generative AI research that shows how simple text inputs can be used to create custom videos and sounds and edit existing videos. Dubbed Meta Movie Gen, this AI model expands upon the company's earlier generative AI models Make-A-Scene and Llama Image, combining the modalities of those earlier models and allowing finer-grained control. In other AI news, PYMNTS on Monday explored the rise of AI agents, software programs that carry out specific tasks without constant supervision. "Whether handling customer requests, diagnosing medical conditions or predicting market trends, AI agents are versatile workhorses," the report said. "Instead of waiting for humans to input every command, these agents operate autonomously, reacting to real-time data and adjusting their actions accordingly." AI agents come in several varieties, each with a range of capabilities. The most basic are reactive agents, which respond to environmental changes but don't learn from experiences. They are essentially rule-followers, flawlessly executing instructions, but not anticipating what's coming next. "Proactive agents are more sophisticated," PYMNTS wrote. "They can plan and anticipate future actions, making them useful for businesses that need foresight. They don't just react, they strategize. By analyzing patterns, they can make predictions and optimize processes, often in real time."
[7]
Adobe launches AI video generator in race with OpenAI, Meta
Adobe Inc. unveiled artificial intelligence tools that can create and modify videos, joining Big Tech companies and startups in trying to capitalize on demand for the emerging technology. One feature, integrated into Adobe's video-editing software, Premiere, will let users extend video clips using generative AI, the company announced Monday at its annual product conference in Miami. Other tools, available online, will let users produce video from text prompts and existing images. While OpenAI, Meta Platforms Inc. and Alphabet Inc.'s Google have shown off AI video generators, Adobe is the first big software company to have it widely available for customers. Some startups, such as Runway AI, have already released their video-generating products publicly. "What we hear when we talk to our customers is it's all really cool but they can't use it," Ely Greenfield, Adobe chief technology officer for digital media, said of the competitor's technology. Customers want AI features within applications they already use, Greenfield said. Adobe's new video models are "designed for real work flows and integration into tools," he said. Over the past year, Adobe has focused on adding generative AI features to its portfolio of software for creative professionals, including flagship products Photoshop and Illustrator. The company has released tools that use text to produce images and illustrations that have been used billions of times so far. Adobe has sought to differentiate its models as "commercially safe" due to cautious training data and restrictive moderation. For example, there are certain faces Adobe will block if users try to generate videos of them, Greenfield said. Rivals have come under fire for widely scraping the internet to build AI models. Adobe's video models were trained primarily on videos and photos from its vast library of stock media for marketers and creative agencies, Greenfield said. In some cases, the San Jose, California-based company used public domain or licensed data, he added. Adobe has offered to procure videos for about $3 per minute from its network of creative professionals. OpenAI's demonstration of its video-generation model Sora earlier this year ignited investor fears that Adobe could be disrupted by the new technology. The company's shares declined 17% this year through Friday's close. Adobe isn't yet charging for the use of its AI features beyond its standard subscription fees. Each user is allotted a number of credits for AI generations, but the limits aren't being enforced for most plans, Greenfield said. In the future, Adobe may charge more to use its video-focused AI than its similar tool for photos, company executives have said. At its conference, Adobe also announced improvements to other software, such as making it easier to view 3D content in Photoshop. The company is also working on developing AI models that can generate 3D graphics. 2024 Bloomberg L.P. Distributed by Tribune Content Agency, LLC.
[8]
Adobe starts roll-out of AI video tools, challenging OpenAI and Meta
(Reuters) - Adobe on Monday said it has started publicly distributing an AI model that can generate video from text prompts, joining the growing field of companies trying to upend film and television production using generative artificial intelligence. The Firefly Video Model, as the technology is called, will compete with OpenAI's Sora, which was introduced earlier this year, while TikTok owner ByteDance and Meta Platforms have also announced their video tools in recent months. Facing much larger rivals, Adobe has staked its future on building models trained on data that it has rights to use, ensuring the output can be legally used in commercial work. San Jose, California-based Adobe will start opening up the tool to people who have signed up for its waiting list but did not give a general release date. While Adobe has not yet announced any customers using its video tools, it said on Monday that PepsiCo-owned Gatorade will use its image generation model for a site where customers can order custom-made bottles, and Mattel has been using Adobe tools to help design packaging for its Barbie line of dolls. For its video tools, Adobe has aimed at making them practical for everyday use by video creators and editors, with a special focus on making the footage blend in with conventional footage, said Ely Greenfield, Adobe's chief technology officer for digital media. "We really focus on fine-grain control, teaching the model the concepts that video editors and videographers use - things like camera position, camera angle, camera motion," Greenfield told Reuters in an interview. (Reporting by Stephen Nellis in San Francisco; Editing by Vijay Kishore)
[9]
Adobe MAX Launches Safe and Commercially Ready AI Video Creation with Firefly Video Model
At the Adobe MAX 2024 conference in Miami, Adobe took a major step forward in AI-powered creativity by unveiling the Firefly Video Model, its first commercially safe generative video tool. This announcement highlighted the company's ongoing commitment to integrating artificial intelligence into creative workflows while providing innovative tools for creators. Along with AI advancements, Adobe introduced updates to its Creative Cloud suite and announced new educational initiatives. One of the most anticipated announcements at Adobe MAX was the Firefly Video Model, integrated directly into Adobe Premiere Pro. This new AI tool enables users to generate video content from textual prompts, extending video clips, creating animations and even manipulating shot angles, lighting and motion with ease. The model is designed with commercial safety in mind, being trained on Adobe Stock and public domain content to avoid copyright risks. The Firefly Video Model ensures high-quality video outputs for both realistic and imaginative scenarios. Whether creating B-roll, motion graphics or full videos from still images, Firefly empowers creators to take their video projects to new heights. Available in beta on firefly.adobe.com, the model is designed to streamline video workflows for creators and businesses alike. In a pre-launch briefing, Alexandru Costin, Adobe's VP of generative AI, emphasized the importance of creating a commercially safe tool. "We asked our community what they needed from this model, and their top requests were for it to be usable commercially, trained responsibly and designed to minimize harm and bias."
[10]
Adobe starts roll-out of AI video tools, challenging OpenAI and Meta
Facing much larger rivals, Adobe has staked its future on building models trained on data that it has rights to use, ensuring the output can be legally used in commercial work.
Adobe on Monday said it has started publicly distributing an AI model that can generate video from text prompts, joining the growing field of companies trying to upend film and television production using generative artificial intelligence. The Firefly Video Model, as the technology is called, will compete with OpenAI's Sora, which was introduced earlier this year, while TikTok owner ByteDance and Meta Platforms have also announced their video tools in recent months. San Jose, California-based Adobe will start opening up the tool to people who have signed up for its waiting list but did not give a general release date. While Adobe has not yet announced any customers using its video tools, it said on Monday that PepsiCo-owned Gatorade will use its image generation model for a site where customers can order custom-made bottles, and Mattel has been using Adobe tools to help design packaging for its Barbie line of dolls.
Making video tools more practical
For its video tools, Adobe has aimed at making them practical for everyday use by video creators and editors, with a special focus on making the footage blend in with conventional footage, said Ely Greenfield, Adobe's chief technology officer for digital media. "We really focus on fine-grain control, teaching the model the concepts that video editors and videographers use - things like camera position, camera angle, camera motion," Greenfield told Reuters in an interview.
[11]
Adobe starts roll-out of AI video tools, challenging OpenAI and Meta
Oct 14 (Reuters) - Adobe (ADBE.O) on Monday said it has started publicly distributing an AI model that can generate video from text prompts, joining the growing field of companies trying to upend film and television production using generative artificial intelligence. The Firefly Video Model, as the technology is called, will compete with OpenAI's Sora, which was introduced earlier this year, while TikTok owner ByteDance and Meta Platforms have also announced their video tools in recent months. Facing much larger rivals, Adobe has staked its future on building models trained on data that it has rights to use, ensuring the output can be legally used in commercial work. San Jose, California-based Adobe will start opening up the tool to people who have signed up for its waiting list but did not give a general release date. While Adobe has not yet announced any customers using its video tools, it said on Monday that PepsiCo-owned (PEP.O) Gatorade will use its image generation model for a site where customers can order custom-made bottles, and Mattel (MAT.O) has been using Adobe tools to help design packaging for its Barbie line of dolls. For its video tools, Adobe has aimed at making them practical for everyday use by video creators and editors, with a special focus on making the footage blend in with conventional footage, said Ely Greenfield, Adobe's chief technology officer for digital media. "We really focus on fine-grain control, teaching the model the concepts that video editors and videographers use - things like camera position, camera angle, camera motion," Greenfield told Reuters in an interview. Reporting by Stephen Nellis in San Francisco; Editing by Vijay Kishore
[12]
Adobe launches AI video generator in race with OpenAI, Meta
Adobe Inc. unveiled artificial intelligence tools that can create and modify videos, joining Big Tech companies and startups in trying to capitalize on demand for the emerging technology. One feature, integrated into Adobe's video-editing software, Premiere, will let users extend video clips using generative AI, the company announced Monday at its annual product conference in Miami. Other tools, available online, will let users produce video from text prompts and existing images. While OpenAI, Meta Platforms Inc. and Alphabet Inc.'s Google have shown off AI video generators, Adobe is the first big software company to have it widely available for customers. Some startups, such as Runway AI, have already released their video-generating products publicly. "What we hear when we talk to our customers is it's all really cool but they can't use it," Ely Greenfield, Adobe chief technology officer for digital media, said of the competitor's technology. Customers want AI features within applications they already use, Greenfield said. Adobe's new video models are "designed for real work flows and integration into tools," he said. Over the past year, Adobe has focused on adding generative AI features to its portfolio of software for creative professionals, including flagship products Photoshop and Illustrator. The company has released tools that use text to produce images and illustrations that have been used billions of times so far. Adobe has sought to differentiate its models as "commercially safe" due to cautious training data and restrictive moderation. For example, there are certain faces Adobe will block if users try to generate videos of them, Greenfield said. Rivals have come under fire for widely scraping the internet to build AI models. Adobe's video models were trained primarily on videos and photos from its vast library of stock media for marketers and creative agencies, Greenfield said. In some cases the San Jose, California-based company used public domain or licensed data, he added. Adobe has offered to procure videos for about $3 per minute from its network of creative professionals. OpenAI's demonstration of its video-generation model Sora earlier this year ignited investor fears that Adobe could be disrupted by the new technology. The company's shares declined 17% this year through Friday's close. Adobe isn't yet charging for the use of its AI features beyond its standard subscription fees. Each user is allotted a number of credits for AI generations, but the limits aren't being enforced for most plans, Greenfield said. In the future, Adobe may charge more to use its video-focused AI than its similar tool for photos, company executives have said. At its conference, Adobe also announced improvements to other software, such as making it easier to view 3D content in Photoshop. The company also is working on developing AI models that can generate 3D graphics.
[13]
Adobe launches AI video generator in race with OpenAI, Meta
Adobe unveiled artificial intelligence tools that can create and modify videos, joining Big Tech companies and startups in trying to capitalize on demand for the emerging technology. One feature, integrated into Adobe's video-editing software, Premiere, will let users extend video clips using generative AI, the company announced Monday at its annual product conference in Miami. Other tools, available online, will let users produce video from text prompts and existing images. While OpenAI, Meta Platforms and Alphabet's Google have shown off AI video generators, Adobe is the first big software company to have it widely available for customers. Some startups, such as Runway AI, have already released their video-generating products publicly. "What we hear when we talk to our customers is it's all really cool but they can't use it," Ely Greenfield, Adobe chief technology officer for digital media, said of the competitor's technology. Customers want AI features within applications they already use, Greenfield said. Adobe's new video models are "designed for real work flows and integration into tools," he said. Over the past year, Adobe has focused on adding generative AI features to its portfolio of software for creative professionals, including flagship products Photoshop and Illustrator. The company has released tools that use text to produce images and illustrations that have been used billions of times so far. Adobe has sought to differentiate its models as "commercially safe" due to cautious training data and restrictive moderation. For example, there are certain faces Adobe will block if users try to generate videos of them, Greenfield said. Rivals have come under fire for widely scraping the internet to build AI models. Adobe's video models were trained primarily on videos and photos from its vast library of stock media for marketers and creative agencies, Greenfield said. In some cases the San Jose, California-based company used public domain or licensed data, he added. Adobe has offered to procure videos for about $3 per minute from its network of creative professionals. OpenAI's demonstration of its video-generation model Sora earlier this year ignited investor fears that Adobe could be disrupted by the new technology. The company's shares declined 17% this year through Friday's close. Adobe isn't yet charging for the use of its AI features beyond its standard subscription fees. Each user is allotted a number of credits for AI generations, but the limits aren't being enforced for most plans, Greenfield said. In the future, Adobe may charge more to use its video-focused AI than its similar tool for photos, company executives have said. At its conference, Adobe also announced improvements to other software, such as making it easier to view 3D content in Photoshop. The company also is working on developing AI models that can generate 3D graphics.
[14]
Adobe Firefly: New Text To Video AI Model Unveiled
Adobe has rolled out Firefly in beta, a new text-to-video AI tool designed to generate video content that is both commercially safe and trained on licensed materials. Whether you're a seasoned video editor or just starting to explore digital storytelling, Firefly offers an intuitive, fresh way to create stunning video content. It functions like a creative partner that not only understands your vision but enhances it, all while ensuring your work remains free from copyright concerns. With Firefly, you can easily infuse your videos with atmospheric elements, experiment with diverse styles, and even bring static images to life, all while maintaining a high level of quality and consistency. It sets a new standard in the industry by producing commercially safe videos trained exclusively on licensed content. Designed to integrate seamlessly within Adobe's suite of creative applications, Firefly opens up new possibilities for video creators at all levels. Check out what is possible in the excellent video by Okay Samurai, who shows some impressive results using the beta release. Firefly Video prioritizes commercial safety by exclusively using licensed content for its AI training. By taking this approach, Adobe ensures that Firefly Video stands out as a responsible and trustworthy tool in the AI-driven creative landscape. The Firefly model excels in generating high-quality videos that maintain consistency with user prompts. This reliability enables creators to produce professional-grade content, significantly elevating the overall quality of their projects. This level of consistency and quality makes Firefly an invaluable asset for content creators across various industries. Firefly's integration into Adobe's ecosystem, particularly within the Adobe Premiere Pro beta, provides users with a powerful tool for enhancing their creative projects. The seamless incorporation of Firefly into Adobe's suite enables a more efficient and creative video production process. A standout feature of Firefly is its ability to generate diverse video styles. This versatility is invaluable for creators exploring different artistic directions, and Firefly's range of style options enables them to realize their creative visions across multiple genres and formats. Firefly's image-to-video transformation feature allows users to enhance existing footage or create entirely new video content from static images. This opens up new possibilities for content creators, allowing them to breathe life into static visuals and create more engaging video content. Firefly is engineered for computational efficiency, allowing rapid video content generation without compromising on quality. This makes it an ideal tool for creators working under tight deadlines. These efficiency gains translate to a more productive and cost-effective video production process for users across various scales of operation. Adobe has made Firefly accessible to a broad spectrum of users through its availability in the Adobe Premiere Pro beta. By making these powerful tools widely available, Adobe provides widespread access to advanced video creation techniques, fostering innovation across the creative industry.
Adobe Firefly Video's focus on commercial safety, high-quality output, and diverse creative capabilities provides a comprehensive solution for modern video content creation. By integrating seamlessly into Adobe's ecosystem, Firefly enables users to explore new creative horizons efficiently and effortlessly, marking a new era in digital video production. With the model released in its beta development stage, Adobe has opened up a waiting list for those interested in trying out the new text-to-video AI generator.
[15]
Video-making AI tools are headed into general use
Driving the news: Adobe on Monday released a public beta of its Firefly Video Model, allowing its Creative Cloud subscribers to turn ideas and photos into short video clips. Zoom out: As video turns into the next big frontier in AI content creation, the industry is racing into a new competitive brawl over capabilities and speed. Case in point: Startup Truepic, which specializes in assuring the legitimacy of photos and video, announced Tuesday it has uploaded the first video to YouTube that includes end-to-end content credentials verifying its authenticity. Yes, but: Video creation services are expensive for AI companies to run and pose added safety risks. Even with limits in place, Adobe says it knows that it is putting a powerful ability in the hands of lots of creators by enabling a video clip to be generated from a single image. Between the lines: Applying labels to content made using popular AI tools is important, says Truepic's McGregor, but not sufficient, "because bad actors will use open source models that do not have this [technology]." What we're watching: The biggest impact would come if Apple and Google included this technology in the default Android or iOS cameras -- since that's where the majority of photos are taken.
[16]
Adobe Is Bringing AI Video Generation to Premiere Pro, With a Slight Catch
As AI technology improves, it's getting better at generating realistic-looking videos. For a while, these tools were only really available for enthusiasts, but they're slowly becoming something everyone can use. Now, Adobe has brought AI video generation to the public in a big way, though you won't be making whole movies with it.
Adobe Brings AI Video Generation to Premiere Pro and Firefly
As announced in two separate posts on the Adobe blog, you can now use AI to generate videos in a few different ways. The first blog post, titled "Generative Extend in Premiere Pro," reveals that Adobe's professional video editing software can now use AI to extend the length of a clip. It can only add two seconds to your video, so you won't be using this tool to make a new scene. However, it is a good way to ensure your clips finish the way you want them to. You should find this feature in the Premiere Pro Beta branch, which is rolling out to people right now. The second blog post, titled "Generate Video (beta) on Firefly Web App," shows off what you can do on the online version of Adobe's AI generator. With Firefly, you can now generate videos using either a text-based prompt or by uploading an image for it to work with. Whichever option you pick, Adobe will limit you to five seconds of video, so it's best for creating short clips instead of entire videos. If you want to make videos using AI but Adobe's restrictions are a little too strict for you, why not try a different tool? For example, one of our authors used an AI text-to-video tool to make a social media video and documented what happened. If that doesn't work for you, check out the best AI video generators for some more ideas.
[17]
Adobe releases new AI video model in beta
The new video model is the latest addition to Adobe's suite of generative AI tools. Adobe has today (14 October) released its AI video model across its creative suite in limited public beta. The Firefly Video Model is bringing a range of tools to the Adobe Creative Cloud, including a function that extends clips in Premiere Pro, a text to video tool and an image to video tool. The new video model is the latest addition to Adobe's suite of generative AI models, known as Firefly. Firefly was first introduced in March of 2023, and already includes an image model, a vector model and a design model. The new video model was first unveiled last month, and is currently only available through a limited public beta to gather feedback from "a small group of creative professionals", which will be used to "refine and improve" the model, according to Adobe. One of the functions introduced with the new video model is called Generative Extend, which can be used in Premiere Pro to extend clips to cover gaps in footage, smooth out transitions or hold on shots longer for edits. The image- and text-to-video tools will allow users to generate video using text prompts, camera controls and reference images, and will be available in the Firefly web app. Along with the video model, Adobe has also released a set of updates for its other Firefly models, including faster image generation for the Firefly Image 3 model and enhancements to the Vector Model functions in Adobe Illustrator.
Creator concerns
Along with today's announcement, principal product marketing manager for Adobe Pro Video Meagan Keane added a note about Adobe's "commitment to creator-friendly AI innovation". "Our Firefly generative AI models are trained on licensed content, such as Adobe Stock, and public domain content - and are never trained on customer content," said Keane. "In addition, we continue to innovate ways to protect our customers through efforts including Content Credentials with attribution for creators and provenance of content." In June, Adobe faced backlash online from filmmakers and artists after a terms of use update that allowed its machine learning tools to "access" and "view" user content, without a clear explanation of how customer content would be used by the company. This backlash led to Adobe updating its terms of use again in order to make its legal language more understandable. In a blogpost in June, the company tried to clear the air on its stance and reassure users that their content will not be used to train any of its generative AI tools. "We've never trained generative AI on customer content, taken ownership of a customer's work, or allowed access to customer content beyond legal requirements. Nor were we considering any of those practices as part of the recent Terms of Use update. "That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do." Earlier this year, Adobe revealed new generative AI features to improve customer experience management services as well as a new partnership with Microsoft.
[18]
Video editors are already impressed by Adobe's AI video editing tool in Premiere Pro
Adobe has announced a bunch of new tools and features across Creative Cloud apps at Adobe MAX 2024, but one of the biggest pieces of news is that for the first time, users can use Firefly AI to generate video from text prompts. And they can do so directly in Premiere Pro, Adobe's industry-standard video editing software. While Adobe has been teasing AI video generation for some time - it first announced that the feature would be coming to Premiere Pro back in April - the first addition in Premiere Pro beta is still surprising video editors. While we're not getting a full batch of generative AI video features yet, Generative Extend has huge potential for regular video editors, providing the ability to lengthen a clip to fill gaps in a timeline. Like Generative Expand in Photoshop, the name of the new AI tool in Premiere Pro is fairly self-explanatory. It uses AI to extend clips or generate missing shots to fill short spaces on a timeline for the perfectly timed edit. It's a solution to a fairly common problem in video editing: finding that a clip ends half a second too early, which often requires a clip to be slowed down or for a still to be used to fill the space. And it's easy to access in the toolbar, making it as easy to use as adjusting an edit point. "This is going to be so useful," wrote Florida Keys Film Commissioner Chad Newman on X. "I can't tell you the number of times I've wished for just a few more frames. No more having to slow down the last 10 frames to make it fit game". "Getting closer to my wish of just working in premier and not having to jump around to different platforms to generate," the videomaker Allen T wrote. There are limitations. For now, Premiere Pro's AI video extension is limited to 1920x1080 or 1280x720 resolutions in a 16:9 aspect ratio at 12-30fps. Videos must be at least 2 minutes long. And while the tool can extend the room tone and sound effects too, it can't extend music due to copyright. Based on initial experiments, it doesn't seem to work quite as smoothly as shown in Adobe's demo video, but the speed of generation is surprisingly fast. It sometimes struggles with a lot of motion, but it can work well on more static shots. It can also be used to add additional media for J or L cuts, or to correct eye-lines or actions that move mid-shot. And this is only the beta version. It's very promising for the future of Premiere Pro. When Adobe gets round to adding an AI object remover as well, the program is all but guaranteed to keep its spot at the top of our guide to the best video editing software. Adobe has also announced that it's launching standalone access to the Firefly AI video model online through the Firefly site. This implementation will allow text-to-video prompting, including the ability to use a variety of camera controls. And users will also be able to generate video from stills using reference images, similar to what's currently offered by tools such as Runway. Users will be able to use a reference image alongside a text prompt to, for example, create a complementary shot for existing content, such as a close-up, by uploading a single frame, or to create new b-roll from still photography. The Firefly video model will also be able to create atmospheric elements like fire, water, light leaks, and smoke on a black or green background so they can be layered over existing video using blend modes or keying in Premiere Pro or After Effects.
Another potential use is to visualise creative intent for difficult-to-capture or expensive shots, allowing quick turnarounds before going into VFX or back to set for pick-up shots, and removing the need for the dreaded 'insert shot here' placeholder. Much like with the original Adobe Firefly AI Image Model, Adobe is billing Firefly Video as the first publicly available AI video model designed to be safe for commercial use. What it means by that is that the model was trained on licensed content, which it believes will prevent users from facing any potential legal challenges. The standalone Firefly AI video model is currently available only through a limited public beta open to a "small group of creative professionals". Image-to-Video clips are limited to five seconds, and the quality to 720p and 24 frames per second. But, based on initial demonstrations, it looks like it provides good prompt adherence to detailed prompts and impressive clarity. And the addition of a Firefly video model to join the existing Image Model, Vector Model and Design Model looks set to make Firefly the most complete generative AI model for creatives. Access to Adobe Premiere Pro and other Adobe apps is available via a subscription to Creative Cloud.
[19]
You can now generate AI videos right in Premiere Pro | Digital Trends
Firefly can now generate videos from image and text prompts, as well as extend existing clips, Adobe announced on Monday. The new feature is currently rolling out to Premiere Pro subscribers. The video generation feature makes its debut in a number of new tools for Premiere Pro and the Firefly web app. Premiere Pro's Generative Extend, for example, can tack on up to two seconds of added AI footage to either the beginning or ending of a clip, as well as make mid-shot adjustments to the camera position, tracking, and even the shot subjects themselves. The generated video is available in either 720p or 1080p resolution at 24 frames per second (fps). The tool can also extend the clip's sound effects and ambient noise by up to 10 seconds, though it cannot do the same with spoken dialog or musical scores. The Firefly web app is receiving two new AI tools of its own: Text-to-Video and Image-to-Video tools are rolling out in limited public beta, and you can apply for the waitlist here. They do what they sound like they do. Text-to-Video generates short clips in a variety of artistic styles and enables creators to iteratively fine-tune the output video using the web app's camera controls. Image-to-Video, similarly, uses both a text prompt and reference images to get the model closer to what the creator has in mind, in fewer iterations. Both web features take around a minute and a half to generate videos up to five seconds long at 720p resolution and 24 fps. While none of these new video generation features are particularly groundbreaking -- Runway's Gen-3, Meta's Movie Gen, and OpenAI's upcoming Sora all boast nearly identical features and functionalities -- Firefly does offer its users an advantage over other models in that its outputs are "commercially safe." Adobe trained its Firefly model on Adobe Stock images, openly licensed content, and public domain content, meaning that its generated outputs aren't likely to trigger any copyright infringement claims. If only the same could be said for rivals Runway, Meta, and Nvidia.
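For anyone planning around the beta, the limits quoted across the coverage above boil down to a handful of numbers: roughly five-second text-to-video and image-to-video clips at 720p and 24 fps on the web, and Generative Extend adding up to two seconds of video (at up to 1080p) plus up to ten seconds of ambient audio in Premiere Pro. The Python sketch below is a hypothetical pre-flight check built only from those reported figures; the constants and the fits_firefly_beta helper are illustrative and not part of any Adobe API.

```python
# Hypothetical pre-flight check against the beta limits reported above.
# The constants mirror the coverage (5 s / 720p / 24 fps web generations;
# Generative Extend adds up to 2 s of video at up to 1080p, plus up to 10 s
# of ambient audio). They are illustrative, not values from an Adobe API.
from dataclasses import dataclass

MAX_GENERATED_SECONDS = 5        # Firefly web app text/image-to-video
MAX_GENERATED_HEIGHT = 720
MAX_EXTEND_VIDEO_SECONDS = 2     # Generative Extend in Premiere Pro
MAX_EXTEND_HEIGHT = 1080
MAX_EXTEND_AUDIO_SECONDS = 10    # ambient audio only, no music or dialog
FRAME_RATE = 24

@dataclass
class ClipPlan:
    seconds: float
    height: int
    fps: int

def fits_firefly_beta(plan: ClipPlan, extend: bool = False) -> list[str]:
    """Return reasons (if any) the plan exceeds the reported beta limits."""
    max_seconds = MAX_EXTEND_VIDEO_SECONDS if extend else MAX_GENERATED_SECONDS
    max_height = MAX_EXTEND_HEIGHT if extend else MAX_GENERATED_HEIGHT
    problems = []
    if plan.seconds > max_seconds:
        problems.append(f"{plan.seconds}s exceeds the {max_seconds}s cap")
    if plan.height > max_height:
        problems.append(f"{plan.height}p exceeds the {max_height}p output limit")
    if plan.fps != FRAME_RATE:
        problems.append(f"{plan.fps} fps differs from the {FRAME_RATE} fps output")
    return problems

if __name__ == "__main__":
    # An 8-second, 1080p, 30 fps web generation would miss all three limits.
    print(fits_firefly_beta(ClipPlan(seconds=8, height=1080, fps=30)))
```

Treat the numbers as a snapshot of the beta rather than hard product specifications; Adobe may well raise them as the model matures.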
[20]
Adobe's free AI video generator is here - how to try it out
Adobe has launched its Firefly Video Model ahead of Meta, Google, and OpenAI's competitor generators. Even though image generators may already seem like an innovative, advanced application of artificial intelligence (AI), companies have set their sights on the next forefront: AI video generation. Today, Adobe has become the first major company to make its AI video generator available to the public. At its annual creativity conference, Adobe Max, the company unveiled its latest AI features and products across its suite of creative tools, including its generative AI models, known collectively as Adobe Firefly. Now, users can use text or images to create AI-generated videos with the company's new Adobe Firefly Video model. The video model will be available on the Firefly website in public beta, where users can test the model by inputting the text or images they'd like converted to video. Adobe plans to use feedback from the beta to improve the model further. Adobe's Firefly for Video model will also be available in Adobe Premiere through the new Generative Extend feature, also in beta. This feature allows users to expand a clip with AI-generated video and audio that matches the original clip. According to Adobe, the new model stands out because it is commercially safe. Like the other Firefly models, it was trained on Adobe Stock images, openly licensed content, and public domain content. Furthermore, Adobe Stock contributors whose content was used to train the model are eligible for a Firefly Contributor Bonus. Of course, as when using any other AI generator, it is always a good idea to be transparent about your use of AI to create the content in order to build trust with your audience, and to be aware of the potential legal risks that can come with using the technology. If you are interested in trying out the model for yourself, you can join the waitlist. Once you get access, while it is in public beta, all generations will be free. All you have to do is select the model, enter a prompt, and get started. There is also a suggestion box to spur your creativity and camera controls to allow you to customize the generation as much as you'd like through camera angle, motion, and zoom. This launch beats OpenAI's text-to-video model, Sora, which was announced in February and has yet to be made available to the general public. Google's counterpart, Veo, was announced in May but has also not been released publicly, though YouTube announced that it would be incorporated into the application to help creators make content. Meta also announced its version, MovieGen, earlier this month, which is not yet available.
[21]
Adobe Moves Deeper Into Generative AI With Firefly Video Model
Adobe has cautiously dipped into generative AI with its Firefly technology, wary of low-quality results and of treading on intellectual property rights. But at its Max conference this week, it unveiled a Firefly Video Model that supports text to video and image to video. "Video generative AI is hard," Adobe's VP of Generative AI, Alexandru Costin, said in a press briefing. The company is focused on features that creative professionals actually need, rather than buzzy tricks, but with the Firefly Video Model, it takes a few steps forward in offering generative AI for professionals. The model will appear in the standalone Firefly web application. Text to video generates video content based on a text prompt, while image to video converts a still image into animated video content. Adobe trained the model on hundreds of millions of high-quality assets and applies Content Credentials to its creations to let the world know that they're AI-generated. The tool can generate both realistic and imaginary-looking scenes. Just as important for video creators, Firefly gives them control over the virtual camera angle, motion, and zoom. It also lets them choose aspect ratios and frame rates and supports text graphics -- which are often botched by generative AI -- as well as simulated 2D or 3D stop motion. The Firefly Video Model also shows up in Premiere Pro in the form of the Generative Extend feature, which allows video editors to lengthen footage to fit their project using convincingly generated frames based on an existing video clip.

New Features in Photoshop
Updates in Adobe's leading photo-editing software follow the same theme. In Photoshop's case, the highlight is Auto Photo Distraction Removal, a fine-tuned object-removal tool that uses AI to automatically replace distracting items with a convincing background. Also new are updated Generative Fill, Expand, and Background. All of this is based on the Firefly 3 model, as is the ability to generate images from scratch with text prompts. Photoshop also gets a 3D Viewer to integrate 3D models into 2D images. Adobe moved the 3D-editing tools that used to be in Photoshop into the separate suite of Adobe Substance 3D applications.

New Features in Premiere Pro
Adobe has freshened up the interface design of Premiere Pro, the company's industry-standard pro video-editing software, and made it more consistent. It also updated the program's Color Management features. But the most exciting new feature is the aforementioned Firefly-powered Generative Extend. The app also gets a new (also AI-powered) Context-Aware Properties panel, which surfaces the controls you're most likely to want at the current moment in your workflow. Also new is a Frame.io panel for collaboration, showing review and approval info. Adobe claims that performance gets a 3x boost for things like ProRes exports.

New for Adobe Illustrator and InDesign
Intriguing new capabilities for Adobe Illustrator include Objects on Path and an enhanced Image Trace tool that more accurately converts bitmaps to vector images. The Firefly generative AI tool coming to Illustrator is Generative Shape Fill, which creates vector content to fill your shapes. Also new for the leading illustration software is Project Neo, a hybrid web and desktop application that offers a way to create and edit 2D vector images using 3D techniques. For Adobe InDesign, Max 2024 adds Firefly-powered Generative Expand, Text to Image, and integration with Adobe Express.
New for Adobe Lightroom
In terms of generative AI, Lightroom gets an improved Generative Remove, which not only removes objects from photos but also fills in what was removed with appropriate content. For the mobile and web versions of Lightroom, there's a new Quick Actions feature, which lets you work quickly while you're away from your main photo-editing rig. Adobe also announced performance improvements across the Lightroom ecosystem.

New in Adobe Express
Adobe Express is the company's web-based template media-creation tool, mostly for use by social media marketers. At the Max conference, Adobe announced that it now works seamlessly with InDesign and Lightroom -- it already integrates with your Photoshop cloud-stored content. Express gets a new Animate All tool along with sound effects. Bulk Create, Resize, and Expand have been added to its quiver of tools, as have branding controls for colors and more. Cool new text features include Rewrite and Translate.

Frame.io, GenStudio, and New Training Opportunities
For pro video workflows, Frame.io is a standard. At Max, Adobe announced custom metadata for tagging assets and Collections to group your content. Frame.io also gets support for new cameras with its Camera to Cloud capability: Canon, Nikon, and Leica join the larger group of cinema cameras. Relatedly, Frame.io now integrates with Lightroom. GenStudio is a new "generative AI workflow application" with performance marketing capabilities. It lets businesses create, activate, and measure the performance of campaigns in one application, and it integrates with major web services from Google, Meta, Microsoft, and more. GenStudio has been in preview for over a year, but at this year's Max the company announced its general availability. Finally, Adobe announced a new program to offer training to help bridge the digital divide. The company will spend $100 million in scholarships and product access to help 30 million people worldwide acquire skills in AI literacy, digital marketing, and content creation.
[22]
Adobe Firefly Video Is the First Commercially Safe Generative AI Video Model
Adobe unveiled its Firefly Video Model last month, previewing a variety of new generative AI video features. Today, the Firefly Video Model has officially launched in public beta and is the first publicly available generative video model designed to be commercially safe. Since Firefly's first beta in March 2023, users have generated more than 13 billion images, six billion of which were created in the last six months. Firefly is featured in numerous Adobe apps, including Photoshop, Express, and Illustrator, and with the introduction of the Firefly Video Model (beta), it is coming to Premiere Pro, Adobe's venerable video editing software. Firefly's primary generative video technology is text-to-video, the motion equivalent of text-to-image: users describe the video they want, in a specific style, and the model generates it. Further, Firefly offers a variety of camera controls, including angle, motion, and zoom, enabling people to fine-tune the video results. It's also possible to generate new video using reference images, which may be especially helpful when trying to create B-roll that can seamlessly fit into an existing project. This is precisely how Firefly fits into Premiere Pro. With Generative Extend (beta), creators and editors can extend existing clips using Firefly to smooth out transitions or hold on shots longer to get perfectly synced edits -- rather than reshoot something. "The usage of Firefly within our creative applications has seen massive adoption, and it's been inspiring to see how the creative community has used it to push the boundaries of what's possible," says Ely Greenfield, chief technology officer, digital media at Adobe. "We're thrilled to bring creative professionals even more tools for ideation and creation, all designed to be commercially safe." The commercially safe aspect is an important one for Adobe. Firefly has been trained exclusively on licensed and public domain content. Further, content created using Adobe Firefly may include Content Credentials, showing others that it was made using generative AI. To date, Firefly has been used by numerous Adobe enterprise customers to optimize workflows and scale content creation, including PepsiCo/Gatorade, IBM, Mattel, and more. The Adobe Firefly Video Model is available in a limited public beta today through the Firefly web app, including text-to-video and image-to-video capabilities. Generative Extend is available now via the Premiere Pro beta.
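For editors who need to confirm that a delivered file actually carries the Content Credentials described above, one practical route is to inspect its C2PA manifest. The sketch below assumes Adobe's open-source c2patool command-line utility is installed and on the PATH, and that it prints a JSON manifest report for a signed asset; exact flags and output shape vary by version, so treat this as an illustration rather than a verified recipe.

```python
# Minimal sketch: check whether a rendered file carries a C2PA (Content
# Credentials) manifest by shelling out to Adobe's open-source c2patool.
# Assumes c2patool is installed and prints a JSON report; details may vary.
import json
import subprocess
import sys


def read_content_credentials(path: str) -> dict | None:
    """Return the parsed manifest report if present, otherwise None."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None  # no manifest found, or the tool reported an error
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None  # output was not the JSON report we assumed


if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found.")
    else:
        print("Content Credentials present; top-level keys:", list(manifest))
```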
[23]
Adobe introduces new generative AI features for its creative applications - SiliconANGLE
Adobe Inc. introduced a raft of new artificial intelligence features for creative professionals at its Adobe Max conference today. Some of the capabilities are rolling out to the company's video editing applications. The others will mostly become available in Adobe's suite of image editing tools, including Photoshop. Adobe is upgrading its Premiere Pro video editing application with a generative AI model called the Firefly Video Model. It powers a new feature called Generative Extend that can extend a clip by two seconds at the beginning or end. Additionally, it's capable of extending sound effects by up to ten seconds. According to Adobe, making slight edits to a video is another use case that the feature supports. Generative Extend can, for example, remove an unwanted camera movement that interrupts the flow of a clip. The feature generates video content at 720p or 1080p resolution and 24 frames per second. Adobe's Firefly cloud service, which provides access to AI-based design tools, is also receiving new video editing capabilities. One of the additions is a Text-to-Video feature that generates five-second clips based on text prompts. It's joined by a similar capability, Image-to-Video, that allows users to describe the clip they wish to generate using not only a prompt but also a reference image. The first image editing application that Adobe is enhancing as part of today's update is Illustrator. It shares certain features with Photoshop but has a significantly narrower focus: creative professionals use Illustrator to design visual assets such as logos and infographics. The first new feature in Illustrator, Objects on Path, makes it easier to move objects to specific locations within an image. That task can involve a significant amount of work in some cases, such as when a designer wishes to place a large number of objects at exactly the same distance from one another. The new feature reduces the process to a few clicks. Objects on Path is rolling out alongside an enhanced version of Image Trace, an existing Illustrator feature for creating scalable vector (easily resizable) versions of an image. According to Adobe, its engineers have enhanced the visual fidelity of the feature's output. Photoshop, the company's flagship image editing application, is being updated as well. The most significant addition is an AI-powered feature called Distraction Removal. When the feature is active, the underlying AI model automatically identifies objects that the user may wish to remove from an image. Distraction Removal might, for example, highlight overhead wires in a photo of an office building. Users can remove highlighted objects with one click. Before designers can edit a section of an image, they have to select it in the Photoshop interface. The application is receiving a feature that speeds up the task by automatically selecting all the objects in an image, removing the need for designers to manually draw a line around each item they wish to edit. Users can modify selected objects using a number of existing generative AI features in Photoshop. One capability generates visual assets similar to the one highlighted by a designer. The others can embed new objects into an image, modify the background, and perform related tasks. Adobe is upgrading those existing capabilities to a new AI model called the Firefly Image 3 Model.
According to the company, the update will improve both the quality and variety of the content that the features generate. Adobe is also adding a tool called Generative Workspace that allows users to generate a large number of images at once from text prompts. Further down the line, both Photoshop and Illustrator will integrate with another generative AI tool called Project Concept. Adobe says the upcoming tool will enable designers to automatically apply the style of one image to another.
[24]
You Can Now Try Adobe's Video Generation Model in Premiere Pro
Video generation is slowly getting to the point where it can not only produce solid results but can also be accessed by those who are not so technically inclined. Now, Adobe's Firefly generative video component is finally going live in Premiere Pro. Adobe Firefly's generative video technology debuted in September and offers both text-to-video and image-to-video capabilities. You can now experiment with these features through the Premiere Pro beta app or a web beta. While Premiere Pro is limited to a Generative Extend feature, which uses AI to increase the length of an existing video, the web beta allows you to generate up to five seconds of video from text or image prompts, with customization options for camera movement and style. Early impressions suggest that Firefly's text-to-video model may not yet be as polished as those from competitors like Runway and OpenAI. We tried to give it a spin ourselves, but it's currently behind a waitlist, and we have no clue whether we will be allowed in anytime soon. If you want to try it, you'll have to sign up for the waitlist as well. If you try Firefly through the Premiere Pro app beta, available to all Premiere Pro customers, you'll have the option to use Generative Extend. This feature seamlessly lengthens video clips by up to two seconds, even extending background audio without replicating copyrighted music or voices. Text-to-video generation is not available in Premiere Pro at the time of writing. The company says it remains committed to responsible AI development, training Firefly on a curated dataset of commercially safe content and incorporating "AI-generated" watermarks in video metadata. Adobe also aims to address concerns among creatives by highlighting how AI tools can increase productivity and content demand, as many creatives feel that AI is a threat to their work rather than an additional tool in their toolbox. Source: Adobe via TechCrunch
[25]
Game-changing AI comes to Adobe Premiere Pro
AI-generated video in Adobe Firefly for Premiere Pro (Image credit: Adobe) True generative AI video editing has arrived in Premiere Pro. At this year's Adobe Max, the company revealed that the new genAI video tools are now available in beta, including the first generative video model designed to be safe for commercial use. As we reported last month, the latest update adds a whole suite of genAI tools. Generative Extend is the headline feature, letting users increase the length of video and audio clips. But there's much more on offer as Adobe pushes its Firefly AI deeper into the video editing software. With the release of the first set of Firefly-powered video editing workflows, Adobe has confirmed several core focuses. First, dissatisfied with the quality of previous results, Adobe has put significant R&D into the latest version. As well as improving video quality, the company said the model has been trained on Adobe Stock and public domain data - and not user data or media found online. Adobe trusts that the safeguarded training, alongside the indemnification available to enterprise customers, makes this the first generative video model designed to be commercially safe, and therefore more attractive to professionals looking to use AI without fear of copyright infringement. That doesn't mean Adobe has forgotten the core of the experience. In a virtual press conference attended by TechRadar Pro, Alexandru Costin, Vice President, Generative AI and Sensei at Adobe, explained that users "told us editing is more important than pure generation. If you look at the success of Firefly Image, the most use we get inside Photoshop is with Generative Fill because we're serving an actual customer workflow. So, with video, we've decided to focus more on generative editing." So, what does that look like in practice? Generative Extend is the clearest and most useful example coming to the beta. This tool lets users extend existing video and audio clips to match the soundtrack or alter the pacing, even without enough coverage. Image to Video and Text to Video have also arrived in earnest - as one would expect to find in any self-respecting AI video editor. By the looks of things, they work in a similar fashion to that found elsewhere across the Creative Cloud ecosystem - with, like any good movie, a twist. Here, users can effectively become the director, with creative control over shot size, angle, motion, and zoom. Using the new models, the company also showcased examples of text graphics, B-roll content, and overlaying AI-generated atmospheric elements like solar flares onto existing footage. The latest updates build on last month's set of beta tools, including a new context-aware properties panel that gathers the most-needed tools in one place to speed up workflows. There's also new Color Management that, Adobe said, "fundamentally transforms the core color engine." And general performance sees an improvement; ProRes exports, for example, are now three times faster than before. We'll be reviewing the latest version of Premiere Pro soon, and we're keen to see how well the new video tools complement the editing process. In the meantime, users can try out Adobe's new tools in beta by clicking here.
[26]
Adobe Launches AI Video Generator in Race With OpenAI, Meta
Adobe Inc. unveiled artificial intelligence tools that can create and modify videos, joining Big Tech companies and startups in trying to capitalize on demand for the emerging technology. One feature, integrated into Adobe's video-editing software, Premiere, will let users extend video clips using generative AI, the company announced Monday at its annual product conference in Miami. Other tools, available online, will let users produce video from text prompts and existing images.
[27]
Adobe Launches Firefly Video Model and Enhances Image, Vector and Design Models
The Adobe Firefly Video Model (beta) expands Adobe's family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use. Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro. Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises.

Today, at Adobe MAX - the world's largest creativity conference - Adobe (Nasdaq: ADBE) announced the expansion of its Firefly family of creative generative AI models to video, in addition to new breakthroughs in its Image, Vector and Design models and significant momentum in Firefly's adoption by leading brands and enterprises. The Firefly Video Model, now in limited public beta, is the first publicly available video model designed to be commercially safe. Since Firefly's first beta release in March 2023, it has been used to generate more than 13 billion images - an increase of more than 6 billion over the past six months. "The usage of Firefly within our creative applications has seen massive adoption, and it's been inspiring to see how the creative community has used it to push the boundaries of what's possible," said Ely Greenfield, chief technology officer, digital media at Adobe. "We're thrilled to bring creative professionals even more tools for ideation and creation, all designed to be commercially safe."

New Firefly-powered Offerings
The Firefly Video Model (beta) extends Adobe's family of generative AI models, which already includes an Image Model, Vector Model and Design Model, making Firefly the most comprehensive model offering for creative teams. It is available today through a limited public beta to garner initial feedback from a small group of creative professionals, which will be used to continue to refine and improve the model. Within one year of being launched, Firefly was brought into Photoshop, Express, Illustrator, Substance 3D and more, while supporting various workflows in Creative Cloud applications. Firefly also supports text prompts in over 100 languages and enables users around the world to create stunning content that is designed to be safe for commercial use. New Firefly offerings in Creative Cloud available today include:

Generative Extend (beta) for perfectly timed video edits: Powered by the Firefly Video Model and now available in the Premiere Pro beta, Generative Extend allows you to extend clips to cover gaps in footage, smooth out transitions, or hold on shots longer for perfectly timed edits.

Text to Video & Image to Video (beta) for improved user controls and stunning video clips: Powered by the Firefly Video Model and now rolling out in limited public beta in the Firefly web app, creators can access new Text to Video and Image to Video capabilities. With Text to Video, video editors can generate video from text prompts, access a variety of camera controls such as angle, motion and zoom to fine-tune videos, and use reference images for B-roll generation that seamlessly fills gaps in a video timeline. Image to Video capabilities allow creators to bring still shots or illustrations to life by transforming them into stunning live action clips.

Firefly Image 3 enhancements for faster generations: With the latest evolution of the Firefly Image 3 Model, creators of all levels can ideate by generating images in seconds, with results that are up to 4x faster than previous models - available today on the Firefly web app.

Generative Workspace (beta) in Photoshop: Powered by Adobe Firefly, Generative Workspace in Photoshop allows designers to ideate, brainstorm and iterate on concepts simultaneously to achieve their vision while producing stunning visuals faster and more intuitively than ever before.

Firefly Vector Model (beta) advancements in Illustrator: Adobe Illustrator brought Generative Shape Fill (beta), Generative Recolor and Text to Pattern, all powered by the latest Firefly Vector Model (beta), earlier this year, empowering designers to quickly ideate or add detailed vectors in their own unique style to existing artwork and designs. With the latest version of the Firefly Vector Model, creators can now further control the density of elements in a single pattern to change how tightly the elements are packed together.

Adobe also previewed Project Concept, a new capability for multiplayer, collaborative, creative concept development, bringing the ability to remix images in real time so creative professionals can concept live on a single canvas.

Content Creation at Scale with New Enterprise Offerings
Additionally, in Firefly Services, a collection of creative and generative APIs for enterprises, Adobe unveiled new offerings to scale production workflows. This includes Dubbing and Lip Sync, now in beta, which uses generative AI to translate spoken dialog in video content into different languages while maintaining the sound of the original voice with matching lip sync. Additionally, 'Bulk Create, Powered by Firefly Services' is now in beta and will enable creative professionals to edit large volumes of images more efficiently, streamlining tasks such as resizing or background removal. To date, Adobe Firefly has been used by Adobe customers including PepsiCo/Gatorade, IBM, Mattel, IPG Health, Deloitte and others to optimize workflows and scale content creation, so creatives can spend more time exploring their creative visions.

Driving Responsible Innovation with Adobe Firefly
Firefly powers generative AI tools designed for creative needs, use cases, and workflows. Adobe trained its Firefly generative AI models on licensed content, such as Adobe Stock, and public domain content. In addition, Adobe's AI features are developed in accordance with the company's AI Ethics principles of accountability, responsibility, and transparency. Since founding the Content Authenticity Initiative in 2019, Adobe has championed the widespread adoption of Content Credentials as the industry standard for transparency in digital content, now supported by over 3,700 members. Content Credentials, which act like a "nutrition label" for digital content to show how it was created and edited, are applied to select Firefly-powered features across Creative Cloud to indicate the use of generative AI.

Pricing and Availability
The Firefly Video Model is in limited public beta on firefly.adobe.com. Join the waitlist here. During this limited public beta, generations are free. Adobe will share more information about Firefly video generation offers and pricing when the Firefly Video Model moves out of limited public beta.
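The Bulk Create idea described above -- pointing Firefly Services at a large batch of assets for routine edits such as background removal -- can be pictured as a simple submit-and-save loop. The sketch below is hypothetical: the endpoint path, authentication header, and payload format are placeholders rather than Adobe's documented Firefly Services API, and only the general workflow (many images, one automated edit per image) is drawn from the announcement.

```python
# Hypothetical sketch of a Bulk Create-style workflow: send every image in a
# folder for background removal and save the results. The endpoint, header,
# and payload are placeholders, not Adobe's documented Firefly Services API.
from pathlib import Path

import requests

API_BASE = "https://api.example.com/firefly-services/v1"  # placeholder URL
API_KEY = "YOUR_API_KEY"                                  # placeholder credential


def remove_background(image_path: Path, out_dir: Path) -> None:
    """Submit one image for background removal and write the returned cutout."""
    with image_path.open("rb") as f:
        response = requests.post(
            f"{API_BASE}/remove-background",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": (image_path.name, f, "image/png")},
            timeout=120,
        )
    response.raise_for_status()
    (out_dir / image_path.name).write_bytes(response.content)


if __name__ == "__main__":
    source, output = Path("product_shots"), Path("cutouts")
    output.mkdir(exist_ok=True)
    for path in sorted(source.glob("*.png")):
        remove_background(path, output)
        print(f"processed {path.name}")
```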
[28]
Adobe brings generative AI video to Premiere Pro
Adobe is now adding its AI-based video generator, Firefly, to its video editing software Premiere Pro. The Firefly model can be used to extend a video clip or generate video from still images or text instructions. This was first brought to our attention by The Verge. The Generative Extend tool will initially be available in beta and can extend the length of a video clip by up to two seconds at a resolution of 720p or 1080p and a frame rate of 24 frames per second. The tool can also be applied to ambient sounds and sound effects, but not to music or speech. Adobe's Text-to-Video and Image-to-Video tools are available in the Firefly web app as a beta and can be used to generate video from text or still images. Adobe suggests that the tools could be used to create B-roll, i.e., secondary stills or clips. The maximum length of a generated clip is five seconds at 720p and 24 frames per second. All videos created or edited with Adobe Firefly can be embedded with Content Credentials to account for AI usage and ownership when published online.
[29]
Adobe's AI video model is here, and it's already inside Premiere Pro
(Image: some of the camera control options for adjusting the generated output.) Image-to-Video goes a step further by letting users add a reference image alongside a text prompt to provide more control over the results. Adobe suggests this could be used to make B-roll from images and photographs, or to help visualize reshoots by uploading a still from an existing video. A before-and-after example shows this isn't really capable of replacing reshoots directly, however, as several errors like wobbling cables and shifting backgrounds are visible in the results.
Adobe launches Firefly AI video creator, offering businesses a tool for generating custom content while navigating copyright issues. The new technology promises to streamline video production and democratize content creation across various industries.
Adobe has launched a groundbreaking AI-powered video creation tool, Firefly Video Model, designed to revolutionize content production while addressing copyright concerns. This new technology, part of Adobe's Creative Cloud suite, enables businesses to generate custom video content efficiently and cost-effectively 1.
The Firefly Video Model stands out for its use of licensed content, ensuring that generated videos are safe for commercial use. This approach tackles a significant hurdle for businesses, especially those in highly regulated industries, who have been wary of potential copyright infringement issues 1.
The tool offers:
Text-to-Video: generate short clips from text prompts in the Firefly web app, with camera controls for angle, motion, and zoom.
Image-to-Video: guide the output with a reference image alongside a text prompt, for example to create matching B-roll.
Generative Extend: lengthen existing clips by up to two seconds directly in Premiere Pro.
Adobe's AI video creator is set to transform the content production landscape:
Efficiency: The tool drastically reduces video production time, automating tasks like editing, color correction, and simple animations 2.
Cost-effectiveness: Smaller businesses and startups can now produce high-quality video content without substantial budgets 1.
Democratization: The technology makes professional-grade video production accessible to a broader range of businesses and creators 2.
Experts in the field have weighed in on the potential impact of Adobe's new tool:
"This tool enables faster content creation and experimentation, all while ensuring that what is being produced is safe for commercial use," said Robert Petrocelli, chief product and technology officer at Vimeo 1.
Michelle Berryman, executive creative director at Hero Digital, highlighted the efficiency gains: "When you think of a marketer sifting through b-roll or interstitials, it requires hours of searching for the perfect stock video from a library with a poor search experience. But with AI, you can describe what you want to create, and then it's generated" 2.
While the tool promises significant benefits, its introduction has sparked debates within the creative community:
Job displacement concerns: Some professionals view AI as a potential threat to their livelihoods, particularly in technical tasks like editing and animation 1.
Complementary role: Others argue that AI will handle technical aspects, allowing human creators to focus more on conceptualization and creativity 1.
New opportunities: The technology could create new business models and strategies in the creative software industry 2.
As AI-powered video creation tools become more prevalent, the industry may see a shift in how companies create value for their customers. Adobe's approach of integrating AI into existing workflows and focusing on solving real problems for video editors could set a new standard in the field 4.
Alexandru Costin, Adobe's VP of generative AI, encourages creatives to embrace the technology: "Take advantage of generative capabilities to uplevel, upskill, and become a creative professional that can create 100 times more content using these tools" 4.
References
[2]
[4]