11 Sources
[1]
Adobe Firefly Image 5 brings support for layers, will let creators make custom models | TechCrunch
Adobe said on Tuesday that it is launching the latest iteration of its image generation model, Firefly Image 5. The company is also adding more features to the Firefly website, support for more third-party models, and the ability to generate speech and soundtracks. Notably, the update allows artists to come up with their own image models using their existing art.

The Image 5 model can now work at native resolutions of up to 4 megapixels, a massive increase from the previous-generation model, which could natively generate images at 1 megapixel and then upscale them to 4 megapixels. The new model is also better at rendering humans, the company said. Image 5 also enables layered and prompt-based editing -- the model treats different objects as layers and allows you to edit them using prompts, or use tools like resize and rotate. The company said it makes sure that when you edit these layers, the image's details and integrity are not compromised.

Adobe's Firefly site has supported third-party models from AI labs like OpenAI, Google, Runway, Topaz, and Flux to augment its appeal to its creative customer base, and now the company is taking that a step further by letting users create custom models based on their art style. Currently in a closed beta, this feature lets users drag and drop assets, such as images, illustrations, and sketches, to create a custom image model based on their style.

The company is also adding some new features to its Firefly website, which was redesigned earlier this year. The site now lets you use the prompt box to switch between generating images or videos, choose which AI model you want to work with, change aspect ratios, and more. The site's home page now features your files and recent generation history, and you also get shortcuts to other Adobe apps (these were previously housed in a menu). Adobe has also redesigned the video generation and editing tool to support layers and timeline-based editing. This design change is currently only available in a private beta, and will be rolled out to users eventually.

Firefly is also getting two new audio features: users can now employ AI prompts to generate entire soundtracks and speech -- using models from ElevenLabs -- for videos. There's also a new way to easily come up with prompts: just add keywords and sections by selecting words from a word cloud.

As competitors like Canva add AI to their platforms, Adobe is trying to cater to new-age creators who are increasingly using AI in their workflows. "We're thinking of the target audience for Firefly as what we call creators or next-generation creative professionals. I think there are these emergent creatives that are GenAI-oriented. They love to use GenAI in all their workloads," Alexandru Costin, the company's VP of generative AI, told TechCrunch over a call. He added that with Firefly, the company now has more freedom to add new features and play around with the interface, as it doesn't have to adhere to the muscle memory of creative professionals who might be used to certain workflows in Adobe's existing Creative Cloud tools.
[2]
Adobe's new AI tool lets you fix your photos with simple text commands - try it today
The Prompt to Edit feature lets you edit images using natural language. OpenAI's launch of DALL-E 2 in 2022 ignited an AI text-to-image generator craze, with dozens of tools launching during the past few years. Yet the use cases for these image generators remain fairly narrow, and they lack everyday, realistic applications for the average person. Adobe's latest image model wants to change that.

At its annual Adobe MAX creativity conference, the company unveiled its new Adobe Firefly Image Model 5, which it describes as its most advanced image generation and editing model yet. Beyond creating high-quality images, the model can also help you edit your existing pictures using AI. After watching demos of the editing features, I think the model solves a significant issue for users.

The new model can now generate images in native 4MP resolution, with almost twice as many pixels as 1080p, resulting in finer details. Adobe boasts that the results are high-quality, "photorealistic" images. Adobe says this attention to detail means the model can tackle challenging tasks, such as anatomically accurate portraits, natural movement, and multi-layered compositions.

The real magic of this approach comes to life in the new photo-editing features. Sometimes you'll take a shot that has potential but needs a couple of tweaks to be perfect. Whether the tweaks are simple or complex, they often require you to become familiar with editing software and to click the correct buttons to make adjustments, leading to a more complicated and time-consuming process. With Adobe's new tool, you can just ask the AI to make edits, and the work is done.

As the name implies, the new Prompt to Edit tool, powered by Firefly Image Model 5, lets you use a conversational prompt to have an action performed on your photo. I got to see a live demo of the feature in action before the release and was pretty amazed. In the demo, the person uploaded an image of her dog sitting behind a fence. Then she asked the tool to remove the fence from the picture. Within seconds, the fence was removed from the photo of the dog, with AI filling in the spaces where the item previously stood to produce a realistic-looking image. Google launched a similar feature, called Edit with Ask Photos, with the launch of the Google Pixel 10 earlier this year, and I found that tool just as handy.

Building on this feature is Layered Image Editing, which maintains the image's composition accuracy as you make adjustments to its elements. For example, in the sizzle video, numerous items are displayed on a surface, but the user can then select one, drag and drop it, resize it, and even tweak it with a prompt. This task is typically a very complex process in Photoshop that requires multiple tools, steps, and precision.

The Prompt to Edit feature is now available to Firefly customers, supporting Firefly Image Model 5 and partner models from Black Forest Labs, Google, and OpenAI. Firefly Image Model 5 is available in beta today. Lastly, while shown at Adobe MAX, the Layered Image Editing feature is still in development.
[3]
Adobe Max 2025: all the latest creative tools and AI announcements
Adobe has kicked off its annual Max design conference, where it'll be giving us a first glimpse at the latest updates coming to its Creative Cloud apps and Firefly AI models. The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that's just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.
[4]
Adobe's End-Of-Year Updates Are All AI, and Sometimes Not Even Its Own AI
Adobe MAX kicks off this week and, historically, that has meant a large drop of updates across the company's portfolio of apps. That is technically true this year too, but everything is revolving around AI -- and sometimes, that doesn't even mean Adobe's own technology.

Adobe already opened the door to using AI models from competitors earlier this year, and that continues as now Generative Fill in Photoshop can swap from Adobe's Firefly model to using Google Gemini 2.5 Flash Image or Black Forest Labs FLUX.1 Kontext. Further, Generative Upscale can swap over to using Topaz Labs' AI upscale technology, too. Adobe is also bringing the beta Harmonize feature into full production, a tool that promises to blend and match light, color, and tone across disparate images to make them appear as though they were taken at the same time.

"We're delivering several groundbreaking AI tools and models into creative professionals' go-to apps, so they can harness the tremendous economic and creative opportunities presented by the rising global demand for creative content," Deepa Subramaniam, vice president of product marketing, creative professionals, at Adobe, says. "With AI that gives creative professionals more power, precision, and control -- and time-savings -- Creative Cloud is truly the creative professional's best friend."

While the bulk of updates to Adobe's creative apps appear to be centered around using other companies' AI models, Adobe is also updating Firefly to Image 5 (beta), which it calls its "most advanced image generation and editing model yet." Adobe says it can generate images in native 4-megapixel resolution without upscaling and also "excels" at creating photo-realistic details, such as lighting and fine-detail texture.

Adobe's other first-party updates are all in beta. The company is testing an AI Object Mask in Premiere Pro, which it says can automatically identify and isolate people and objects in video frames so that they can be edited and tracked without manual rotoscoping. The video editor is also getting rectangle, ellipse, and pen masking tools (also in beta), which Adobe says can be used to isolate specific areas in a video frame so that they can be adjusted more directly. Premiere Pro is also getting a beta Vector Mask tool, which is a redesigned option that promises faster tracking. Lightroom is getting a lone beta update in Assisted Culling, which Adobe describes as a customizable tool that helps quickly identify the best images in large photo collections, with the ability to filter for different levels of focus, angles, and sharpness.

The production features of Generative Fill and Generative Upscale with partner models, as well as Harmonize, are all available today. The other beta features launch into the public beta versions of Adobe's apps today, too. Adobe is also giving Creative Cloud Pro and Firefly plan subscribers unlimited image generations with Firefly and partner models (including video generations) through December 1.
[5]
AppleInsider.com
Adobe is expanding its suite of generative AI features across Firefly, Creative Cloud, and enterprise tools, with new updates aimed at faster workflows and integrated content creation. Adobe is sharing details about its latest AI tools and updates at Adobe MAX 2025. The company highlighted new Firefly features, Creative Cloud improvements, and enterprise-focused AI innovations during the conference. Adobe says the updates will make it easier for creators to produce and manage content across its Creative Cloud ecosystem. Many of these features are in either public or private beta, and are now powered by Firefly and partner model integrations.

Firefly for end-to-end production

Firefly now supports full video and audio production workflows. Generate Soundtrack, in public beta, uses Adobe's Firefly Audio Model to create original, licensed instrumental tracks that automatically sync with your video footage. Generate Speech, currently in public beta, transforms text into realistic voiceovers in a variety of languages. It also creates emphasis and controls tempo for lifelike delivery. Users can arrange and cut clips in a multitrack timeline using Firefly's new web-based video editor, which is currently in private beta. The editor has tools for adding titles, music, and voiceovers in the browser. With the help of Firefly Image Model 5, Prompt to Edit enables creators to use natural language to explain how they wish to alter images. This feature allows for fine-grained image adjustments. Firefly Boards adds collaborative ideation tools that turn brainstorming sessions into visual layouts. A new Rotate Object feature converts 2D images into 3D perspectives for faster concept development.

AI updates to Creative Cloud apps

To make editing and post-production more efficient, Adobe is integrating new AI features into Photoshop, Lightroom, and Premiere Pro. According to the company, the updates give professionals more speed, accuracy, and creative control. New partner models, such as Black Forest Labs FLUX.1 Kontext and Google Gemini 2.5 Flash Image, are now supported by Generative Fill in Photoshop. These integrations maintain visual coherence and lighting while enabling more accurate edits through text prompts. Premiere Pro's new AI Object Mask, in public beta, can automatically identify and isolate people or objects in a scene. It speeds up color grading, blurring, and applying effects to moving backgrounds. Lightroom gains Assisted Culling, also in public beta, to help photographers quickly sort large image collections. The feature identifies the sharpest, best-composed shots and filters by angle, focus, and sharpness.

YouTube partnership

YouTube and Adobe have partnered to provide Premiere Pro tools to content producers of short videos. The Premiere mobile app will soon introduce Create for YouTube Shorts, a new content creation area. The integration will give users access to Adobe templates, transitions, and visual effects optimized for YouTube's short-form video format. The feature will be accessible directly from YouTube once it launches.

Enterprise tools and model access

Adobe's GenStudio platform adds new tools for enterprise-scale AI content creation. Firefly Foundry enables businesses to train private AI models using proprietary data for consistent, brand-safe outputs. The company is also expanding access to AI models from partners including Google, OpenAI, Runway, ElevenLabs, and Topaz Labs.
Firefly Custom Models, now in private beta, let users generate visuals in their own style for consistent branding.

Conversational AI experiences

For conversational creation, Adobe is rolling out agentic AI assistants throughout its ecosystem. Using natural language prompts, the assistants help users finish challenging tasks while retaining complete creative control. With Adobe Express's AI Assistant, which is currently in public beta, users can go from idea to completed project in a matter of minutes. The private beta version of Photoshop for the web's AI Assistant automates tedious editing tasks while maintaining human oversight. Project Moonlight, a private beta in Firefly, uses social insights to generate new ideas and manages AI assistants across Adobe apps. Additionally, Adobe previewed early integrations for tools like Adobe Express with third-party platforms, such as ChatGPT.
[6]
Adobe MAX 2025: The Creative Suite gets a major AI boost across products
Here is a rundown of all the new stuff happening in Adobe products. Adobe MAX, often described as the "Comic-Con for creatives," kicked off this week in Los Angeles, bringing with it a flood of new AI updates across Photoshop, Premiere Pro, Illustrator, and Lightroom. This year's theme was clear: precision, speed, and control for creative professionals who are increasingly relying on AI to keep up with the rising demand for content.

Adobe isn't introducing AI for the first time -- Firefly, the company's generative engine, has been at the center of its push toward intelligent tools for over a year now. But what stood out this time was how much deeper AI has been woven into Creative Cloud -- from single-click masking in video to natural blending in image composites, and even a conversational AI assistant inside Photoshop and Express that you can literally "talk" to.

One of the biggest highlights this year is Photoshop's new Generative Fill upgrade, now powered by multiple AI models including Google's Gemini 2.5 Flash Image, Black Forest Labs' FLUX.1 Kontext, and Adobe's own Firefly models. The update gives users more control over prompts, helping preserve lighting, tone, and perspective even in complex edits. Then there's Generative Upscale, which now integrates Topaz Labs' AI -- allowing creators to take low-resolution or cropped assets and upscale them to 4K without losing detail. It's a huge win for editors who work with legacy media or compressed assets. A new tool called Harmonize caught quite a bit of attention too -- it automatically matches light and color when you composite people or objects into new backgrounds. Essentially, it handles the heavy lifting of blending, leaving creators to fine-tune the art.

On the video side, Premiere Pro is getting a much-needed AI makeover. Features like AI Object Mask and Fast Vector Mask can now automatically detect and isolate moving subjects in a frame -- something that would've previously required tedious manual rotoscoping. In short, color grading and effects work just got a lot faster.

Adobe's Firefly engine -- now a suite in its own right -- also got a major upgrade. The new Firefly Image Model 5, available in public beta, can now generate native 4MP images (without the need for post-upscaling) and handle intricate details like lighting, reflections, and anatomy with more realism. It also powers a new "Prompt to Edit" feature -- type what you want to change in an image, and it does the rest.

For professionals or brands looking to maintain a consistent visual language, Adobe is also rolling out Firefly Custom Models. Think of it as training your own AI model, in your style. Just drag and drop reference images, and it learns from them -- privately and securely. This feature is currently in private beta, but it's shaping up to be one of the most powerful tools for creative teams who produce content at scale.

Another interesting addition is Adobe's move into "agentic AI." Inside Photoshop (on the web), there's now an AI Assistant that behaves more like a creative partner. You can ask it to perform tasks ("adjust lighting," "make this look cinematic"), or even get real-time tutorials while you work. The assistant can be toggled off at any time if you'd rather tweak settings manually -- a nod to those who prefer control over automation.

Adobe is also expanding its collaborative side with Firefly Boards -- a shared space where teams can ideate visually.
The new update lets you rotate objects in 3D, bulk-download assets, and even export as PDFs -- all within the same board. And for those managing massive content pipelines, Firefly Creative Production allows batch editing of thousands of images at once -- from background replacements to color grading -- through a no-code interface.

A number of these features -- like Generative Fill, Harmonize, and Generative Upscale -- are available in Photoshop today. Premiere's AI Object Mask and new masking tools are in public beta, as is Lightroom's Assisted Culling, which automatically helps pick your best shots from a batch. Firefly's new Image Model 5 and Prompt to Edit are open to all users, while Custom Models and Creative Production are rolling out in private beta next month. Through December 1, Creative Cloud Pro and Firefly plan subscribers can enjoy unlimited image and video generations across all models.

This year's MAX made one thing clear: Adobe is betting on AI not as a gimmick, but as an extension of the creative process. The tools on display weren't about replacing professionals, but helping them move faster, maintain creative control, and scale their work.
[7]
Adobe expands AI tools across creative platforms at MAX 2025 (ADBE:NASDAQ)
Adobe (NASDAQ:ADBE) on Tuesday announced a broad expansion of artificial intelligence features across its creative applications at Adobe MAX 2025, introducing new AI assistants and models aimed at transforming the creative process for professionals and enterprises. With the new AI assistants launched across its apps, the Photoshop maker aims to enhance its creative applications and increase differentiation for creative professionals and enterprises versus competitors. Enterprise users gain access to streamlined content supply chains via new integrations with major platforms and no-code bulk image editing capabilities. The CEO believes Adobe's stock is undervalued, justifying ongoing share buybacks to enhance shareholder value.
[8]
Adobe brings AI deeper into creativity with Firefly, Photoshop, and more at MAX 2025
At its annual MAX 2025 event in Los Angeles, Adobe pushed the boundaries of creative tech once again -- this time leaning harder into AI. The company announced a sweeping set of updates across Firefly, Creative Cloud, Express, and GenStudio, all built around a single idea: making AI an assistant, not a replacement, for creators.

The highlight was Firefly Image Model 5, now in public beta, which promises sharper 4MP native resolution and more photorealistic results for prompt-based editing. Adobe also unveiled AI assistants across apps like Photoshop, Express, and Firefly, designed to let users describe what they want in plain language and refine it using familiar creative tools -- essentially bringing a conversational layer to the design process.

What stood out this year was the openness. Adobe is integrating models from Google, OpenAI, Runway, ElevenLabs, and Topaz Labs, giving users the flexibility to choose how they create. For professionals, new features like Generative Upscale, AI Object Mask, and Assisted Culling make bulk editing, video cleanup, and image selection faster without losing creative control.

On the enterprise side, GenStudio continues to evolve as a full content supply chain platform, now enhanced with generative tools and integrations with Amazon Ads, LinkedIn, and TikTok. Meanwhile, Firefly Foundry gives brands the option to train custom AI models in their own visual language -- a move that could redefine how companies scale on-brand creative output.

Adobe's message this year was clear: AI isn't here to do the work for you; it's here to help you move faster, iterate smarter, and stay in control of your craft.
[9]
Adobe's personalised AI generators could change creative software forever
Firefly Foundry will let businesses create branded images, video, vectors, and 3D models. Adobe has been rolling out masses of AI-powered features in programs like Photoshop and Premiere Pro over the past couple of years, but the news at the Adobe MAX 2025 conference in LA today goes beyond just more of the same. The software giant is moving beyond the one-size-fits-all approach to generative AI and putting personalisation at the heart of its approach. It's launching the option of custom AI models for both brands and individual creators, and it's adding agentic AI assistants that can read users' social media and propose ideas for things to post. Put together, the changes are geared towards creating a much more versatile and tailored ecosystem of creative software.

For enterprise users, Adobe has announced the launch of Adobe Firefly Foundry. The company will work directly with businesses to create tailored generative AI models unique to their brand. Trained on entire catalogs of existing IP, these "deeply tuned" proprietary Adobe Firefly Foundry models will be built on top of existing Firefly models and will be able to generate images, video, audio, vectors, and even 3D based on brands' own content. Firefly Foundry is intended to address the challenges in achieving on-brand consistency across materials generated by AI. Adobe says it will allow brands to scale on-brand content production, create new customer experiences, and extend their IP. While other AI companies provide customisable base models, Adobe has a possible advantage in that it believes its models are commercially safe because they were trained on licensed material.

Adobe's also rolling out customisable models for creators. These are simpler, working directly in the Firefly app and Firefly Boards. The company says creators can "easily personalise their own AI models to generate entire series of assets with visual consistency in their own, unique style". There's a waitlist for early access to the private beta.

Also at Adobe MAX LA, Adobe showcased new conversational AI assistants that will connect to users' social media accounts and give them ideas for things to post (among other things). Dubbed Project Moonlight, the new assistants will be powered by agentic AI to provide a conversational interface that connects across Adobe apps to help users make whatever assets they need. The idea is that users describe in their own words what they want to accomplish, or how they want something to look and feel, and the AI assistants will help them achieve that. Creators will be able to ask their AI assistant for personalised advice and suggestions. It seems that the models will also draw insights from creators' social channels, picking up on content that has done well, with the aim of helping users to brainstorm ideas and create new content faster.

Adobe's also expanding its new strategy of adding third-party AI models to its software, including models from Black Forest Labs, Google, Luma AI, OpenAI and Runway, which will be integrated directly into the Adobe platform as they're released. Today, Adobe announced the addition of new partners ElevenLabs and Topaz Labs along with more models from existing partners. Generative Upscale in Photoshop now uses Topaz Labs' technology to upscale low-resolution images to 4K.
Other new AI tools include AI Object Mask in Premiere (public beta) to help identify and isolate people and objects in video frames, and Assisted Culling in Lightroom (public beta) to help photographers quickly identify the best images in large collections of photos. There's also a new proprietary AI model in Firefly: Image Model 5 (public beta). It can generate images in native 4MP resolution without upscaling, and provides improved realism in lighting and texture. The new model also adds much more powerful image editing capabilities, with a new Prompt to Edit tool that lets users describe how they want to edit an image. Adobe says Layered Image Editing is in development for precise, context-aware compositing that keeps changes coherent.
[10]
Adobe and Google Cloud partner to integrate AI models into creative apps By Investing.com
LOS ANGELES - Adobe (NASDAQ:ADBE), a prominent software industry player with an impressive 89% gross profit margin and market capitalization of $152 billion, and Google Cloud announced an expanded strategic partnership to integrate Google's advanced AI models into Adobe's creative applications, according to a press release statement issued at the Adobe MAX conference. According to InvestingPro analysis, Adobe currently trades below its Fair Value, suggesting potential upside for investors.

Under the partnership, Adobe customers will gain access to Google's AI models including Gemini, Veo, and Imagen directly within Adobe applications such as Firefly, Photoshop, Express, and Premiere. The integration aims to help users generate higher-quality images and videos with greater precision. Enterprise customers will also be able to customize Google's AI models through Adobe Firefly Foundry using their proprietary data to create brand-specific content at scale. Google Cloud's Vertex AI platform will support this customization while providing data commitments that customer information will not be used to train Google's foundation models.

"Our partnership with Google Cloud brings together Adobe's creative DNA and Google's AI models to empower creators and brands to push the boundaries of what's possible," said Shantanu Narayen, chair and chief executive officer of Adobe, in the press release. Thomas Kurian, chief executive officer of Google Cloud, added that the integration gives "everyone, from creators and creative professionals to large global brands, the AI tools and platforms they need to dramatically speed up content creation."

The announcement follows Adobe's recent partnership with YouTube, which will bring Premiere's video editing tools to YouTube Shorts through a new creation space called Create for YouTube Shorts, coming soon to the Premiere mobile app. The companies did not disclose financial terms of the partnership or specific launch dates for the integrated features.

In other recent news, Adobe announced several AI-powered enhancements to its GenStudio platform, aimed at improving personalized content creation for businesses across various marketing channels. These additions, unveiled during Adobe MAX, include Firefly Design Intelligence and Firefly Creative Production for Enterprise. Additionally, Adobe has partnered with YouTube to integrate its Premiere mobile app with YouTube Shorts, allowing creators to access professional-level video editing tools directly within the platform. This new feature, "Create for YouTube Shorts," offers editing capabilities such as effects, transitions, and templates. Furthermore, Adobe introduced an AI Assistant in beta for Adobe Express, providing users with a conversational design experience to simplify content creation. The company also launched its Premiere video editing application for iPhone, offering mobile creators free access to professional editing tools, including multi-track timeline editing and 4K HDR support.

In a separate development, Morgan Stanley downgraded Adobe's stock from Overweight to Equal-weight, citing slower monetization of AI features as a concern. The downgrade also included a revised price target of $450, down from $520, reflecting the firm's view on Adobe's Digital Media annual recurring revenue growth.
[11]
Adobe to Integrate Google AI Models in Apps
Adobe is expanding its partnership with Google Cloud to offer Google's AI models in its apps. The latest models from Alphabet unit Google will be integrated directly into Adobe Firefly, Photoshop, Adobe Express, Premiere and GenStudio, the San Jose, Calif., software company said Tuesday. Adobe's enterprise customers will also be able to customize their own AI models that are tailored to customers' brands. Earlier Tuesday, Adobe also announced that it would collaborate with YouTube to offer new features to help customers edit and create videos. Adobe has been investing in AI with a goal of pivoting more toward helping customers generate content, rather than just editing it. Shares are down 19% this year, as investors are getting anxious about seeing the company monetize its AI efforts. The company made some progress in its fiscal third quarter, as AI-first annual recurring revenue surpassed $250 million, the original target the company had aimed to hit by the end of this year.
Adobe introduces a suite of AI-driven features across its Creative Cloud apps and Firefly platform, revolutionizing image editing, video production, and creative workflows.
Adobe has announced a significant update to its AI-powered creative tools at the annual Adobe MAX 2025 conference, introducing Firefly Image 5 and a host of new features across its Creative Cloud suite [1][2][3]. These advancements aim to revolutionize the way creators work with images, videos, and audio content.
The latest iteration of Adobe's image generation model, Firefly Image 5, brings substantial improvements to image quality and editing capabilities [1]. Key features include:
- Native image generation at resolutions of up to 4 megapixels, without the upscaling step required by the previous model [1][4]
- More photorealistic output, including better rendering of humans, lighting, and fine-detail texture [2][4]
- Layered and prompt-based editing, which treats objects in an image as layers that can be moved, resized, rotated, or adjusted with prompts without compromising the image's integrity [1][4]
One of the most significant additions is the Prompt to Edit feature, which allows users to make complex edits using natural language commands [2]. This tool simplifies the editing process, making it accessible to users without extensive technical knowledge of photo editing software.

Adobe is expanding its ecosystem by integrating AI models from other tech giants and specialized AI labs [1][4]:
- Generative Fill in Photoshop can now use Google Gemini 2.5 Flash Image and Black Forest Labs FLUX.1 Kontext alongside Adobe's own Firefly models [4]
- Generative Upscale in Photoshop can tap Topaz Labs' technology to upscale low-resolution images to 4K [4]
- The Firefly website supports partner models from labs including OpenAI, Google, Runway, ElevenLabs, and Topaz Labs [5]
Adobe is also introducing AI-powered tools for video and audio content creation [3][5]:
- Generate Soundtrack (public beta) creates original, licensed instrumental tracks that automatically sync with video footage [5]
- Generate Speech (public beta) turns text into realistic voiceovers, using models that include ElevenLabs' [5]
- AI Object Mask in Premiere Pro (public beta) automatically identifies and isolates people and objects in video frames [4]
- A new web-based Firefly video editor with a multitrack timeline is in private beta [5]
To cater to professional and enterprise users, Adobe is launching:
- Firefly Foundry, which lets businesses train private AI models on proprietary data for consistent, brand-safe output [5]
- Firefly Custom Models (private beta), which let creators generate visuals in their own style for consistent branding [5]
- New GenStudio tools for enterprise-scale AI content creation [5]
Adobe is integrating conversational AI assistants across its ecosystem to streamline workflows:
- An AI Assistant in Adobe Express (public beta) takes users from an idea to a finished project in minutes [5]
- An AI Assistant in Photoshop for the web (private beta) automates tedious editing tasks while keeping the user in control [4]
- Project Moonlight (private beta) connects assistants across Adobe apps and draws on creators' social insights to suggest ideas [5]
Adobe has partnered with YouTube to bring Premiere Pro tools to short-form video creators, introducing 'Create for YouTube Shorts' in the Premiere mobile app [5].

As Adobe continues to innovate in the AI space, these updates represent a significant step forward in making advanced creative tools more accessible and efficient for professionals and enthusiasts alike. The integration of third-party models and the focus on natural language interactions demonstrate Adobe's commitment to staying at the forefront of AI-powered creativity.
Summarized by Navi