Curated by THEOUTPOST
On Thu, 24 Apr, 4:04 PM UTC
8 Sources
[1]
I Demoed Adobe's Coolest New AI and Creative Tools. Here Are My Top Five Favorites
Katie is a UK-based news reporter and features writer. Officially, she is CNET's European correspondent, covering tech policy and Big Tech in the EU and UK. Unofficially, she serves as CNET's Taylor Swift correspondent. You can also find her writing about tech for good, ethics and human rights, the climate crisis, robots, travel and digital culture. She was once described as a "living synth" by London's Evening Standard for having a microchip injected into her hand.

If you consider yourself at all arty and creative, you're pretty much guaranteed to have experimented with Adobe's software tools at one time or another. I'm more of a casual creative, mostly using Lightroom and occasionally dipping into Photoshop and Premiere. But at my first Adobe Max Creative Conference in London this week, I was impressed by how the many new features and tools the company unveiled across its entire suite of products - from Firefly generative AI tools to Creative Cloud - make it easier than ever for people like me to tap into our artistic side.

I'm not simply talking about the ability to use generative AI to do the heavy lifting of making art for us. Adobe is using AI to demystify the more technical aspects of its platforms, cut the time and effort needed for repetitive and mundane tasks, and ultimately pick up the slack where our own lack of forethought or finesse has let us down.

The debate about the role of AI in the creative process continues, and isn't likely to be resolved anytime soon. Adobe's approach is to support the creative professionals who rely on its software through the AI transition as well as possible. Deepa Subramaniam, vice president of Creative Cloud, summed up the company's philosophy in the keynote: "If you use generative AI, you want it to complement, not replace, your skills and experience."

At the conference, I witnessed many demos of Adobe's latest creative tools, and even got to try some out for myself.
Here are the ones that stood out to me.

The cutest demo of the day at Adobe Max came courtesy of Firefly. Adobe released its latest Firefly generative AI models at the event, including the Firefly Video Model. The tool can be used to create a range of different video types, but the demo that caught my eye was a claymation-style video generated from a combination of an image and a text prompt. The prompt read: "claymation character pushing a wheelbarrow through a Tuscan village," while the image already had a path for the figure to follow. Sure enough, in the generated video a little clay guy with a broad-rimmed hat entered the scene and pushed his wheelbarrow along the snaking cobblestone path. Adorable - and genuinely impressive. It elicited gasps from the crowd, and I overheard people talking about it throughout the day.

When I got the opportunity to try out the model for myself, I asked Firefly to make me a video of a giraffe wearing a fruit hat in Scotland. It struggled with the Scotland aspect of the prompt, but I was more than satisfied with how it interpreted the rest of the command.

For some time now, you've been able to use Firefly to generate a new image based on the same structure or arrangement as a reference image. Now Adobe has brought this capability directly into Photoshop. In one example I saw, a photo of a sweeping horseshoe-shaped road was used as a reference along with the prompt "dark stormy winter snow." Photoshop generated a series of different romantic scenes, all of which maintained the structural integrity of the road from the original image. In a second example, I saw how a child's drawing of a monster could be imported into Photoshop and used as the basis to generate a cartoon-style version of that same monster. What a way to give your kid's scrappy doodles a new lease of life.
If you've ever tried to remove the background from an image with lots of fine details in Photoshop, you'll appreciate how tricksy and time-consuming the process can be. Trying to pick around certain objects can be like chipping away at a block of ice with a toothpick. But no longer. Thanks to the AI behind its new Select Details tool, which reads the photo for you, Photoshop is now significantly better at distinguishing the fine details of an image.

In a demo, I saw how, with a single click, it's now possible to completely isolate a tennis racket, strings and all, from its background. Each string had been perfectly defined, and each square in between cut away with precision. The same effect was successfully applied to a fish caught in a net. In another example, it was able to delineate the shape of a woman wearing a black turtleneck from a black background. To the naked eye, it was almost impossible to see where the woman's turtleneck ended and the background began, but Photoshop was able to segment the image perfectly.

Adobe has also reimagined Photoshop's Actions panel so that you now have access to 1,000 different actions - instructions such as "blur background" or "soft black and white" - that transform your image at the touch of a button. My favorite thing about this new feature is that Photoshop takes the guesswork out of which actions you might want to use by analyzing your image locally with a machine learning model and putting forward a list of suggestions. As someone who has felt intimidated by the vast array of options in Photoshop, I find the new Actions panel makes the software vastly more accessible. The actions are also searchable, and I like the fact that you don't necessarily need to know the correct Photoshop terminology to achieve your desired results.
For example, in the demo I saw, you could select the phrase "make the subject pop," which picked out the retro Italian car in the foreground of the image and boosted the contrast and saturation so that it stood out even more. Making something "pop" is not a technical photography term, but Photoshop is able to interpret natural language, making this the closest thing yet to an AI assistant embedded in the software.

If you've ever been editing a video and realized the clip you've filmed isn't quite long enough, you'll love Generative Extend in Premiere Pro. The tool, which has been available in beta for a while, is generally available from today, and it can generate an extra few seconds of footage based on the clip you've uploaded. "It's super seamless, and it's up to 4K now," said Eric Snowden, senior vice president of design at Adobe, speaking in a briefing, where he listed it as one of his favorite new features across Creative Cloud. The tool can even generate sound, as long as it's not music or speech. Examples I saw included extended footage of flags waving in the wind, and a clip of a woman continuing to smile and nod for two seconds after the real recording of her had ended. For producers who are just missing those extra few frames of B-roll, this feature will be an absolute lifesaver.
[2]
Your go-to Adobe apps just got AI features that are actually worth using
Summary: Adobe Max London 2025 highlights AI upgrades across Creative Cloud. Photoshop, Illustrator, Express, Lightroom, and Premiere Pro get smarter, faster tools to boost productivity, and a new Firefly app blends Adobe's models with those of partners like Google and OpenAI for all-in-one creative workflows.

The last few years have been all about AI. Companies around the world have been integrating AI into their tools left and right, making sure to hop on the bandwagon before it's too late. While they're certainly not wrong for doing so, most AI features are, frankly, not worth using. Adobe's annual creative conference, Adobe Max London 2025, is currently underway. Similar to the one held toward the end of 2024, this year's event is shaping up to be focused on AI-powered features. This time around, though, the AI features actually feel polished and worth using.

Photoshop, Illustrator, Express, Lightroom, and Premiere Pro get exciting upgrades

At Adobe Max London 2025, Adobe announced multiple AI-powered features across most of its Creative Cloud software.

Adobe Photoshop: Here are the features that everyone's favorite editing app, Photoshop, got:
- Composition Reference in Text to Image ensures that any assets you generate follow the same structure and visual arrangement as the reference image you provide.
- Select Details allows you to quickly select things like hair, facial features, and clothing in an image, instead of needing to manually outline each element with painstaking precision.
- Adjust Colors lets you instantly adjust color hue, saturation, and lightness in images.

Adobe claims that the upgrades to Photoshop help "deliver a combination of greater speed, smarter suggestions, and tools for working with precise details."
Adobe Illustrator: Adobe Illustrator gets a speed boost as well, with its menu access becoming more responsive. The company also claims that the tool's most popular effects are now up to five times faster. Additionally, it gets the following two Adobe Firefly-powered features:
- Generative Shape Fill lets you fill shapes with realistic vector graphics.
- Text to Pattern helps you create unique vector patterns by entering a descriptive prompt.

Adobe Express: Adobe Express, an all-in-one content creation app, also gets a few updates I'm particularly excited about:
- The Adobe Firefly-powered Generate Video tool, which allows you to create unique, commercially safe videos from a descriptive prompt.
- Dynamic Animation, which lets you add "realistic motion effects" to animate static images.
- Generate Similar, which you can use to create a cohesive set of images that follow the same theme, color palette, or visual style as your original.
- Enhance Speech, which removes distracting background noise from videos.
- Clip Maker, which converts long-form videos like podcasts, interviews, or demos into short, snappy clips you can share across your social platforms.

Adobe Lightroom: Both the Adobe Lightroom mobile and desktop apps are also getting new tools for editing and sharing:
- Select Landscape helps photographers automatically detect and create masks for common landscape elements, including plants, water, sky, and more.
- Quick Actions get a few upgrades, allowing you to retouch group photos with more precision and control.

Adobe Premiere Pro: Finally, Adobe Premiere Pro gets tools that make generating, editing, and searching for footage within your videos that much easier:
- Generative Extend, a Firefly-powered feature, allows you to extend a video by a few extra frames to cover gaps in footage and smooth out transitions. The feature was previously available in beta and is now generally available, with support for 4K and vertical video.
- Media Intelligence can search terabytes of footage within seconds to help you find relevant clips.
- Caption Translation allows you to translate captions into up to 27 languages.

Adobe also announced a new Firefly app, which combines Adobe's own Firefly models with "a choice of models from partners," including Google and OpenAI. This way, you can do everything from content generation to image and video production to creating patterns, all within one app. The company also announced that the Firefly Video model is now generally available through Firefly on the web, and that the new Text to Vector feature, powered by the Adobe Vector model, is also now generally available.

All in all, it's shaping up to be an exciting time for Adobe software, and I can only imagine it getting even better with time.
[3]
Adobe Max London 2025 live - all the new features coming to Photoshop, Firefly, Premiere Pro and more
Welcome to our liveblog for Adobe Max London 2025. The 'creativity conference', as Adobe calls it, is where top designers and photographers show us how they're using the company's latest tools. But it's also where Adobe reveals the new features it's bringing to the likes of Photoshop, Firefly, Lightroom and more - and that's what we'll be focusing on in this live report direct from the show.

The Adobe Max London 2025 keynote kicks off at 5am ET / 10am BST / 7pm ACT. You can tune into the livestream on Adobe's website and also see demos from the show floor on the Adobe Live YouTube channel. But we're also at the show in London and will be bringing you all of the news as it happens here.

Given Adobe has been racing to add AI features to its apps to compete with the likes of ChatGPT, Midjourney and others, we're expecting that to be a big theme of the London edition of Adobe Max - which is a forerunner of the main Max show in LA that kicks off on October 28. But there will also likely be a sprinkling of non-AI features and tools for everything from Photoshop to Illustrator - so whichever Creative Cloud app you use, you can follow all of the announcements here with us live from Adobe Max London 2025...
[4]
The 5 biggest new Photoshop, Firefly and Premiere Pro tools that were announced at Adobe Max London 2025
Adobe's new Content Authenticity app can verify creators' digital work.

Adobe Max 2025 is currently being hosted in London, where the creative software giant has revealed the latest round of updates to its key apps, including Firefly, Photoshop and Premiere Pro. As we expected, almost every improvement is powered by AI, yet Adobe is also keen to point out that these tools are designed to aid human creativity, not replace it. We'll see.

Adobe hopes to reinforce this sentiment with a new Content Authenticity app that should make it easier for creators to gain proper attribution for their digital work. It's a bit of a minefield when you start thinking about all of the permutations, but kudos to Adobe for being at the forefront of protecting creators in this ever-evolving space.

There's a raft of new features and tools to cover, from a new collaborative moodboard app to smarter Photoshop color adjustments, and we've compiled the top five changes that you need to know about, below.

Firefly has seemingly enjoyed the bulk of Adobe's advances, with the latest generative AI model promising more lifelike image generation smarts and faster output. The 'commercially safe' Firefly Model 4 is free to use for those with Adobe subscriptions and, as always with Firefly tools, is by extension available in Adobe's apps such as Photoshop, meaning these improved generative powers can enhance the editing experience. Firefly Model 4 Ultra is a credit-based model designed for further enhancements of images created using Model 4.

Adobe also showcased the first commercially safe AI video model, now generally available through the Firefly web app, along with a new text-to-vector capability for creating fully editable vector-based artwork. For example, users can select one of their images as the opening and end keyframe, and use a word prompt to generate a video effect to bring that image to life.
All the tools we were shown during Adobe Max 2025 are available in Adobe's apps, some of which are now in public beta. We were also told that a new Firefly mobile app is coming soon, though the launch date has yet to be confirmed.

During Adobe's Max 2025 presentation, we were also shown the impressive capabilities of an all-new Adobe tool: Firefly Boards. Firefly Boards is an 'AI-first' surface designed for moodboarding, brainstorming and exploring creative concepts, ideal for collaborative projects. Picture this: multiple generated images displayed side by side on a board, moved and grouped, with the possibility of taking those ideas and aesthetic styles into production using Adobe's other tools such as Photoshop, plus non-Adobe models such as OpenAI's. The scope for what is possible through Firefly in general, now made easier with the Boards app that can utilize multiple AI image generators for the same project, is vast. We're keen to take Boards for a spin.

Adobe Photoshop receives a number of refined tools that should speed up edits that could otherwise be a time sink. Adobe says 'Select Details' makes it faster and more intuitive to select things like hair, facial features and clothing; 'Adjust Colors' simplifies the process of adjusting color hue, saturation and lightness in images for seamless, instant color adjustments; while a reimagined Actions panel (beta) delivers smarter workflow suggestions based on the user's own natural language. Again, during a demo, the speed and accuracy of certain tools was clear to see. The remove background feature was able to isolate a fish in a net with remarkable precision, removing the ocean backdrop while keeping every strand of the net. Overall, the majority of Photoshop improvements are realized because of the improved generative powers of the latest Firefly image model, which can be directly accessed through Photoshop.
As Adobe further utilizes AI tools in its creative apps, authenticity is an increasing concern for photographers and viewers alike. That's why the ability to verify images is all the more vital, and why this next Adobe announcement is most welcome. Adobe Content Credentials, an industry-recognized image verification standard that adds a digital signature to images to verify ownership and authenticity, is now available in a free Adobe Content Authenticity app, launched in public beta.

Through the app, creators can attach info about themselves, such as their LinkedIn and social media accounts, plus image authenticity details, including the date, time, place and edits of said image. An invisible watermark is added to the image, and the info is easily seen with a new Chrome browser extension installed. Furthermore, through the app, creators will be able to attach their preferences for Adobe's use of their content, including a Generative AI Training and Usage Preference. This should put control back with creators, although this is also a bit of a minefield - other generative AI models do not currently adhere to the same practices.

The upgrades to Premiere Pro are more restrained, with the headline announcement being that the Firefly-powered Generative Extend is now generally available. Not only is the powerful tool now available to all users, but it also now supports 4K and vertical video. Elsewhere, 'Media Intelligence' helps editors find relevant clips in seconds from vast amounts of footage, plus Caption Translation can translate captions into up to 27 languages. These tools are powered by, you guessed it: Adobe Firefly.
[5]
'It's like magic and everything just works': We spoke to Adobe's AI maestro to find out what's new with Firefly and how it levels up creativity for all
Alexandru Costin is Vice President, Generative AI and Sensei at Adobe. There was no getting away from Firefly at this year's Adobe Max London. Already infused across the Creative Cloud suite, the AI image and video generator has been massively upgraded with new tools and features. Ahead of the event, we sat down with Costin to explore what's new with Firefly, why stories matter when using the best AI tools, and how professionals can use it to enhance creativity across the board.

At Max, we have the next generation of our image model, two versions of it. We have a vector model, we have the video model. So, a lot of progress on the model from Adobe: commercially safe, high quality, amazing human rendering, a lot of control and a great style engine, et cetera. We are also introducing third-party model integrations. Our customers told us that they want to stay in our tools, in our workflows. They are still using other models for ideation purposes, or for different personalities. So, we're announcing OpenAI's GPT image integration and Google's Imagen and Veo 2 in Firefly, and Flux integration in Firefly Boards.

The third big announcement is Firefly Boards, a new capability in the Firefly web application. We look at it as an all-in-one platform for next-generation creatives to ideate, create and produce production content. Firefly Boards is an infinite canvas that enables team collaboration, real-time collaboration and commenting, but also deep gen AI features stepping in, into all of these first-party and third-party models, with new capabilities for remixing images.

It's not easy. We've been working on the project concept for, like, a year. Actually, that underlying technology we've been working on for many years: real-time collaboration with deep integration, with storage, and innovation in gen AI user experiences, remixing, auto-describing images to create the prompts for you.
There's a lot of deep technology that went into it. It looks like magic, and is very easy [to use]. We hope it's that easy. Our goal is to absorb the complexity, so for customers it's like magic, and everything just works.

My favorite feature is the integration between image, video, and the rest of the Adobe products. We're trying to build workflows where customers that have an intent in mind, and want to paint the picture that's in their mind, can use these tools in a really connected way without having to jump through so many hoops to tell their story. Firefly Image 4 offers amazing photorealism, human rendering quality and prompt understanding. You iterate fast. With Image 4 Ultra, which is our premium model, you can render your image with additional details, and we can take it into the Firefly video model as a keyframe and create a video from that whole image. Then you can take that video into Adobe Express and make it like an animated banner, add text, add fonts. In Creative Cloud, we have a lot of capabilities that exist already. We're bringing gen AI inside those workflows, either in Firefly on the web, or directly as an API integration. But for me, I think the magic is having all of this accessible in an easy way.

The Photoshop team is also working on an agentic interface. They call it a new Actions panel. You type in what you want. We have 1,000 high-quality actions we've curated for you. There are all these tools in Photoshop that are sometimes hard to discover if you're not an expert, but we're gonna just bring them and apply them for you. I mean, you will learn along the way, but you don't need to know everything before you start. Not only are we helping you achieve your goal, we're also teaching you the ins and outs of Photoshop as we go.

It is. It's too powerful, to some extent. It has so many controls, it might be intimidating, but with the new Actions panel, we want to take a big chunk of that entry barrier away.
Everybody will benefit from this technology in different ways. For creative professionals, it will basically remove some of the tedium, so they can focus on creativity. And with things like Firefly Boards, they will be able to work with teams and clients much better. The client can upload some stylistic ideas into Boards, and then you can take them and integrate them very fast into your professional workload. For consumers, people who want to spend seconds to create something, with Firefly you just type in the prompt and we do it for you. It's a great capability. In the middle, there are the folks learning in their careers: aspiring creative professionals, next-generation creatives. For them, we want to provide both gen AI capabilities and a bridge towards the existing pixel-perfect tools that we have at Adobe. Because we think a mix of those two worlds is the best mix that next-generation creatives need to be armed with.

For me, a big opportunity is better understanding of humans: prompt understanding, agentic [workflows], having a creative partner to bounce ideas off of. Another thing we're announcing is the [upcoming] Firefly mobile app. This is a companion app that can use many of the Firefly app capabilities: generate text, generate video, et cetera. But also, because it's on mobile, you have access to the camera, you have a microphone, so there are many new opportunities to make these interactions easier. So, we're looking into that. We do think next-generation creatives are a big target market for us because we want to give them the tools of the trade.

For us, customers are why we get up in the morning every day. They are telling us what they need, and they told us they want more quality, better humans, more control, better stylization. That's what's behind the image model updates. We just want to make them more usable in more workflows for actual production use cases.
Because our model is uniquely positioned to be safe for commercial use, we want customers to use it everywhere. Video is also growing, and much of our customer base doesn't know how to use the video product. So, making video creation more accessible is another great accelerant for creativity. We want to offer a larger population of people the tools to tap into video and be able to start achieving their goals there. And of course, inside products like Premiere Pro, we're continuing to integrate deeper, more advanced features. A couple of weeks ago at NAB, we launched Generative Extend. It won one of the awards. Gen Extend is a 4K extension, enabling professional videographers to basically extend clips so they don't have to reshoot. What motivates us is helping our customers tell stories, better stories, more diverse stories, and be successful in their careers.

How do creatives differentiate today? They're all using Photoshop, but they do find ways to differentiate through human creativity and engineering, because, in reality, gen AI is a tool designed, at least from an Adobe perspective, to be of service to the creative community, and we want to give them a more powerful tool that should help them level up their craft. They're describing it as going from being the person editing to being a creative director. All of our customers can become directors of these gen AI tools to help them tell better stories, tell stories faster, et cetera. So, we think the differentiation will still be in the creativity of the human using the tool. And we're seeing so much innovation. We're seeing people using these technologies in ways we haven't even thought about, which is very exciting, always. Mixing them in novel ways. Because that's how you differentiate. And we do think there will always be many ways to express somebody's creativity.
We think creativity comes in a variety of ways, and there are different tools creative people will use and mix together to tell better stories and change culture.
[6]
Just announced! These are all the new Photoshop tools you need to know about
Adobe MAX London is underway, and that means Photoshop news. Each Photoshop update introduces new tools and workflow improvements, which is what's kept the software up there at the top of our lists of the best photo editing software, of course. Many recent updates have focused on AI, and today's is no exception. New generative AI tools powered by Adobe Firefly include Composition Reference, while it's now easier to select details and adjust colours. Let's dive in and see what's new.

First up, Composition Reference in Text-to-Image builds on Structure Reference for AI image generation with more compositional control. It allows users to take the same structure and visual arrangement from a reference and apply it to the AI-generated output, something we've seen in other AI image generators.

We already had Select Subject and Select Object; now we have Select Details, which is intended to make it faster and more intuitive to select things like hair, facial features and clothing. It looks like it should help with the kinds of things that can often be hard to mask using traditional selection tools.

The new Adjust Colors tool sounds more like something Photoshop already has, but it makes the process easier. Until now, to adjust colours, you would go to layer adjustments. That took a few clicks, and it could get messy if you wanted to target a single layer in a complex composition. Now there's an Adjust Colors option in that sometimes helpful, sometimes irritating Contextual Task Bar that hovers under your canvas. Click on that for a simpler way to adjust hue, saturation and lightness, all in a panel on the main workspace. It automatically samples colours from the image you're working on to allow quick, targeted adjustments.

Speaking of the Contextual Task Bar, the evolution of that concept is the reimagined Actions Panel. This provides more suggestions for the next steps you might want to take in your workflow.
In theory, this will provide faster access to regular actions, since it seems that it learns from your own workflow. Adobe also suggests that it could even provide inspiration for creative directions to follow. This is really just the start for this concept. As I wrote a few weeks ago when I suggested that Adobe wants to turn Photoshop into an AI chatbot, the software giant sees this as the foundation of its plan to create an AI creative agent in its apps, serving as a kind of assistant to the user. The idea is that the user will be able to put requests to the agent the way they would to a chatbot like ChatGPT.

Finally, this isn't being billed as a headline new feature, but Adobe's principal director evangelist Paul Trani pointed it out in a demo, and it's a cool addition that should appeal to more old-school Photoshop users who aren't so impressed with the AI stuff. The Gradient Tool and Gradient Fill interpolation options now include stripes in addition to perceptual, linear, classic and smooth. It's a small feature, but it should make it much quicker and easier to add a striped background to compositions.

And to tie in with all the excitement, Adobe is running a 50% discount on Creative Cloud! Find out more on Adobe CC's website.
[7]
The 4 biggest announcements from Adobe MAX London
Adobe MAX London is the place to be right now, as Adobe is announcing a slew of updates to its tools. With the amount of news coming out of the conference, it can be hard to keep up, but don't worry, because we've rounded up the latest and greatest news for you right here. From hot new Photoshop tools to potentially controversial additions to Adobe's AI tool, Adobe Firefly, here's the news you need to know about. We'll be updating this page as and when more announcements are made, so stay tuned for more. If you're not already a subscriber to Creative Cloud and are in the UK, then don't miss the 50% off Adobe Creative Cloud offer, which is live right now.

New tools in Photoshop announced at Adobe MAX London include Composition Reference, which enables users to take the same structure and visual arrangement from a reference and apply it to AI-generated output. You can also now Select Details in PS rather than just Subject and Object, making it easier to select things like hair and clothing. There's also Adjust Colors, which enables users to adjust hue, saturation and lightness on a panel from the main workspace.

Adobe has announced two new Firefly AI models - Firefly Image Model 4 and Firefly Image Model 4 Ultra (both very catchy names, I'm sure you'll agree). The new models apparently offer more realistic output and better resolution (up to 2K). Adobe also says that these new models offer more control. The plain Model 4 is supposed to be better for everyday creative tasks, while the Ultra version is better for more detailed and realistic projects. The credit/pricing structure has not been announced yet.

On its initial launch, Adobe billed Firefly as the first 'commercially safe' AI image generator. Some even called it ethical. So today's announcement that models from the likes of Google, OpenAI, Runway, Pika and Ideogram can now be used in Firefly is bound to cause a stir.
Adobe is emphasising the benefits of choice and flexibility, but whether this announcement will go down well with creatives remains to be seen.

Adobe's new Content Authenticity web app aims to help artists know and record exactly where an image has come from by enabling them to 'digitally sign' their work with an identity tag, which allows people to see how that content can be used. Our editor, digital arts and 3D, Ian Dean, sat down with Andy Parsons, the senior director of content authenticity at Adobe. You can read their conversation over on our Content Authenticity app explainer.

And to tie in with all the announcements, Adobe is running a 50% discount on Creative Cloud All Apps! Find out more on Adobe CC's website.
"Everything we do is to benefit creativity" - love it or hate it, Adobe has a bold vision for AI and creative tools
Today at Adobe MAX London I'll get to see and hear just what is coming to Photoshop, Illustrator, Fresco and other digital art software, but the through-line linking everything is more AI. Love it or hate it, Adobe isn't holding back on its push for these new integrated tools.

At the forefront of this move is Adobe's Firefly web app and generative AI model, a suite of tools designed to streamline ideation, enhance content creation, and push the boundaries of creative production. Adobe is well aware of the issues around greater use of AI in art and design, hence its investment in and launch of the Content Authenticity web app, which enables artists to 'track' and display how an image was made and by whom.

To learn more, I sat down with Alexandru Costin, Adobe's vice president of generative AI, ahead of Adobe MAX London for a deep dive into the company's approach to AI, its new tools, and how Adobe envisions the future of creativity.

"Firefly is our all-in-one platform for creativity," Alexandru explains enthusiastically, revealing that the launch of Firefly Model 4 Ultra will bring even more powerful tools to Adobe's suite. Firefly Model 4 Ultra, an advanced version of the generative model, promises improved human rendering and stylisation, as well as unlimited potential for image generation. In a demo the night before MAX London I got to see Firefly Model 4 Ultra in action, and yes, it does look more photoreal than Adobe's previous AI models, but there remains a level of stylisation to elements like hair that breaks the illusion that these images are of real people. Still, it's impressive.

The addition of new features like SVG export in the Firefly vector module, and interactive lower-resolution modes for rapid ideation, shows Adobe's commitment to offering both quality and speed for creatives. "We're constantly evolving our platform to meet the needs of next-gen creatives," Alexandru says.
This includes Firefly's integration with multiple AI models from other providers, such as Google's Imagen, Flux, Runway and others, responding to customer demand for diversity in AI options while maintaining Adobe's commitment to data privacy. This move to allow third-party AI models into the Firefly workflow will raise eyebrows: Adobe has gone to great lengths to develop an 'ethical' AI based on opting in, and some initial partners such as Flux make the same commitment (though opaquely), while the inclusion of OpenAI is a clear point of contention.

"We're offering the same level of privacy protection for our users as we've always done with Firefly. This means we don't train on your prompts or images. Our partnerships with other providers like OpenAI ensure that when you use their models through Adobe, your content remains your own," says Alexandru.

One of the most exciting developments coming to Firefly is the launch of Firefly Boards (formerly Project Concept AI), a collaborative infinite canvas where teams can work in real time to remix images, generate new content, and combine elements from various AI models. This new feature aims to simplify the creative workflow by allowing teams to work together seamlessly within Adobe's ecosystem. Boards, in its Project Concept stage, is something I saw at MAX Miami last year, and it impressed, mostly because it finally enables greater control over the outcome of a generative AI image.

But what happens when you merge content from different AI models? You can, for example, have images from Firefly, Flux and Imagen in one 'board' and remix them into one new generative AI creation. Alexandru explains that when users create a new image by remixing elements from different sources, each element is traced back to its origin. The process is transparent: the system generates a detailed prompt based on the selected elements, preserving the integrity of each model used and ensuring clear attribution through Adobe's Content Credentials.
The focus on transparency is key, especially when it comes to blending various generative AI models. Images are clearly tagged with the model used, and the commercial safety of the content is paramount to Adobe, explains Alexandru.

In line with its mission to make creativity accessible to everyone, Adobe is taking steps to ensure that AI tools are available on all devices. Firefly Mobile, a new app announced today, is set to bring Firefly's generative AI tools to mobile phones. The idea is to further "democratise" access to generative AI tools for creators on the go. "Our goal is to make creative power available to everyone, whether you're working on a desktop or on your mobile phone," says Alexandru. "This is just the beginning. With AI-powered voice input, mobile access to Firefly, and a mobile website, we're enhancing the creative experience for users everywhere."

When it comes to Photoshop, the integration of AI represents a major shift in user experience. Alexandru talks about a new feature that will bring natural language understanding to Photoshop, enabling users to interact with the program via voice or text prompts. This is part of Adobe's broader strategy to redesign how artists engage with its creative tools, making the software more intuitive and accessible to beginners while still catering to advanced professionals. "We've always believed in making tools accessible, whether you're a student, a professional, or a hobbyist. The introduction of AI will allow us to further reduce the learning curve, making complex tools like Photoshop more approachable," says Alexandru.

The new Actions Panel, for instance, lets users see a live preview of actions and apply them to their documents, helping them learn Photoshop's advanced features quickly. This kind of "agentic AI" could open up Adobe's apps to more people, or help those of us stuck in our ways find new tools and workflows.
Cynically, we could say Adobe is turning Photoshop into an AI chatbot, but behind that lies an interesting new way of using software. Alexandru emphasises that Adobe's AI tools aren't just for professionals; they're designed with a wide variety of users in mind. From students and small business owners to creative professionals, Adobe is looking to serve the entire spectrum of creators, and this is where the friction happens: Adobe clearly sees a path to a broader user base and is trying to square the circle of being both a professional's creative toolbox and a casual user's set of approachable apps. With Firefly, Creative Cloud, and Adobe Express, Adobe is ensuring that its software and tools cater to everyone, whether they need pixel-perfect control or simpler, more accessible experiences. The big question is: will some be left behind or forced away as Adobe casts its net wider?

At the heart of Adobe's AI strategy is the belief that creativity should be accessible to all. Whether through its AI-powered tools, its focus on data privacy, or its commitment to user-centric design, Adobe is creating an ecosystem where anyone can express themselves creatively, no matter their background or skill level. Alexandru smiles and tells me that he believes "technology is cool", but technology alone "doesn't benefit anybody [...] everything we do is designed to make creativity work across consumer, student, SMB and professional [uses]. Everything we do is to benefit creativity".
Adobe showcases new AI-enhanced features across its Creative Cloud suite, including Photoshop, Firefly, and Premiere Pro, at Adobe Max London 2025.
Adobe has unveiled a range of new AI-powered features and tools across its Creative Cloud suite at the Adobe Max London 2025 conference. The company's focus on integrating artificial intelligence into its products aims to enhance creativity, streamline workflows, and make advanced editing capabilities more accessible to users of all skill levels.
At the heart of Adobe's AI push is Firefly, the company's generative AI platform. The latest iteration, Firefly Image Model 4, promises more lifelike image generation and faster output [1]. Adobe has also introduced Firefly Image Model 4 Ultra, a more advanced version geared toward detailed, realistic work [4].
One of the most exciting additions is the Firefly Video Model, which allows users to create unique, commercially safe videos from text prompts [2]. This tool can generate claymation-style videos and even extend existing footage to smooth out transitions [1][4].
Photoshop, Adobe's flagship image editing software, has received several AI-powered upgrades:

- Composition Reference, which applies the structure and visual arrangement of a reference image to AI-generated output
- Select Details, for fine-grained selections such as hair and clothing
- Adjust Colors, a panel for tweaking hue, saturation and lightness from the main workspace

These enhancements are designed to speed up complex editing tasks and make Photoshop more accessible to casual users [1].
Adobe's video editing software, Premiere Pro, has also benefited from AI integration.
Adobe introduced several new tools to complement its existing suite:

- Firefly Boards, a collaborative infinite canvas for remixing images and generating new content
- Firefly Mobile, which brings Firefly's generative AI tools to phones
- The Content Authenticity web app, which lets artists digitally sign their work and control how it can be used
Alexandru Costin, VP of Generative AI and Sensei at Adobe, emphasized that these AI tools are designed to aid human creativity, not replace it [3]. The company aims to remove tedious aspects of creative work, allowing professionals to focus on higher-level creative tasks [5].
Adobe's AI advancements cater to various user groups, from students and small business owners to creative professionals.
As AI continues to evolve, Adobe remains committed to integrating these technologies responsibly, focusing on commercial safety and creator attribution [1][4]. The company's efforts to balance innovation with user needs and ethical considerations position it at the forefront of the AI revolution in creative software.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved