Curated by THEOUTPOST
On Fri, 1 Nov, 12:12 AM UTC
2 Sources
[1]
Adobe's Promising Generative AI Tools Have Room to Grow for Photographers
When Adobe debuted the Firefly-powered Generative Remove tool in Adobe Lightroom and Adobe Camera Raw as a beta feature in May, it worked well much of the time. However, Generative Remove, now officially out of its beta period, has confusingly gotten worse in some situations.

For photographers not opposed to generative AI in their photo editing workflows, Generative Remove and other generative AI tools like Generative Fill and Generative Expand have become indispensable. They use AI to significantly speed up and improve image editing without taking control away from the photographer. But since the updates that followed Adobe MAX earlier this month, some users have found that the generative AI tools in Adobe Camera Raw, Lightroom, and Photoshop have taken steps backward. PetaPixel's testing has corroborated the complaints.

Adobe is aware of the issues and explains that, unlike traditional tools, those powered by technology like Firefly, which is constantly being fine-tuned behind the scenes, do not improve in every possible situation with every update. While a one-step-back, two-steps-forward situation is foreign to most photo editing applications, reality has changed in the age of AI.

"The quality has dropped to an unusable level. Textures that were rendered beautifully in previous versions now sometimes appear smudged or as if painted in with a brush. There are far more nonsensical fills -- like birds randomly appearing in hair or clothes, and extra hands in bizarre places when the image already has two!" writes RedFishBlack on the Adobe Photoshop forums. "It's like AI gone wrong. What happened to quality control? How does the AI actually get worse? This is beyond frustrating."

As the disgruntled photo editor adds, there is no simple way to roll back to an older version of the Firefly tools. Images are processed server-side, so there is not much available by way of user control.

RedFishBlack refers specifically to Generative Fill in Photoshop, but the problems extend to other tools, including Generative Remove, a tool tailor-made for helping photographers clean up photos and remove distractions.

In a separate Adobe Community post, a professional photographer says they use Generative Fill "thousands of times per day" to "repair" their images. "The generative fill was almost perfect in the previous version of Photoshop to complete this task. Since I updated to the newest version (26.0.0), I get very absurd results," the user explains. Since the update, Generative Fill has been adding objects, including a rabbit and letters on a person's face.

"The success rate has dropped to maybe 5-10% (or less!)," the photographer says. "Before the update, it was more like 90-95%." Even when they add a prompt to improve the results, they say they get "absurd" results. "Little hiccups are normal, but this ruins my workflow."

The two users who responded say they have the same issues: weird objects added to their images and massive problems with textures not matching the rest of an image.

Generative Remove and Generative Fill are technically different, although their use cases overlap. Adobe itself has advocated for using Generative Fill to remove objects from images in Photoshop, for example. Some of the issues with Generative Fill persist across different tools, too.
Among the most significant problems for a typical photographic workflow is that Adobe's generative AI struggles to match textures within an image and resists simply removing an object, seeming inclined instead to replace it with something else.

As its name suggests, Generative Remove generates new pixels using artificial intelligence. If pixels are removed from an image, they must be replaced with new ones. For photographers, the new pixels are nearly always meant to jibe with the background, making it look like a distraction was never there in the first place. If the pixels are too smooth, too noisy, or the wrong color, one distraction has just been replaced with a new one.

Generative Remove and Fill can be valuable when they work well because they significantly reduce the time a photographer must spend on laborious tasks. Replacing pixels by hand is hard to get right, and even when it works well, it takes an eternity. The promise of a couple of clicks saving as much as an hour or two is appealing for obvious reasons. At the moment, however, these generative AI tools, which had been speeding up photographers' workflows in recent months, are slowing them down thanks to strange, mismatched, and sometimes baffling results.

In response to a complaint on Reddit about Generative Remove in Lightroom adding objects rather than erasing them, Adobe employee Terry White, an excellent photographer and educator, says that users must be careful to paint over the entire object when using Generative Remove.

"For best results when using Gen Remove is to make sure you brush the object you're trying to remove completely including shadows and reflection. Any leftover fragments, no matter how small, will cause the AI to think it needs to attach a new object to that leftover piece. Causing it to replace instead of remove," White explains.

While this is good advice, overbrushing is not always enough. The original poster on Reddit suggests "there is still room for improvement." "I agree," White replies.

Adobe is listening to feedback and making tweaks, but the AI's inconsistencies point toward a broader issue. Generative AI is still a nascent technology and, clearly, not one that exclusively improves with time. Sometimes it gets worse, and for those with an AI-reliant workflow, that's a problem that undercuts the utility of generative AI tools altogether.

As some examples above show, it is absolutely possible to get fantastic results using Generative Remove and Generative Fill. But they're not a panacea, even if that is what photographers want and, more importantly, what Adobe is working toward. There is still a need to use other, non-generative AI tools inside Adobe's photo software, even though they aren't always as convenient or quick. It's not quite time to put away those manual erasers and clone stamp tools.
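As an aside, White's brushing advice lends itself to a simple illustration outside Adobe's tools. The sketch below is a minimal, hypothetical example in Python with OpenCV, not Adobe's implementation: it grows a rough brush mask so shadows and stray fragments are covered, then fills the hole with classical, non-generative inpainting, which pulls texture in from surrounding pixels rather than inventing new objects. The file names and the grow_px value are placeholders.

# Illustrative only: grow the mask so nothing is left for an algorithm to
# "attach" a new object to, then fill with non-generative inpainting.
import cv2
import numpy as np

def remove_with_padding(image_bgr: np.ndarray, brush_mask: np.ndarray,
                        grow_px: int = 15) -> np.ndarray:
    """Expand a rough brush mask and fill the region with classical inpainting.

    image_bgr:  8-bit BGR photo.
    brush_mask: 8-bit single-channel mask, 255 where the user painted.
    grow_px:    how far to grow the mask to swallow shadows and edges (assumed value).
    """
    # Dilate the mask so soft shadows and leftover fragments are covered too.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * grow_px + 1,) * 2)
    padded_mask = cv2.dilate(brush_mask, kernel)

    # Telea inpainting borrows texture from the border of the hole instead of
    # generating new content, closer in spirit to a clone stamp than to Firefly.
    return cv2.inpaint(image_bgr, padded_mask, inpaintRadius=7,
                       flags=cv2.INPAINT_TELEA)

if __name__ == "__main__":
    img = cv2.imread("photo.jpg")
    mask = cv2.imread("brush_mask.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("cleaned.jpg", remove_with_padding(img, mask))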
[2]
Apple vs Adobe: One is Clearly Better at AI Photo Clean Up
Apple Photos added a new Clean Up feature that removes unwanted objects from photos, a tool that competes directly with the one found inside Adobe products like ACR and Lightroom. So, which one is better?

This week, Apple pushed an update to macOS Sequoia (version 15.1) that adds the first batch of Apple Intelligence features, including the Clean Up tool in Photos, a generative AI feature that removes unwanted objects from images. Some of you might not have realized this tool was coming to desktop, as Apple typically showcased it in use on the iPhone. But for those of you who prefer editing images from a desk, Clean Up lives on macOS, too.

Outside of some flashy animations, Clean Up works pretty much identically to Adobe Generative Remove in practice, except that it will sometimes suggest objects it detects in a photo and thinks you might want to remove. Otherwise, it uses the same painting method that Adobe Lightroom and Photoshop users have come to know.

Since Adobe just updated Photoshop and its Firefly AI model, we figured now was a great time to see how these two widely available removal tools fare against each other. So, we tasked both with removing the same elements from six different photos to see which performed best. For each comparison below, the original image is linked in higher resolution, and higher-resolution versions of the two outputs from Apple and Adobe are also linked in the text above the comparison slider. We encourage you to look closely at them all and form your own opinions, but I also provide my insight on each result.

For this image, I asked both platforms to remove the power lines that bisect the middle of the frame. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

This was the first photo of the test batch, and I immediately started to question what is going on at Adobe. For it to produce this result is, frankly, unacceptable. This was probably the easiest of the photo editing tasks I provided, and it failed -- miserably. The remove tool added a ton of pixelated noise, telling me that the AI behind Generative Remove is not comfortable with, or even familiar with, texture.

In this photo, I asked both Apple Photos and Adobe Photoshop to remove the two people standing by the edge of the platform. Arguably, this is one of the more common applications of an AI-assisted remove tool. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

This, the second image I processed, is where I started to see a trend form. Whatever Adobe did to its latest Firefly model, it desperately does not want to remove objects. Instead, it seems to want to always fill the blank space with something. In this case, it managed to skillfully remove one person but replaced the other with a post.

I'm not going to sit here and say that Apple's job was perfect: it's not. The AI has some problems with that busy background, and the areas where it had to fill in ended up looking like visual clutter. But if I have to pick between visual clutter and a nonexistent, nonsensical post, I'm going with the former.

In this photo, I wanted to remove the people on the left side of the frame so that the focus would be more on the two girls in kimono. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

This is one of the more comical results from Adobe, as I can understand where the AI is coming from, but it is absolutely not correct.
It's also another example of the software leaning hard into the "I need to add something" point of view. On the flip side, Apple's Clean Up does a fairly good job of replicating a reflection on the wet street, which looks pretty similar to the reflection a post in the background is casting (if a bit too strong). The street, however, is warped and the pixels are smudged, so it's not a perfect fix. But if I were asked to pick one of these results, the answer is obvious: Apple's is better.

I was curious how both Clean Up and Generative Remove would handle a main subject, so I selected this photo of a boat on a river and asked both platforms to remove the boat. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

Adobe finally produced a good result after three successive abject failures. The main difference between the two outputs comes down to the replaced pixels. Adobe's result is a smudgy, blobby mess that is less noticeable on water than it would be on a more textured background, but it's not clean and sharp. Apple's result is much sharper, but it replicates pixels in an odd, unnatural way. Clean Up also specifically recognized the boat, which is why the area it fills with generated pixels is smaller than the one Adobe produced, even though I highlighted the same space on each platform. I'm calling this a tie.

I think this is one of the more challenging tasks: I asked Clean Up and Generative Remove to delete the person riding this escalator. The result from Adobe Generative Remove is on the left, while the result from Apple Clean Up is on the right.

Apple is once again the clear winner here, as it replicates the background accurately, sharply, and believably. There is still a dark shadow where the person used to be standing, but at least the steps on the escalator are clean, straight, and sharp. Adobe's result, on the other hand, is riddled with errors. There are floating chunks on both the steps and just below the railing, making it unusable as-is. It would take several more minutes of careful clone-stamping to get the Adobe generation to the level of completeness Apple achieved on its own.

To this point, I've given Adobe access to RAW files so that I can use Generative Remove in Photoshop's ACR. But what if no RAW file is available? The next best option is Generative Fill, and that's what I used in this final comparison. The result from Adobe Generative Fill is on the left, while the result from Apple Clean Up is on the right.

I want to loop back to when I said that Generative Remove is very uncomfortable with blank space. Generative Fill here absolutely would not give me any result that left that seat blank. The first result it suggested is this two-headed man, but there were two others:

Just... no. I don't want to replace the man with another man, and I most certainly don't want that man to have two heads, and the final example appears to depict a person being sucked into the marble. None of these results are good.

As a note, I did do what Adobe instructs you to do when you want Generative Fill to simply fill a selection with surrounding information so that objects are removed: I left the prompt blank. If I did want a person added, I would have said so in the prompt.

On the other side, Apple again does a good job. It's not perfect, but it at least understood the assignment. Apple Clean Up is the clear winner. We suspect there is something very wrong with Adobe Firefly right now.
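For readers curious what that prompt-free fill workflow looks like in code, here is a minimal sketch. Firefly is closed and processed server-side, so this stand-in uses the open-source Stable Diffusion inpainting pipeline from Hugging Face's diffusers library rather than Adobe's tools; the file names and model choice are assumptions for illustration only.

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Stand-in for a server-side generative fill: an open inpainting model.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("bench.jpg").convert("RGB").resize((512, 512))
mask = Image.open("person_mask.png").convert("L").resize((512, 512))  # white = area to replace

# A blank prompt asks the model to fill the masked area from the surrounding
# context. Like Firefly, it may still invent a new object instead of leaving
# the space empty, which is the failure mode described above.
result = pipe(prompt="", image=image, mask_image=mask).images[0]
result.save("bench_filled.jpg")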
We can't think of another time that Adobe had a feature available in its software for six months only for a newer version of it to be objectively worse. That is exactly what happened with this update.

After speaking with Adobe about some of these results, the company got back to me today saying that it updated its model last night after I reported my problems. I re-tested the above images and, unfortunately, the results were the same. I was really hoping the power lines image would get better, but Generative Remove still produces the same pixelated visual mess even this morning. Something did change, though: I re-ran the Generative Fill task and it gave me something different. It's not good, but it is different:

Again, the other options it provided were no less terrible:

What is happening here is something of a cautionary tale in the story of AI. Adobe does plan to charge Generative Credits to use these tools in the future, going so far as to set up a system where you can already give it money for more credits, even if the company says it's not actively tracking their usage yet. Even if this problem gets resolved, which I am sure it will, who is to say it won't happen again the next time Adobe updates its model? These results are not only bad, but they would force me to re-run generations multiple times in an attempt to get something usable, which would burn credits. That would feel like getting charged for unacceptable results. That's not a winning business strategy.

We tested the same photos in both Photoshop with ACR and Lightroom Classic on two different machines, and the results were different every time. Sometimes they were usable, while other times they ended up worse than the examples I show above. Of note, none of the times we asked Adobe to remove the power lines resulted in a clean image. That's a glaring problem. I opted to show the first result given to me by both Apple Clean Up and Generative Remove, but you may get different results on your end. That speaks to a consistency issue, too, and loops back around to Adobe's future Generative Credits system.

Adobe declined to say specifically what might have happened with this model, offering only: "We're looking into this and always appreciate feedback from the community as we continue to improve the quality of our tools." We plan to revisit this comparison after Adobe figures out what is causing Firefly to generate these, frankly, bizarre results.

On the other side of the coin, Apple's first attempt at an AI removal tool is, generally, a success. I wouldn't say it gets full marks, but it at least does what it promises -- it cleans up images by removing unwanted objects. The escalator photo and the power lines are perhaps its best wins, with results I would say look very real. Clean Up's results are also very predictable, giving the same result every time I processed the images. That's probably because Apple's system is very focused on removal, which is easier to code and provides a more consistent result compared to Adobe's, which runs on an AI model that also generates images out of thin air.

It's also worth noting that if you have a Mac that can run macOS Sequoia, Clean Up is free. Access to Adobe's Generative Remove and Generative Fill tools is only available to subscribers.
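To make the credit concern above concrete, here is a back-of-the-envelope sketch, illustrative only, using the success rates reported by the photographer quoted in the first story: if each generation costs one credit and succeeds with probability p, the number of attempts until a usable result follows a geometric distribution, so the expected cost per keeper is 1/p credits.

# Rough, illustrative math only: assumes one credit per generation and
# independent attempts, with success rates taken from user reports above.
def expected_credits_per_keeper(success_rate: float) -> float:
    # Geometric distribution: expected number of attempts is 1 / p.
    return 1.0 / success_rate

for label, p in [("pre-update (~90% success)", 0.90),
                 ("post-update (~7.5% success)", 0.075)]:
    print(f"{label}: about {expected_credits_per_keeper(p):.1f} credits per usable result")

# Prints roughly 1.1 credits per usable result before the update
# versus about 13.3 after it.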
Recent updates to Adobe's AI-powered tools have led to unexpected issues, while Apple's new Clean Up feature demonstrates promising results in photo editing.
Adobe's recently updated Firefly-powered generative AI tools, including Generative Remove and Generative Fill, have encountered significant issues following the Adobe MAX event. Users and professionals have reported a decline in performance, with some describing the results as "unusable" [1]. The problems range from mismatched textures to the unexpected addition of objects in images, severely impacting workflows that had previously benefited from these AI-assisted editing features.
Photographers and editors have expressed frustration with the recent changes. One professional photographer posting on Adobe's community forums reported a drop in success rate from 90-95% to 5-10% or less [1]. Adobe employee Terry White acknowledged the issues, suggesting that users make sure they completely brush over objects marked for removal, including shadows and reflections. However, he conceded that there is still room for improvement [1].
Adobe explains that, unlike traditional software updates, AI-powered tools may not consistently improve with each iteration. This one-step-back, two-steps-forward dynamic is new territory for photo editing applications [1], and the inconsistency highlights the challenges of integrating rapidly evolving AI technology into established software platforms.
In contrast to Adobe's struggles, Apple has introduced a new Clean Up feature in macOS Sequoia (version 15.1) that has shown promising results [2]. This tool, which uses generative AI to remove unwanted objects from images, competes directly with Adobe's offerings.
A series of tests comparing Adobe's Generative Remove and Apple's Clean Up revealed significant differences in performance: across six test photos, Apple's tool removed objects more cleanly and consistently, while Adobe's frequently replaced them with new, often nonsensical elements [2].
The contrasting performances of Adobe and Apple's AI-powered editing tools highlight the rapid advancements and challenges in the field of AI-assisted image editing. For Adobe, a long-standing leader in professional photo editing software, these setbacks could potentially impact user trust and market position. Conversely, Apple's strong entry into this space with its Clean Up feature demonstrates the company's growing capabilities in AI and image processing.
As the competition intensifies, both companies are likely to invest heavily in improving their AI technologies. This rivalry could ultimately benefit users, driving innovation and pushing the boundaries of what's possible in AI-assisted photo editing. However, it also underscores the need for caution when integrating AI tools into professional workflows, as their performance can be unpredictable during the development phase.