12 Sources
[1]
YouTube secretly tested AI video enhancement without notifying creators
Is it a conspiracy? For months, YouTubers have been quietly griping that something looked off in their recent video uploads. Following a deeper analysis by a popular music channel, Google has now confirmed that it has been testing a feature that uses AI to artificially enhance videos. The company claims this is part of its effort to "provide the best video quality," but it's odd that it began doing so without notifying creators or offering any way to opt out of the experiment. Google's test raised eyebrows almost immediately after it began rolling out in YouTube Shorts earlier this year. Users reported strange artifacts, edge distortion, and distracting smoothness that gives the appearance of AI alteration. If you've ever zoomed in close after taking a photo with your smartphone only to notice things look oversharpened or like an oil painting, that's the effect of Google's video processing test. According to Rene Ritchie, YouTube's head of editorial, this isn't quite like the AI features Google has been cramming into every other product. In a post on X (formerly Twitter), Ritchie said the feature is not based on generative AI but instead uses "traditional machine learning" to reduce blur and noise while sharpening the image. Though this is arguably a distinction without a difference -- it's still AI of a sort being used to modify videos. YouTuber Rhett Shull began investigating what was happening to his videos after discussing the issue with a fellow creator. He quickly became convinced that YouTube was applying AI video processing without notifying anyone -- he calls this "upscaling," though Google's Ritchie contends this is not technically upscaling tech.
[2]
Intrusive Thought of the Day: Is That YouTube Video Enhanced With AI?
Have you noticed YouTube videos have started to have a little hint of the Uncanny Valley in recent months? You are far from alone, as a growing chorus of folks stuck in YouTube's endless scroll of Shorts have started piecing together similar qualities across videos that give viewers the heebie jeebies. That's probably not the intended response that YouTube was going for, but according to a report from The Atlantic, the effects are intentional and part of an ongoing experiment by YouTube to "enhance" videos. Here's what to look for to spot an "enhanced" video, according to users: "punchy shadows," "sharp edges," and a "plastic" look. According to the BBC, YouTubers have also pointed out these strange effects, which lead to more defined wrinkles appearing in clothing, skin looking unnaturally smooth, and occasional warping around the edges of a person's face. Some creators expressed concerns that the unnatural look could lead to viewers thinking they used AI in their video. All of this is appearing because YouTube is tweaking people's videos after the content is uploaded, and has been doing so seemingly without any forewarning that changes would be made and without the permission of the creator. And while YouTubers like Rhett Shull have suggested the effects are the result of AI upscaling, an attempt to "improve" video quality using AI tools, YouTube has a different explanation. "We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)," Rene Ritchie, YouTube's head of editorial and creator liaison, said in a Twitter post. "YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features." It's certainly an interesting decision to explicitly identify these techniques as "traditional machine learning technology" rather than AI. A spokesperson for Google made the message even clearer in a statement to The Atlantic, stating, "These enhancements are not done with generative AI." It's not like YouTube has exactly been distancing itself from generative AI. The platform just launched a new suite of "generative effects" that it has encouraged creators to use. Other creators have shown that YouTube uses AI tools to generate "inspiration" and ideas for new videos for their channel. But perhaps it's the viscerally negative response that people have had when spotting these "enhanced" videos that has YouTube backing away from the AI-centric language. This experiment has apparently been going on for a couple months, if the eyes of viewers are to be trusted. The BBC tracked examples of complaints about the effects described by YouTube as "enhancements" dating back to June of this year. It's also led to some users taking a conspiratorial view of the experiment, suggesting the company is trying to desensitize audiences to AI-style effects and make them more palatable. On the positive side, that at least suggests people are generally rejecting slop. Ideally, YouTube won't keep dragging its creators down into the AI mud and will let their videos be. It's not like the platform is exactly short on content, after all.
[3]
YouTube has been running a secret AI experiment, and it's not sitting well with creators or their audiences
Although AI is still in its infancy, we're already seeing some impressive tools come out, which have also caused quite a bit of concern. While tools from Google are always welcome, especially when they can help us to learn in new ways, the brand has also introduced some lackluster and problematic products. However, its latest move has struck quite a nerve, altering content on its platform. The news comes from the BBC, which reports that YouTube has been using AI to alter the look of content on the platform. Now, this in itself would be quite surprising, but it goes even a step further with YouTube not even telling the original creators that the alterations were taking place, or even asking permission to get this done. YouTube might want to roll this change back. For a brand that's very protective of its creators on its platform, this seems like a blatant overreach. While it's unclear just how many videos have had this type of alteration performed, the subject of the article, Rick Beato, who is known for in-depth videos on everything about music, shares what he noticed about the content on his own channel. Beato explains that his hair looked 'strange' and that in other instances it looked like he 'was wearing makeup' in some of his Shorts. The BBC goes on to explain that YouTube has been using AI in order to edit videos on the platform to enhance their appearances. Now, if this was done perfectly, there would still be complaints, but at least the results would be pleasing to the eye. But the problem is that most of these changes make it look like the video was generated using AI. You probably know what we're talking about here, when AI can't quite deal with all the intricacies that are in a photo or video, and decides to create its own twisted image of what it thinks things should look like. If you want to see the full breakdown of this 'AI magic', we suggest checking out this video by Rhett Shull. Shull does a deep dive into what he's found, showing that while the changes could be very minor in some cases, it's still being done without permission from the creator. There's also been confirmation from YouTube via its Liaison about what's going on, and it appears that YouTube is indeed running some kind of 'experiment' that uses 'traditional machine learning technology' to make videos appear clearer. If you check out the responses to the Liaison post on X, you'll see that people simply aren't vibing with what's going on, calling for these types of alterations to stop. It's unclear how things will progress, but it might be a good idea for YouTube to make this type of change a choice or get rid of it altogether. Of course, by uploading to the platform, creators are bound by a set of rules that allows this type of thing to happen. We've seen plenty of missteps on this new and untravelled road. As mentioned before, AI is still in its infancy, and there's no telling how the world will look in just a couple of years because of how far and fast things are progressing. While there's no doubt that some of this can be used for good, there's always the chance that something bad will also come from it. Have you noticed anything strange with the YouTube Shorts you watch every day? If so, let us know in the comments.
[4]
Google secretly 'enhanced' YouTube Shorts videos with AI filters
If you've noticed an unnatural smoothness to YouTube Shorts, you're not alone. A representative said it's a test, but claims generative AI isn't involved. YouTube is being overrun with AI slop. And it probably doesn't help that YouTube itself, and owner Google, is where a lot of it is coming from. The latest questionable decision from the operator of the web's de facto home for video? Using AI-powered tools to "enhance" videos, without telling anyone -- including the creators who made said videos. YouTube viewers and video producers like Rhett Shull have noticed a certain sheen and smoothness to some videos that wasn't intentionally done by the original uploaders. This sort of filtering isn't new -- in fact, you've probably seen it over-applied to old movie clips uploaded to TikTok and YouTube shorts, giving them an unnaturally smooth motion and overly glossy look for things like human skin. But the subtle application of these filters is part of a test rolled out by YouTube itself, confirmed by Rene Ritchie, the platform's head of editorial. "We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)," Ritchie said, replying to a question on social media. He started the post with "No GenAI, no upscaling," perhaps in hopes of deflecting some of the backlash. Calling the tool "traditional machine learning" (what?) was probably meant to soften the blow as well. As Ars Technica notes, this is indeed simply a wider application of similar filter tools that have been available for a while. The misapplication of the term "AI" to machine learning -- and the intentional overselling of products while large language models become more prevalent in the public consciousness -- is one of my personal bugaboos with Google and other marketers of this new technology. But here's the other shoe dropping: Google has no one to blame but itself if users instantly recoil at the thought of applying "AI" to videos, even if it's little more than a new kind of filter. Users are increasingly wary of harder-to-spot generative AI slipping into text, images, music, and video, and Google/YouTube itself is one of the biggest vendors of this technology. Applying machine learning tech (again, possibly intentionally confused with "AI" tools) to videos suddenly becomes a point of contention for users who might not have had any problem with it a few years ago. Not to mention the problem of applying these visual filters to videos without even informing the creators of those videos. Smoothing motion and evening out textures, particularly skin and other fine details, is a touchy subject at the best of times. Doing it without explicitly telling people it's being done is a great way to lose the trust of the people making the content that YouTube relies upon.
[5]
YouTube is Secretly Editing Users' Videos Without Their Consent
YouTube has been quietly using AI to alter people's videos without telling them or asking for permission. According to a report by the BBC, YouTube has secretly used AI to edit people's videos in recent months. The changes are reportedly small and often hard to spot without a side-by-side comparison. Wrinkles in clothes appear sharper, skin can look smoother or more textured, and ears sometimes warp. Some creators say these edits give their work an artificial look they never intended. Several YouTubers noticed the differences and raised concerns online. In one video with over 600,000 views, content creator Rhett Shull showed that the same video looked different on YouTube compared with Instagram. He says the YouTube Shorts version appeared "smoothened" and had "an oil painting effect" on his face. "I did not consent to this," Shull says. "The most important thing I have as a YouTube creator is that you trust what I'm making, what I'm saying, and what I'm doing is truly me. Replacing or enhancing my work with some AI upscaling system not only erodes that trust with the audience, but it also erodes my trust in YouTube." Shull is not alone, according to the report. A Reddit post from June 27 entitled "YouTube Shorts are almost certainly being AI upscaled" shows screenshots of the same video at different resolutions to claim that details were being added or removed by AI. Other social media complaints by YouTubers revealed how their videos were being automatically edited with AI. After months of speculation, YouTube has now confirmed that it is altering a limited number of videos on its Shorts platform. "We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)," Rene Ritchie, YouTube's head of editorial and creator liaison, wrote in a post on X last week. "YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features." YouTube did not respond to the BBC's questions on whether creators will have the option to disable the edits. However, the news outlet pointed out that altering videos without informing users could undermine trust in what people see online. It comes months after reports that Google is using its expansive library of YouTube videos to train its AI models like Gemini and Veo 3, shocking many content creators.
[6]
YouTube admits it's been enhancing videos behind the scenes with machine learning
YouTube has admitted to digitally polishing creators' Shorts without their knowledge, following a growing wave of creator confusion that led to accusations of AI interference ruining videos. The company claimed to have been "experimenting" with subtle machine learning enhancements on select Shorts videos. The tweaks are supposed to improve the videos' clarity, but were made without the creator's consent. The issue first gained notice when musician and YouTuber Rick Beato noticed a clip of his interview with Pearl Jam guitarist Mike McCready on YouTube Shorts seemed odd, looking like it had been sent through a filter. He made a viral video about it, and many others started posting what seemed like similar changes made to their own videos. Though some whose videos were affected leveled accusations that YouTube applied AI to the videos, YouTube was firm about it being "only" machine learning. However, regardless of the tools used, the creators are more upset that their work was quietly altered in the first place. After weeks of mounting criticism, YouTube says it's building an opt-out, according to Creator Liaison Rene Ritchie in a post on X. Despite YouTube likening the changes to computational photography, which improves smartphone photos, the key difference is obvious when considering the order of events. Smartphone enhancements are applied before the user ever sees the image. In YouTube's case, the creators had already uploaded and approved their content, which was then changed behind the scenes after the fact, without notice. YouTube's reasoning is understandable, as Shorts are mobile-first, fast-scrolling, and often visually inconsistent. A little extra polish could help the scrolling experience feel more cohesive, with clearer videos and a better experience. But for creators who feel responsible for whatever is posted under their name, unacknowledged changes undermine that creative ownership. Especially in a time when AI fakery is making viewers more skeptical of what they see on their screens already. For instance, Netflix provoked a lot of outrage over "HD remasters" of classic sitcoms like A Different World. The AI involved made for some warped faces and uncanny backgrounds, not to mention the AI-generated posters for its content. YouTube's case is arguably more delicate. Unlike streaming platforms, where viewers have little control over the product, YouTube is a creator-driven ecosystem. If the platform starts altering what creators publish, even with good intentions, it risks damaging the trust that makes the whole system work. YouTube's promise of an opt-out is probably a necessary course correction, but one that came only after public pressure. If platforms want to keep the trust of their users and the creators who keep them alive, they need to be more transparent, regardless of whether it's AI or simply machine learning that appears to mimic AI in the results.
[7]
YouTube dismisses creators' concerns that it secretly used AI to edit some videos
Earlier this month, YouTuber Rick Beato called his fellow creator Rhett Shull with a question: Did one of his recent videos look a little off? The video, which Beato had uploaded to YouTube Shorts on Aug. 5, was a clip of his interview with Pearl Jam guitarist Mike McCready. The creator, who is also a music producer and multi-instrumentalist, has built a following of 5.1 million subscribers for his guitar-related content. He'd posted the same video to his Instagram page. Shull, a guitarist who also primarily makes videos about music, said he noticed something did stand out about the YouTube Shorts version: It looked as if it had been enhanced using generative artificial intelligence. Aspects of the background looked smudged, giving it an "oil painting" effect, he said. Other details, like Beato's hair, appeared especially sharp. "I've been making videos for a long time and I'm someone that spends a lot of time trying to get their videos to look a certain way, with the lighting and the color grade and stuff," Shull told NBC News in a phone interview. "And I know what the normal YouTube compression looks like ... But what was going on here is very, very different." Seeking answers from fellow creators, Shull posted a video titled "YouTube Is Using AI to Alter Content (and not telling us)." In the past 11 days, his video has been widely circulated across X, where some people have reshared clips of it and tagged YouTube to ask about the claim. One person even posted side-by-side screenshots of Beato's McCready videos to showcase the subtle changes. In a post to X last week, YouTube disputed allegations that it used generative AI or "upscaling" -- when artificial intelligence predicts a high resolution image from a lower resolution one using a deep learning model -- on creator videos. "We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing," the YouTube Liaison account, run by YouTube's head of editorial, Rene Ritchie, wrote on X in response to a question from a user who had seen the discourse around Shull's video. It was the first official comment from YouTube regarding the concerns, which were first raised by creators on Reddit in June. Several people had previously posted similar observations to Shull and Beato in r/Youtube. "I was wondering if it was just me," wrote one Reddit user. "omg, i had this today too, i freaking hate this, i don't want a platform altering my content," added another. Shull and others have said the issue isn't just that YouTube could be using AI to alter their content. Many creators have been experimenting with various AI tools for a while, including ones rolled out by the platform last year, to help them improve their videos. The main problem, according to some creators, is that they aren't being given the option to opt out. "It doesn't really matter if you're using 'traditional machine learning' or 'GenAI', you're still altering the videos without notice or consent from the content owners. In my opinion, I view this practice as both deceptive and malicious," wrote one Reddit user, who was first to post about the topic.
Viewers could also grow distrustful of creators' content, according to Shull, especially amid the rise of AI "fakery," which is when AI tools are used to generate or modify content without viewers' knowledge. "If someone sees a piece of content that I've made that looks like it's been altered with AI, the logical conclusion that that person, in my opinion, would jump to is that, 'oh well, Rhett's using AI to make videos or to alter videos,'" Shull said. "Or that I'm somehow using it as like a, a shortcut or a cheat code, or that it's not real, or that it's been deep faked. It raises a lot of questions." It's not the first time a video giant has come under fire for purportedly using AI to enhance content. In January, viewers mocked Amazon for using what appeared to be AI on a poster of the 1922 film "Nosferatu." The company didn't publicly comment on the backlash. In February, Netflix also sparked controversy with its "HD remasters" of "The Cosby Show" and "A Different World," after viewers said they noticed warped facial features on the actors and distorted backgrounds. Netflix did not issue a comment regarding whether it used AI to enhance the shows. When asked for further comment regarding the frustration from some creators, a YouTube spokesperson referred NBC News to the YouTube Liaison X post. In the second part of the X post, Ritchie wrote, "YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features." Later, in response to a different X user, Ritchie elaborated further. "GenAI typically refers to technologies like transformers and large language models, which are relatively new," he wrote. "Upscaling typically refers to taking one resolution (like SD/480p) and making it look good at a higher resolution (like HD/1080p)," the post continued. "This isn't using GenAI or doing any upscaling. It's using the kind of machine learning you experience with computational photography on smartphones, for example, and it's not changing the resolution." Still, as Shull's video picked up more traction, many people on X continued to express their concerns. "Awesome, now let me turn it off because it's actually making my Shorts look worse," wrote X user CaptainAsthro, who goes by the same username on YouTube where he posts about the video game Star Citizen, in response to Ritchie's X post. "The issue isn't what technology is being used," wrote Ari Cohn, a First Amendment and defamation lawyer who serves as lead counsel for tech policy at the Foundation for Individual Rights and Expression (FIRE). "It's that you're changing the content without the permission or even knowledge of its creator." "YouTube has confirmed it's testing AI for clarity in Shorts, but the lack of choice for creators is sparking conversations about trust and authenticity in digital content," AI strategist and former IP lawyer Wes Henderson said in a post on X. "It really makes you think about the evolving role of AI in shaping what we see online and the importance of creator awareness." Shull said he's planning to continue uploading videos like he normally does. But, after such a massive response to his video about YouTube's enhancements to some creator Shorts, he said he may start talking about AI more in his content. "Not in a way to like, try and take it down or be anti-AI, but more so just to sort of have the conversation and say, 'Hey, what are we doing here?
Is this a good thing? Is this a bad thing?'" he said. "Because it feels like we're moving really fast and not taking into account a lot of really important issues. Mainly, chief among them, is how this is going to impact people and their livelihoods across multitude of different industries, not just mine."
[8]
YouTube tests AI edits on Shorts without disclosure sparking creator backlash - SiliconANGLE
A new controversy has emerged around YouTube after creators discovered that the platform has been quietly altering some of their uploaded Shorts videos using artificial intelligence without disclosure, raising concerns about creative integrity and transparency. Reports first emerged earlier this month from a number of creators who claimed that their clips appeared sharper, smoother or unnaturally stylized compared to the originals. Some described the effect as "plastic" or "oil painting-like," with details that had been subtly changed by machine-driven processing. YouTube insists the updates rely on traditional machine learning tools for unblurring, denoising and clarity enhancements rather than generative AI; however, the results bear a striking resemblance to diffusion-style upscaling models that have become common in the broader industry. According to Rene Ritchie, YouTube's head of editorial, there was no generative AI or upscaling involved. "We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)," Ritchie said on X Inc. (formerly Twitter). "YouTube is always working on ways to provide the best video quality and experience possible and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features." Irrespective of the claimed intent, creators are not happy. In one case, a creator known as Mr. Bravo, who is known for using VHS-like grain in his productions, claims that the edits stripped away the aesthetic choices that define his work. Musicians Rhett Shull and Rick Beato have also reported similar issues with their videos. Among the critics of YouTube's move, Dave Wiskus, chief executive officer of independent streaming platform Nebula Inc., says the approach taken by YouTube is "disrespectful" and equates the edits to tampering with an artist's work without permission. Other creator communities on sites such as Reddit are generally in agreement and warn that AI-driven alterations could undermine authenticity and blur the line between original and machine-mediated media. YouTube's decision to edit existing videos, particularly without disclosure, raises valid questions around ownership, authenticity and disclosure.
[9]
YouTube is quietly using AI to change some videos without creator consent
TL;DR: YouTube has been secretly using AI to enhance select Shorts videos by unblurring, denoising, and improving clarity without creators' consent. This experiment aims to boost video quality but has raised trust concerns among creators who feel their original content is altered without permission. YouTube has secretly been using AI to edit people's videos over the last few months, with these changes being so small that users wouldn't notice unless the videos were placed side-by-side. The claims come from a new BBC report that cites several YouTubers who have noticed differences in their videos. One content creator, Rhett Shull, pointed to one video that looked different on YouTube Shorts compared to Instagram, with the creator saying the YouTube Shorts version looked "smoothened" with an "oil painting effect" over his face. Shull said he "did not consent to this," adding, "The most important thing I have as a YouTube creator is that you trust what I'm making, what I'm saying, and what I'm doing is truly me. Replacing or enhancing my work with some AI upscaling system not only erodes that trust with the audience, but it also erodes my trust in YouTube." Shull isn't the only creator who has noticed some of their content being changed on YouTube, as a Reddit post from June 27 has pointed out the same thing. Now, YouTube has confirmed that it is changing some YouTube Shorts videos, with the platform describing the alteration as "running an experiment". "We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video). YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features," wrote Rene Ritchie, YouTube's head of editorial and creator liaison, in an X post.
[10]
YouTubers Say AI Is Editing Their Shorts Without Consent
YouTube said it is improving video quality as part of an experiment. YouTube might be using artificial intelligence (AI) to alter human-created Shorts videos, several content creators have claimed. YouTubers have complained that the AI-generated effects on their videos were being added without their express consent, and said that the streaming giant did not inform them about adding these effects. YouTube has responded to the complaints and has acknowledged that certain Shorts were being edited using "traditional machine learning technology" as part of an ongoing experiment. However, the company maintained that no generative AI was used for this. YouTube's New Experiment Uses Machine Learning to Improve Shorts' Quality. In a video (first spotted by BBC), YouTube content creator Rhett Shull showed that his videos looked different when uploaded to YouTube compared to when uploaded on Instagram. Placing both videos side-by-side, he claimed that the one posted as Shorts appeared "smoothened," as if "an oil painting effect was added to his face." Shull is not the only one with this experience. A Reddit post from June 27, titled "YouTube Shorts are almost certainly being AI upscaled," mentioned experiencing the same thing. The poster also shared screenshots of a video across different resolutions to claim that AI was being used to add and remove specific details. In both cases, users noted that the faces were being smoothened, hair made to look sleeker, and wrinkles on shirts being erased. Both users called the practice of altering elements in their videos deceptive and malicious, due to the streaming platform not communicating these changes. Rene Ritchie, YouTube's head of editorial and creator liaison, explained in a post on X (formerly known as Twitter), "No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)." Despite the explanation, several users commenting on the post claimed that the company was being deceptive by using "machine learning" instead of AI. However, Ritchie responded, "GenAI typically refers to technologies like transformers and large language models, which are relatively new. Upscaling typically refers to taking one resolution (like SD/480p) and making it look good at a higher resolution (like HD/1080p). This isn't using GenAI or doing any upscaling." "I did not consent to this," Shull said in the YouTube video, adding, "The most important thing I have as a YouTube creator [...] is that you trust what I'm making, what I'm saying, and what I'm doing is truly me [...] Replacing or enhancing my work with some AI upscaling system not only erodes that trust with the audience, but it also erodes my trust in YouTube."
[11]
YouTube secretly used AI to edit Shorts content
While we might have once seen the internet as a place of total freedom, website owners in the past few years have been reminding us where the power really lies. From cookies to age verification and now AI smoothing of videos, we're in a different kind of internet now, as shown by YouTube's latest video "improving" endeavour. As caught by the BBC, YouTube has been editing people's videos using AI. It will sharpen and smoothen skin, as well as alter certain features like ears. YouTube claims this is a feature being tested on Shorts to improve clarity, but users aren't best pleased. The theory is that these "improvements" are being made to blur the line between AI-generated and user-generated content, so that soon it will be tough for people to tell the difference between the two. Right now, YouTube also isn't offering an option for people to opt out of these AI edits, which only further fuels the flames.
[12]
YouTube changed videos with AI without informing channel owners: Why this could be dangerous
Undisclosed AI changes on YouTube Shorts spark debate over authenticity and consent. In the fast-evolving world of digital content creation, trust is the currency that binds creators, platforms, and audiences. But a recent revelation has shaken that foundation: YouTube has been quietly using artificial intelligence to alter YouTube Shorts without informing or obtaining consent from the creators behind them. This clandestine practice, which involves machine learning techniques to enhance video quality, has sparked outrage among channel owners and raised alarm bells about transparency, authenticity, and the potential for broader misuse of AI in online media. The issue came to light when creators like Rhett Shull, a musician and YouTuber with nearly 750,000 subscribers, noticed something off about their YouTube Shorts. In a video that has since garnered over 700,000 views, Shull compared his original uploads to the versions appearing on YouTube, revealing subtle but unmistakable changes: smoothed skin, unnaturally sharpened clothing wrinkles, and even distorted features like warped ears. "I did not consent to this," Shull said, echoing a sentiment shared by many creators who felt their creative control had been undermined. Rick Beato, another prominent music YouTuber, echoed Shull's concerns, pointing out that the AI enhancements gave videos an artificial sheen that clashed with the raw, authentic aesthetic many creators strive for. These changes, often only noticeable through side-by-side comparisons with originals posted on platforms like Instagram, were applied without any notification to channel owners. YouTube's head of editorial and creator liaison, Rene Ritchie, has since acknowledged that the platform has been experimenting with "traditional machine learning" to enhance Shorts. The techniques, he explained, were akin to those used in smartphone cameras to unblur, denoise, and improve video clarity. Ritchie emphasized that this was not generative AI but rather a form of post-processing to make videos "pop." However, the explanation did little to quell the backlash, as YouTube offered no commitment to halt the practice or provide creators with an opt-out option. The lack of transparency is particularly galling for creators who rely on YouTube as their primary platform. "It's not just about the edits themselves," says Shull. "It's about the principle. If they can change our videos without telling us, what else are they doing behind the scenes?" The absence of clear communication or consent has left creators feeling like their work is no longer fully their own, raising questions about who truly controls the content on YouTube. The implications of YouTube's actions extend far beyond a few smoothed faces or sharpened textures. Experts warn that this move could set a dangerous precedent for how AI is used in digital media. Samuel Woolley, a professor at the University of Pittsburgh who studies digital propaganda, argues that undisclosed AI edits threaten the authenticity that audiences crave. "People turn to creators for real, unfiltered perspectives," he says. "When a platform alters that content without disclosure, it erodes trust -- not just in the creator, but in the entire ecosystem." This controversy comes on the heels of reports that Google, YouTube's parent company, has been using YouTube videos to train AI models like Gemini and Veo 3. While YouTube insists that its Shorts enhancements are unrelated to generative AI, the timing fuels skepticism.
Creators worry that their content is being used as a testing ground for AI experiments, with little regard for their rights or creative intent. The potential dangers are manifold. For one, undisclosed AI edits could distort a creator's brand or message. A musician aiming for a gritty, lo-fi aesthetic might find their work polished into something unrecognizable. A vlogger sharing a raw, emotional moment could have their authenticity undermined by an artificial glow. Beyond aesthetics, there's the risk of deeper manipulation. If YouTube can alter videos without consent, what's to stop the platform or others from making more significant changes, like altering audio, inserting product placements, or even modifying the substance of a video? The creator community has not taken this lightly. On platforms like X, creators have rallied under hashtags like #YouTubeAIEdits, sharing side-by-side comparisons of their videos and demanding greater transparency. Some have called for YouTube to implement an opt-out mechanism, while others advocate for a complete halt to AI enhancements unless explicitly approved. "This isn't just about Shorts," says Beato. "It's about the precedent it sets. If they can do this now, what's next?" The backlash has also sparked discussions about creator rights in the age of AI. Many argue that platforms like YouTube, which profit from user-generated content, have a responsibility to treat creators as partners, not pawns. "We're not just uploading videos for fun," says Shull. "This is our livelihood. We deserve to know what's being done with our work." YouTube's decision to alter videos without consent comes at a time when trust in digital platforms is already fragile. With misinformation, deepfakes, and AI-generated content on the rise, audiences are increasingly skeptical of what they see and hear online. By introducing undisclosed edits, YouTube risks further eroding that trust, alienating both creators and viewers. For now, YouTube has not indicated whether it will change its approach. The platform's silence on an opt-out option or broader policy changes leaves creators in limbo, forced to either accept the alterations or seek alternative platforms. Some, like Shull, are exploring options like Vimeo or Patreon, where they can exert greater control over their content. But for many, leaving YouTube, a platform with unmatched reach and monetization potential, isn't a viable option. Without transparency and consent, the line between enhancement and manipulation becomes dangerously thin. If platforms prioritize technological advancements over creator autonomy, they risk alienating the very people who make their platforms thrive. The controversy over YouTube's AI edits is more than a dispute over video quality. In an era where AI's capabilities are expanding rapidly, the need for ethical guidelines and creator empowerment has never been greater. Without them, the trust that holds the digital world together could crumble, one unapproved edit at a time.
YouTube has been secretly testing AI-powered video enhancement on its platform, particularly for Shorts, without notifying creators or seeking their consent. This has led to concerns about content integrity and trust among the YouTube community.
YouTube, owned by Google, has been conducting a secret experiment using artificial intelligence to enhance videos on its platform, particularly YouTube Shorts, without notifying creators or seeking their permission [1]. This revelation has sparked controversy and raised concerns about content integrity and trust within the YouTube community.
According to Rene Ritchie, YouTube's head of editorial, the platform is using what it terms "traditional machine learning technology" to unblur, denoise, and improve clarity in videos during processing [2]. YouTube claims this is similar to what modern smartphones do when recording video. However, the distinction between this technology and generative AI has been a point of contention. Users and creators have reported several noticeable changes in their videos: unnaturally smooth skin, sharper wrinkles in clothing, warping around ears and the edges of faces, and an overall "plastic" or "oil painting" look [3].
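To make the distinction concrete, the snippet below is a minimal sketch of the kind of classical, non-generative enhancement Ritchie describes: denoising a frame and then sharpening it with an unsharp mask, using OpenCV. It is an illustration only, and the file name and parameter values are assumptions, not YouTube's actual processing pipeline; over-applying exactly this kind of filtering is what tends to produce the oversharpened, "oil painting" look creators report.

```python
# Illustrative sketch only: classical denoise + sharpen, no generative model.
# The input file and parameter values are hypothetical.
import cv2
import numpy as np

def enhance_frame(frame: np.ndarray) -> np.ndarray:
    """Denoise a BGR video frame, then sharpen it with an unsharp mask."""
    # Non-local-means denoising: a traditional filter, not generative AI.
    denoised = cv2.fastNlMeansDenoisingColored(
        frame, None, h=5, hColor=5, templateWindowSize=7, searchWindowSize=21
    )
    # Unsharp mask: blend the frame against a blurred copy to boost edges ("clarity").
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

if __name__ == "__main__":
    cap = cv2.VideoCapture("input_short.mp4")  # hypothetical input clip
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("enhanced_frame.png", enhance_frame(frame))
    cap.release()
```

Note that nothing here predicts new pixels from a learned generative model or changes the resolution; it only filters what is already in the frame, which is the sense in which YouTube distinguishes this processing from generative AI or upscaling.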
These alterations led creators such as Rhett Shull and Rick Beato to investigate the changes in their content. Beato noted that his hair looked "strange" and that in some instances it appeared as if he was wearing makeup in his Shorts [4].
YouTube has confirmed that it is running an experiment on select YouTube Shorts, emphasizing that it uses "traditional machine learning technology" rather than generative AI [5]. However, this distinction has done little to quell the concerns of creators and viewers. The lack of transparency and consent in this process has been a major point of contention. Creators argue that altering their content without permission erodes trust between them and their audience. As Rhett Shull stated, "I did not consent to this. The most important thing I have as a YouTube creator is that you trust what I'm making, what I'm saying, and what I'm doing is truly me."
This incident raises important questions about content authenticity and the role of platforms in modifying user-generated content. As AI and machine learning technologies become more prevalent, the line between enhancement and alteration becomes increasingly blurred.
The controversy also highlights the growing wariness among users regarding the application of AI to digital content. With the rise of deepfakes and other AI-generated media, viewers are becoming more skeptical of the authenticity of the content they consume online.
As YouTube continues to "iterate and improve" these features, it remains to be seen how it will address the concerns raised by creators and viewers. The incident underscores the need for clear communication and consent protocols when implementing AI-driven changes to user-generated content on social media platforms.
Summarized by Navi