4 Sources
[1]
Real TikTokers are pretending to be Veo 3 AI creations for fun, attention
Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes. However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars.

"This has to be real. There's no way it's AI."

I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion."

After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention.

Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to the (pretty good!) song.
The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade.

Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI" (that last part is true, at least). I could go on, but you get the idea.

I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke (or don't, based on some of the comments). The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works: You want AI-generated content? I can pretend to be that!

Are we just prompts?

Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia.
On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling ("Goolgle's [sic] New A.I. Veo 3 is at it again!! When will the prompts end?!" Cummings jokes in the caption).

Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that [prompter], I now have to pay taxes" to solipsistic philosophical musings from convenience store employees.

I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video."

Which one is real?

The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?"
In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?"

After watching both of these videos on loop a few times, I'm relatively (but not entirely) convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention. Additionally, I'm still not 100 percent confident in my assessments, which is a testament to just how good Google's new model is at creating convincing videos.

There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer (without an apparent change in camera angle) is almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues -- if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would suddenly be appearing in Veo creations.

There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns.

Regardless, TikTokers trying to pass off real videos as fakes -- even as a joke or engagement hack -- is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage.
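The telltale signs above can be sketched as simple rules of thumb. This is a toy illustration, not a real detector: the function names, date format, and overall framing are my own assumptions; only the eight-second clip limit and the "check the creator's older videos" heuristic come from the article.

```python
# Toy sketch of two heuristics from the article. Illustrative only.

VEO_MAX_CLIP_SECONDS = 8.0  # Veo 3's current per-clip length limit


def could_be_single_veo_clip(duration_seconds: float, has_camera_cut: bool) -> bool:
    """Return True if a clip's length leaves open the possibility that it
    is one uninterrupted Veo 3 generation.

    A video longer than 8 seconds with no visible camera cut is almost
    certainly not a single Veo clip; if there are cuts, several short
    generations could have been stitched together, so length proves nothing.
    """
    if duration_seconds <= VEO_MAX_CLIP_SECONDS:
        return True  # short enough to be one generation
    return has_camera_cut  # longer clips can only be Veo if stitched


def posted_normal_videos_before(recent_video_dates: list[str], first_ai_claim_date: str) -> bool:
    """Flag accounts whose 'normal' uploads (ISO 'YYYY-MM-DD' strings)
    predate their first claim of posting AI-generated content -- a hint
    that the 'AI' label may just be an engagement hook."""
    return any(d < first_ai_claim_date for d in recent_video_dates)
```

Neither check is conclusive on its own; as the article notes, the stylistic tells (overly bright lighting, too-smooth camera motion) still require a human eye.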
And the mere existence of convincing AI fakes makes it easier than ever to claim that real events captured on video didn't really happen, a problem that political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of using "A.I.'d" crowds in real photos of her Detroit airport rally.

For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from fiction are more troubling.
[2]
Gemini's Veo 3 AI Video Generator Is Just One Step Away From Decimating Truth on the Internet
I recently tested Google Gemini's newest, much-hyped video generation model, Veo 3. Part of Gemini's extremely expensive $250-per-month AI Ultra plan, Veo 3 can render small, finely detailed objects, like chopped onions, in motion and create accompanying, realistic audio. It isn't perfect, but with some careful prompt calibration and enough generations, you can create something indistinguishable, at a glance, from reality.

Yes, this is cool, deeply impressive new technology. But it's also a lot more than that. It might mean the final death knell for truth on the internet. Veo 3 already poses a major threat as is, but just one minor update will revolutionize deepfake creation, online harassment, and the spread of misinformation.

Once Veo 3 Gets the Image Upload Feature, It's All Over

For all the upgrades the Veo 3 model has over its predecessor, Veo 2, it's currently missing a key feature: the ability to generate videos based on pictures you upload. With Veo 2, I can upload a picture of myself, for example, and have it generate a video of me working on my computer. Considering that Veo 2 and Google's AI animation tool, Whisk, both support this functionality, it seems inevitable that Veo 3 will get it eventually. (We've asked Google if it plans to add this feature and will update this article with its response.)

This would mean that anybody will be able to generate lifelike videos of people they know doing and saying things they never have and probably never would. The implications are obvious in an era where clips of dubious authenticity spread like wildfire on social media every day. Don't like your boss? Send a clip to HR of them doing something inappropriate. Want to spread fake news? Post a faux press conference on Facebook. Hate your ex? Generate them doing something unseemly and send it to their entire family. The only real limits are your imagination and your morality.
If generating a video with audio of a real person takes only a few clicks and doesn't cost much (or anything), how many people will abuse that feature? Even if it's just a tiny minority of users, that still adds up to lots of potential for chaos.

Google Isn't Serious About Moderation

As you might expect, Google imposes some limitations on what you can and can't do with Gemini. However, the company isn't nearly strict enough to stop the worst from happening. Of all the chatbots I've tested from major tech companies, Google's offering, Gemini, has the weakest restrictions. Gemini isn't supposed to engage in hate speech, but it will give you examples if you ask. It isn't supposed to generate sexualized content, but it will provide an image of somebody in beach attire or lingerie if you prompt it. It isn't supposed to enable illegal activity, but it will create a list of the top torrenting sites if you so inquire.

Basic restrictions that prevent Gemini from generating a video of a popular political figure just aren't enough when it's so easy to get around Google's policies. What happens when Google's already lax restrictions meet an internet community intent on breaking them? Take ChatGPTJailbreak, for example, which is in the top 2% of subreddits by size. This community dedicates itself to "unlocking an AI in conversation to get it to behave in ways it normally wouldn't due to its built-in guardrails." What will like-minded people do with Veo 3?

I don't care if someone wants to amuse themselves by getting a chatbot to generate adult content or rely on one for finding torrenting sites. But I am worried about what easy-to-generate, photorealistic videos (complete with audio) mean for harassment, misinformation, and public discourse.

How to Deal With Veo 3's New Normal

For every SynthID AI content watermark system Google introduces, third-party watermark removal sites and online removal guides appear.
For every chatbot with restrictions and safeguards, there's a FreedomGPT without them. Even if Google locks Gemini down with so many filters that you can't even generate a cute cat video, there's very little in place to stop jailbreakers and uncensored imitators once Veo 3-style video generation becomes mainstream.

For decades, sketchy Photoshopped images depicting real people doing things they never did have made the rounds on the internet -- they're just part of life in the digital age. Accordingly, you must fact-check anything you see online that seems too awful or too good to be true. This is the new normal with Veo 3 video generation: You can't treat any video clip you see as the real thing unless it comes from a reputable news organization or another third party you know you can trust.

And Gemini's Veo 3 is just the first skip of a stone across the pond of widely accessible, truly lifelike AI video generation. AI video generation models are only going to get more realistic, offer more features, and proliferate further. Gone are the days when video evidence of something is the smoking gun. If truth isn't dead, it's different now, and it requires careful verification.
[3]
Google's Veo 3 AI video generator is unlike anything you've ever seen. The world isn't ready.
Screenshot from AI-generated video. Credit: The Dor Brothers / YouTube

At the Google I/O 2025 event on May 20, Google announced the release of Veo 3, a new AI video generation model that makes 8-second videos. Within hours of its release, AI artists and filmmakers were showing off shockingly realistic videos. You may have even seen some of these videos in your social media feeds and not realized they were artificially generated.

To be blunt: We've never seen anything like Veo 3 before. It's impressive. It's scary. And it's only going to get better.

Misinformation experts have been warning for years that we will eventually reach a point where it's impossible for the average person to tell the difference between an AI video and the real thing. With Veo 3, we have officially stepped out of the uncanny valley and into a new era, one where AI videos are a fact of life.

While several other AI video makers exist, most notably Sora from OpenAI, the clips made by Veo 3 instantly stand out in your timeline. Veo 3 brought with it several innovations that separate it from other video generation tools. Crucially, in addition to video, Veo 3 also produces audio and dialogue. It doesn't just offer photorealism, but fully realized soundscapes and conversations to go along with videos. It can also maintain consistent characters in different video clips, and users can fine-tune camera angles, framing, and movements in entirely new ways. On social media, many users are dumbfounded by the results.

Veo 3 is available to use now with Google's paid AI plans. Users can access the tool in Gemini, Google's AI chatbot, and in Flow, an "AI filmmaking tool built for creatives, by creatives," per Google. Already, AI filmmakers are using Veo 3 to create short films, and it's only a matter of time until we see a full-length film powered by Veo 3.
On X, YouTube, Instagram, and Reddit, users are sharing some of the most impressive Veo 3 videos. If you're not on your guard and simply casually scrolling your feed, you might not think twice about whether the videos are real or not.

The short film "Influenders" is one of the most widely shared short films made with Veo 3. "Influenders" was created by Yonatan Dor, the founder of the AI visual studio The Dor Brothers. In the movie, a series of influencers react as an unexplained cataclysm occurs in the background. The video has hundreds of thousands of views across various platforms.

"Yes, we used Google Veo 3 exclusively for this video, but to make a piece like this really come to life we needed to do further sound design, clever editing and some upscaling at the end," Dor said in an email to Mashable. "The full piece took around 2 days to complete."

Dor added, "Veo 3 is a massive step forward, it's easily the most advanced tool available publicly right now. We're especially impressed by its dialogue and prompt adherence capabilities."

Similar man-on-the-street videos have also gone viral, with artists like Alex Patrascu and Impekable showing off Veo 3's capabilities. And earlier this week, a Wall Street Journal reporter made an entire short film starring a virtual version of herself using Veo 3. All this in just 10 days.

In "Influenders" and these other videos, some of the clips and characters are more realistic than others. Many still have the glossy aesthetic and jerky character movements that are a signature of AI videos, a clear giveaway similar to the ChatGPT em dash. Just a couple of years ago, AI creations with too many fingers and other obvious anatomical abnormalities were commonplace. If the technology keeps progressing at this pace, there will soon be no obvious difference between real video and AI video.

In promoting Veo 3, Google is eager to stress its partnerships with artists and filmmakers like Darren Aronofsky.
And it's clear that Veo 3 could drastically reduce the cost of creating animation and special effects. But for content farms and bad actors producing fake news and manipulative outrage bait, Veo 3 is equally powerful.

We asked Google about the potential for Veo 3 to be used for misinformation, and the company said that safeguards such as digital watermarks are built into Veo 3 video clips. "It's important that people can access provenance tools for videos and other content they see online," a representative with Google DeepMind told Mashable via email. "The SynthID watermark is embedded in all content generated by Google's AI tools, and our SynthID detector rolled out to early testers last week. We plan to expand access more broadly soon, and as an additional step to help people, we're adding a visible watermark to Veo videos."

Google also has AI safety guidelines, and the company says it wants to "help people and organizations responsibly create and identify AI-generated content." But does the average person stop to ask whether the images and videos on their timelines and FYP are real? As the viral emotional support kangaroo proves, they do not.

There's zero doubt that AI videos are about to become even more commonplace on social media and video apps. That will include plenty of AI slop, but also videos with more nefarious purposes. Despite safeguards built into AI video generation tools, skilled AI artists can create deepfake videos featuring celebrities and public figures. TV news anchors speaking into the camera have also been a recurring theme in Veo 3 videos so far, which has worrying implications for the online information ecosystem.

If you're not already asking "Is this real?" when you come across a video clip online, now is the time to start. Or, as a chorus of voices on X puts it: "We're so cooked."
[4]
What People are Getting Wrong this Week: Identifying AI Videos
The internet is full of misinformation, conspiracies, and lies. Each week, we tackle the misunderstandings that are going viral.

You've probably already been fooled by an AI video, whether you realize it or not. On May 20, Google released Veo 3, its latest AI video generation tool, showing it off with a video featuring an AI-generated seaman, and the results were either impressive or horrifying, depending on your point of view. Yeah. We're cooked.

While this video does have a slightly surreal quality upon close inspection, it's good enough to fool most casual viewers. The barrier preventing the average person from being taken in by a computer-generated video has been shattered. Veo 3's videos are so good, you can't easily tell they aren't real, especially when you see them while casually scrolling a social media feed. People are already using Veo 3 for profit, politics, and propaganda. As Lifehacker's Jake Peterson put it, "You are not prepared for this terrifying new wave of AI-generated videos."

Veo 3 produces hyper-realistic videos with natural-looking lighting, physicality, sound effects, camera movement, and dialogue. Unlike traditional CGI, this new breed of AI doesn't require a Hollywood budget or a team of animators -- you just need to craft a prompt a few sentences long and feed it to Veo 3. The output doesn't display many of the telltale distortions that used to mark content as obviously AI-generated.

Making these videos is extremely easy, too -- you don't need to spend all day iterating on prompts to get a good result.
I went from "I don't know how to do this" to creating the video below in about half an hour, and it was even made using the "free trial" version of Google's AI tool.

Is there anything a devoted seeker of truth can do in the face of the coming onslaught of AI video slop? Maybe. A little. There are (a few) steps you can take to (maybe, sometimes) spot a fake video. At least until Veo 4 makes it that much harder, or a competing AI video generation service releases a new model that's even better.

According to Google, a SynthID watermark is embedded in all content created with Google's generative AI models. Unfortunately, you can't see it, and you can't easily detect it -- at least not yet. The company says it's testing a verification portal to "quickly and efficiently identify AI-generated content made with Google AI." It's not live yet. (Maybe they could have finished working on that before releasing Veo 3?) Anyway, hopefully soon anyone will be able to upload a piece of content and learn whether it was made with any of Google's AI tools.

Late last week, Google also rolled out a visible watermark on Veo 3 content, in addition to the invisible SynthID mark. Unfortunately (again), it won't apply to "videos generated by Ultra members in Flow, our tool for AI filmmakers," so anyone using the expensive, "professional" version of Veo 3 can still trick you.

There are also a few ways to spot AI-generated video that don't require any tools more sophisticated than your own brain, but I don't think many people will actually use them when viewing videos on social media. Even in a perfect world, where we all had access to foolproof AI checkers that reliably identified bogus content, plenty of people would still believe AI gunk is real. Who would bother investigating every video that scrolled by on TikTok, and every photo that appeared on Facebook?
That's a lot of work, and I don't think most people actually care whether what they're seeing is real or not, as long as they like it. I write about people being fooled by AI creations in this column fairly regularly, and it doesn't seem to matter how convincing the fakes are. Even the sloppiest, obviously uncanny creations are good enough when the people viewing them want them to be real. And that's the hardest thing about AI detection: We're most vulnerable to fake content when it confirms our biases. Humans are the weak link in the chain.

While AI programs like Veo 3 make it easier to create fake videos, it's not as if creating effective disinformation was impossible before. CGI has been convincing people of unreal things for decades. Before that, you could just film a realistic version of whatever you'd like to see, minus the digital effects. Before there were movies, people faked photographs, and before there were photographs, people lied in print. And people lie with their mouths all the time, sometimes while standing behind a podium carrying an official government seal. Deception, forgery, and fraud are as old as mankind. The only difference is that now we can do it a lot faster.

The best means of telling the fake from the real has always been developing your personal bullshit detector, but that's also the most difficult method to rely on. Submitting to confirmation bias is base human nature, and while it's easy to say "be extra suspicious of things that seem true to you," it's not a skill that many of us (or maybe any of us) actually possess.

Maybe the people who are most wrong this week (and every week) are people like me, for assuming I'd be able to spot AI fakes when it really matters. Maybe it's easy to spot and call out AI slop on Facebook, but how can I ever know that what I'm sure is true is actually true? I can't. No one can. And that's a philosophical conundrum that no amount of watermarks or detection tools can fix.
Google's release of Veo 3, an advanced AI video generation tool, has sparked both excitement and concern as it produces highly realistic videos that are increasingly difficult to distinguish from genuine footage.
Google has released Veo 3, its latest AI video generation model, marking a significant leap in artificial intelligence technology. This tool, part of Google's Gemini AI suite, can create highly realistic 8-second video clips complete with sound and lip-synced dialogue [1][2]. The release has sparked both excitement and concern among tech enthusiasts, content creators, and misinformation experts.
Veo 3 stands out from previous AI video generators due to its ability to produce remarkably lifelike videos. It can render small, finely detailed objects in motion and create accompanying, realistic audio [2]. The tool's capabilities extend to maintaining consistent characters across different video clips and allowing users to fine-tune camera angles, framing, and movements [3].
Since its release, Veo 3 has been embraced by AI artists and filmmakers who are showcasing its potential through short films and creative projects. Notable examples include "Influenders," a short film created by Yonatan Dor, which depicts influencers reacting to an unexplained cataclysm [3]. The film's realistic portrayal has garnered hundreds of thousands of views across various platforms.
Source: Mashable
The realism of Veo 3's output has led to an interesting phenomenon on social media platforms like TikTok. Some content creators are now pretending to be AI-generated avatars, reversing the typical dynamic of AI imitating reality [1]. This trend highlights the increasing difficulty in distinguishing between genuine and AI-generated content.
While Veo 3's capabilities are impressive, they also raise significant concerns about the potential for misuse. Experts warn that such technology could be exploited to create convincing deepfakes, spread misinformation, or engage in online harassment [2]. The ease of generating realistic videos with minimal effort amplifies these concerns.
Source: PC Magazine
In response to potential misuse, Google has implemented several safeguards. These include embedding digital watermarks in all content generated by their AI tools and developing a SynthID detector to help identify AI-generated content [3]. However, questions remain about the effectiveness of these measures, especially given the rapid pace of AI advancement.
The advent of Veo 3 and similar technologies is forcing a reevaluation of how we consume and trust visual media. Experts advise increased skepticism when viewing online content, emphasizing the need to question the authenticity of videos, especially those featuring public figures or news events [3].
As AI video generation technology continues to evolve, it's clear that society will need to adapt. This may involve developing more sophisticated detection tools, enhancing media literacy education, and potentially rethinking how we verify and trust visual information in the digital age [2][3].
Source: Lifehacker
The release of Veo 3 marks a significant milestone in AI-generated content, showcasing both the impressive capabilities of modern AI and the complex challenges it presents to our understanding of digital media and truth in the online world.