What People are Getting Wrong this Week: Identifying AI Videos

You think you can spot one every time, but can you really?

Jun 3, 2025 - 17:10

You've probably already been fooled by an AI video, whether you realize it or not. On May 20, Google released Veo 3, its latest AI video generation tool, showing it off with a video featuring an AI-generated old sailor, and the results were either impressive or horrifying, depending on your point of view.

Yeah. We're cooked. While this video does have a slightly surreal quality upon close inspection, it's good enough to fool most casual viewers. The barrier preventing the average person from being taken in by a computer-generated video has been shattered. Veo 3's videos are so good, you can't easily tell they aren't real, especially when you see them while casually scrolling a social media feed. People are already using Veo 3 for profit, politics, and propaganda. As Lifehacker's Jake Peterson put it, "You are not prepared for this terrifying new wave of AI-generated videos."

Veo 3 produces hyper-realistic videos with natural-looking lighting, physicality, sound effects, camera movement, and dialogue. Unlike traditional CGI, this new breed of AI doesn't require a Hollywood budget or a team of animators. You just need to craft a prompt a few sentences long and feed it to Veo 3. The output doesn't display many of the telltale distortions that used to mark content as obviously AI-made.

Take a look at this entirely Veo 3-made video to get a sense of how convincing it can be:

Making these videos is extremely easy too—you don't need to spend all day iterating prompts to get a good result. I went from "I don't know how to do this" to creating the video below in about half an hour, and it was even made using the "free trial" version of Google's AI tool:

Is there anything a devoted seeker of truth can do in the face of the coming onslaught of AI video slop? Maybe. A little. There are (a few) steps you can take to (maybe, sometimes) spot a fake video. At least until Veo 4 makes it that much harder, or a competitor releases an even better video generation model.

A few tips for spotting fake AI videos (that might work, sometimes)

Look for the watermarks (both visible and invisible)

According to Google, a SynthID watermark is embedded in all content created with Google's generative AI models. Unfortunately, you can't see it, and you can't easily detect it—at least not yet. The company says it's testing a verification portal to "quickly and efficiently identify AI-generated content made with Google AI." It's not live yet. (Maybe they could have finished working on that before releasing Veo 3?) Anyway, hopefully soon anyone will be able to upload a piece of content and learn if it was made with any of Google's AI tools.

Late last week, Google also rolled out a visible watermark on Veo 3 content, in addition to the invisible SynthID mark. Unfortunately (again), it won't apply to "videos generated by Ultra members in Flow, our tool for AI filmmakers," so anyone using the expensive, "professional" version of Veo 3 can still trick you.

Non-technical, common sense ways to spot an AI video

Here are a few more suggestions for spotting AI-generated video that don't require any tool more sophisticated than your own brain:

  • Slow down. Don’t immediately trust videos you see online, even if they’re from people or accounts you usually trust.

  • Cross-check. Has this clip been reported elsewhere? Who’s sharing it, and why?

  • Watch for tells. Even Veo 3 isn’t perfect yet. Look for odd physics, unnatural skin textures, strange mouth movements, or uncanny lighting.

  • Think critically. Always ask yourself, Who benefits from me believing this video is real? Consider whether it even makes sense. Think about who could've shot it, and why. Question whether the behavior of the video's subjects tracks with how you know actual humans to behave.

The limitations of AI detection

Those are some concrete steps you can take, but I don't actually think many people will take them when viewing videos on social media. Even in a perfect world, where we all had access to foolproof AI checkers that reliably identified bogus content, plenty of people would still believe AI gunk is real. Who would bother investigating every video that scrolled by on TikTok, and every photo that appeared on Facebook? That's a lot of work, and I don't think most people actually care whether what they're seeing is real or not, as long as they like it.

I write about people being fooled by AI creations in this column fairly regularly, and it doesn't seem to matter how convincing the fakes are. Even the sloppiest, obviously uncanny creations are good enough when the people viewing them want them to be real. And that's the hardest thing about AI detection: We're most vulnerable to fake content when it confirms our biases. Humans are the weak link in the chain.

While these videos are terrifying, there's nothing new about fake news

While AI programs like Veo 3 make it easier to create fake videos, it's not like creating effective disinformation was impossible before. CGI has been convincing people of unreal things for decades. Before that, you could just film a realistic version of whatever you'd like to see, minus digital effects. Before there were movies, people faked photographs, and before there were photographs, people lied in print. And people lie with their mouths all the time, sometimes while standing behind a podium carrying an official government seal. Deception, forgery, and fraud are as old as mankind. The only difference is that now we can do it a lot faster.

The best means of telling the fake from the real has always been by developing your personal bullshit detector, but that's also the most difficult method to rely on. Submitting to confirmation bias is base human nature, and while it's easy to say "be extra suspicious of things that seem true to you," it's not a skill that many of us (or maybe any of us) actually possess.

Maybe the people who are most wrong this week (and every week) are people like me, for assuming I'd be able to spot AI fakes when it really matters. Maybe it's easy to spot and call out AI slop on Facebook, but how can I ever know that what I'm sure is true is actually true? I can't. No one can. And that's a philosophical conundrum that no amount of watermarks or detection tools can fix.