Don’t Get Tricked: Spotting AI Images and Deepfake Videos Like a Pro

Jessica Long | 8 min read | Published Jul 18, 2025

It’s 2025, and artificial intelligence can now generate a fake video of your grandma dancing in outer space while sipping sweet tea. Charming? Sure. Creepy and misleading? Well, a lot of them can be. Whether it’s fake photos of celebrities, cloned voices, or politicians saying things they never said, the age of AI deception has arrived. And unfortunately, it’s not always harmless entertainment.

Here in Georgia, where people know the value of a good front porch chat and the difference between real and imitation grits, we now also need to learn the difference between real and AI-generated content. If your eyeballs haven’t been fooled yet, trust me, they will be. That’s why learning to spot these AI-generated images and videos isn’t just helpful, it’s vital.

Spotting AI-Generated Images

AI-generated images and videos are increasingly being used in scams, misinformation campaigns, political propaganda, and even fake dating profiles. (Yes, that suspiciously attractive person who just “liked” your post might be a computer-generated heartbreaker.) In the wrong hands, these realistic fakes can be weaponized to manipulate public opinion, scam innocent people out of money, or stir up confusion during election seasons.

So how can you spot these digital imposters before they lead you astray?

First, train your eyes to be skeptics. AI-generated images often look flawless at first glance, but there are some telltale signs. Start with the hands and eyes, two body parts AI still hasn’t quite mastered. AI hands might have too many fingers, fingers bent in unnatural ways, or even float off into the void. As for eyes, they can be oddly glossy, misaligned, or even looking in completely different directions… kinda like they just saw Atlanta traffic and gave up on getting home at a decent time.

Also, pay close attention to lighting and shadows. AI often fumbles with consistent lighting, producing images where the sun seems to come from three different directions. (Apparently, some AI programs think Georgia has three suns. Who knew?)

Another clue is the background. AI-generated images often include warped text, melted buildings, or backgrounds that look like a Salvador Dalí painting after too much caffeine. A stop sign might say “Spop” or a storefront might advertise something like “Plooth Smoothie.” Not exactly subtle.

Here is an AI-generated image that made the rounds on Facebook in the aftermath of the 2025 Central Texas floods. It fooled thousands of users into believing that Dolly Parton and Reba McEntire took a boat cruise through the flood waters:

Image source: Facebook

I’ll admit it. At first and even second glance, it looks real. The hands aren’t deformed, and even the eyes are relatively natural. However, there are still a number of giveaways that prove this isn’t a real photo:

  1. While Reba and Dolly are in sharp focus, the background trees and lighting are uniformly blurred, lacking the depth or lens characteristics you'd expect in a real photograph. 
  2. The lighting on their faces and bodies seems almost too perfect. AI often applies uniform lighting and texture, which can make skin appear plastic or overly airbrushed, lacking pores or natural variation. 
  3. Both images show the country music stars with bread and milk inside the boat with them. Seems appropriate, until you realize how ridiculous it actually is. There is no way they would be casually handing out cups of milk from a jug that’s been sitting out in the Texas heat all day. 
  4. Check out the trees in the background. The top photo looks like it was taken in the winter, after all the leaves have fallen, while the trees in the bottom picture are lush and green. 

 Now look at the image again. Does it still look realistic?

Deepfakes: When AI Fakes Go Full Hollywood

Images are one thing, but AI can now create videos where people say and do things they never actually said or did. These are called deepfakes, and they’re not just fodder for meme pages anymore.

Deepfakes have been used to impersonate world leaders, create fake news reports, and facilitate scams through robocalls or fake video messages. Imagine getting a video message from your “daughter” asking for urgent money. Except that it’s not your daughter, it’s a digitally manipulated version of her, powered by AI and a few stolen social media clips.

AI artist Chris Ume is behind several viral deepfakes in which he transformed actor Miles Fisher into Tom Cruise | Source: NBC News / Chris Ume (@deeptomcruise)

So, how can you spot one?

Watch the mouth closely. If the lips don’t quite match the words, or the movements look rubbery or too smooth, you’re probably looking at a deepfake. Eyes are often a giveaway in deepfake videos as well, since AI still hasn’t quite nailed the art of casual human blinking. Look for eyes that blink too slowly, too frequently, or not at all.

What’s more, facial expressions might seem slightly “off,” with odd pauses or transitions. Also pay attention to background details, which may warp or ripple as the person moves around or turns their head. 

The Good News: Free Tools Are Available To Help

Sites like Illuminarty and Sightengine offer free online AI-image detection. These platforms let users upload an image and get an analysis of whether it may have been created by a machine. They’re not foolproof, but they’re a good starting point.

You can also try AI or Not, a tool that analyzes both images and text to estimate whether they were generated by artificial intelligence. While no tool is 100% accurate, using them in combination with your own common sense can go a long way.
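For the technically inclined, some of these detectors also expose web APIs you can script against. The sketch below shows the general shape of such a call, modeled loosely on Sightengine’s image-check endpoint; the URL, the `models=genai` parameter, and the response field names are assumptions drawn from its public documentation, so verify them (and your API credentials) against the provider’s current docs before relying on this.

```python
# A minimal sketch of querying an AI-image detection API and turning the
# score into plain English.
# ASSUMPTIONS: the endpoint URL, the "models=genai" parameter, and the
# response field "type.ai_generated" follow Sightengine's published docs;
# double-check them before use, as APIs change.
import json
from urllib.parse import urlencode
from urllib.request import urlopen


def classify_score(ai_probability: float) -> str:
    """Translate a 0-1 'AI-generated' probability into a plain verdict."""
    if ai_probability >= 0.8:
        return "likely AI-generated"
    if ai_probability <= 0.2:
        return "likely authentic"
    return "uncertain - verify with a second tool"


def check_image_url(image_url: str, api_user: str, api_secret: str) -> str:
    """Ask the detection service about an image hosted at a public URL."""
    params = urlencode({
        "url": image_url,
        "models": "genai",      # model that flags AI-generated content
        "api_user": api_user,   # your account credentials
        "api_secret": api_secret,
    })
    with urlopen(f"https://api.sightengine.com/1.0/check.json?{params}") as resp:
        data = json.load(resp)
    return classify_score(data["type"]["ai_generated"])


# No single tool is definitive; treat each verdict as one signal among several.
print(classify_score(0.95))  # likely AI-generated
```

A borderline score isn’t a verdict; it’s a cue to go back and check the image against the visual giveaways described above.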

Why It Matters for Georgians (And Everybody Else Too)

Why is this skill so important? Because AI fakes are already affecting real lives. From false reports on social media to scams using AI-generated voices pretending to be family members, these technologies are being used in ways that exploit trust. The more we learn to spot them, the less power they have over us. 

Image source: iStock

It’s not about being paranoid, it’s about being prepared. Most of us are smart (enough), skeptical, and invested in our communities. By keeping an eye out for digital trickery and teaching others to do the same, we can protect ourselves and our neighbors from falling for the modern-day equivalent of snake oil.

If you suspect that a fake video or image is being used to spread false information, especially in a way that could harm people or mislead Georgia voters, you can report it to the Georgia Attorney General’s Consumer Protection Division. If it involves fake social media profiles or impersonation, you can also notify the platform directly and file a complaint through IC3.gov for internet-related crimes.

Bottom Line: Just Use Your Eyes and Your Brain 

Next time you see a stunning photo, perfect profile picture, or shocking video clip online, give it the side-eye. AI fakes are getting good, but they’re not perfect. With a little know-how and a few free tools, you can stay one step ahead.

We humans like to think of ourselves as having mostly good judgment and a healthy dose of skepticism. So do yourself a favor and expand on those qualities to navigate our AI-shaped future. Don’t let a deepfake fool you into clicking, liking, or giving away your money. 

Think of it as digital front porch wisdom: pause, squint, verify.  If something smells funny, it probably ain’t peach pie.

AI was used to assist our editors in the research of this article.
#consumer protection
#consumer advice
#ai image detection tools
#deepfake warning signs
#scam prevention
#ai scam prevention
#ai fake video guide
#how to spot ai content
#digital literacy
#ai literacy
#how to spot a deepfake