
How to Fact-Check a Viral Post Before You Hit Share
"Misinformation is worse than an epidemic"

You know the drill. You’re scrolling, and there it is: a viral post that seems just a little too crazy to ignore. It pushes all the buttons. It’s emotional, dramatic, and has probably already been shared by three people in your group chat.
But before you pass it along, it’s worth asking: Is this even true?
In 2025, with platforms tweaking their content policies and AI-generated content popping up left and right, double-checking what you share online isn’t just a nice habit. It’s necessary. Here’s how to fact-check a viral post without turning into a buzzkill or a full-time detective.
Why Fact-Checking Matters More Than Ever
Let’s get one thing straight. Misinformation isn’t just annoying. It can be flat-out harmful. As Marcia McNutt, president of the US National Academy of Sciences, put it:
"Misinformation is worse than an epidemic. It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence."
And in case you missed it, Meta (the parent company of Facebook, Instagram, and Threads) scrapped its third-party fact-checking program at the start of 2025. It’s now relying on community notes to police content. So instead of trained professionals doing the filtering, it’s mostly up to regular users to call out the bad stuff.
Also worth knowing: one study found that just 15% of people who share news online are responsible for spreading up to 40% of fake news. That means a tiny group can cause a huge mess. On the flip side, one smart move from you can stop a misinformation avalanche before it starts.
"Misinformation is worse than an epidemic."

How False Info Spreads Like Digital Wildfire
Turns out, misinformation behaves kind of like a virus. Experts are literally borrowing models from epidemiology to track it.
Dr. Sander van der Linden explains it like this:
“Misinformation spreads from a ‘patient zero’—someone who shares false content—to others in their network. Those ‘infected’ users then spread it further, creating a cascade effect.”
That’s why something as simple as pausing before you share can actually break the chain. You become a kind of firewall.
Your Step-by-Step Guide to Spotting the Fakes
Step 1: Start With a Gut Check
Before you dive into deep analysis, do a quick vibe check. Ask yourself:
- Is the language over-the-top or designed to shock?
- Does it skip the basic who, what, where, when, why?
- Is it begging you to “share now before it gets deleted”?
Dan Evon from Snopes has a solid rule of thumb:
"If the video uses slurs or demeaning language there's a good chance that the accompanying text is only telling a partial (or completely fictional) version of the backstory."
Basically, if it smells fishy, don’t ignore that instinct.
Step 2: Check the Source
Look at where the content is coming from. Some quick things to scan for:
- Is the account verified?
- What’s their posting history like?
- Was the account just created yesterday?
If it’s a website, check the About page. Do they even list real people?
The German government recommends comparing suspicious claims with at least two reliable sources. Think official portals or well-known media outlets.
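If the source is a website, its age is another quick tell. Here’s a rough sketch of how you could check a domain’s registration date yourself. It assumes the third-party python-whois package, which is just one convenient option, not something the article itself recommends.

```python
# A rough sketch: check how recently a site's domain was registered.
# Assumes the third-party "python-whois" package (pip install python-whois).
from datetime import datetime, timezone

import whois  # one possible WHOIS client; any equivalent library works


def domain_age_in_days(domain: str) -> int:
    """Return the approximate age of a domain in days."""
    record = whois.whois(domain)
    created = record.creation_date
    # Some registrars return a list of dates; take the earliest one.
    if isinstance(created, list):
        created = min(created)
    # Treat naive datetimes as UTC so the subtraction below works.
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days


if __name__ == "__main__":
    age = domain_age_in_days("example.com")
    print(f"Domain registered roughly {age} days ago")
    if age < 90:
        print("Brand-new domain: treat its claims with extra caution.")
```

A site that popped into existence last week and is already “breaking” huge news is worth a second look.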
Step 3: Cross-Check With Trusted Outlets
If the post is making a big claim, it shouldn’t be that hard to find confirmation elsewhere.
- Are reputable news orgs reporting it?
- Have official agencies said anything?
- What do independent fact-checkers like Reuters Fact Check or FactCheck.org say?
And yes, fact-checking can get political. But research shows that these organizations generally agree with each other and with assessments from bipartisan crowds. That suggests there’s usually a clear line between fact and fiction.
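If you’d rather automate part of the cross-check, Google’s Fact Check Tools API lets you search published fact checks for a claim. The snippet below is a minimal sketch, not a definitive recipe: it assumes you’ve created your own API key, and the example query is made up.

```python
# A minimal sketch of querying Google's Fact Check Tools API for a claim.
# Assumes the "requests" package and an API key from Google Cloud Console.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your own key
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"


def search_fact_checks(query: str, language: str = "en") -> list[dict]:
    """Return published fact checks that mention the given claim text."""
    params = {"query": query, "languageCode": language, "key": API_KEY}
    response = requests.get(ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    return response.json().get("claims", [])


if __name__ == "__main__":
    # Example query is purely illustrative.
    for claim in search_fact_checks("5G towers cause illness"):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name")
            print(publisher, "-", review.get("textualRating"), "-", review.get("url"))
```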
Dealing With Visual Content: Pics and Videos Can Lie Too
For images:
- Use reverse image search tools like Google Images or TinEye.
- Check if the photo was taken out of context or heavily edited (see the quick sketch after this list).
- Look for weird lighting or perspective issues that might scream Photoshop.
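Reverse image search itself happens in your browser, but once you’ve tracked down a suspected original, a perceptual hash comparison can hint at whether the viral copy was altered. This is a rough sketch assuming the Pillow and imagehash Python packages; the file names are placeholders.

```python
# A quick sketch: compare a viral image against a suspected original
# using a perceptual hash. Assumes Pillow and imagehash are installed
# (pip install pillow imagehash); file names are placeholders.
from PIL import Image
import imagehash


def hash_distance(path_a: str, path_b: str) -> int:
    """Smaller distance = more similar; 0 means visually identical."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # Hamming distance between the two hashes


if __name__ == "__main__":
    distance = hash_distance("viral_post.jpg", "original_from_agency.jpg")
    print(f"Perceptual hash distance: {distance}")
    if distance > 10:
        print("Significant differences: the viral copy may be edited.")
```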
For videos:
- Use Amnesty International’s YouTube DataViewer or the InVID browser extension to grab still frames and dig deeper.
- If it’s supposedly taken in a certain city or country, try to verify it with tools like Google Earth or Wikimapia.
- Use something like SunCalc to match up shadows and timestamps (see the sketch after this list).
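SunCalc is a website, but the same shadow sanity check can be scripted. Here’s a minimal sketch assuming the pysolar package; the coordinates and timestamp are placeholders standing in for whatever the post claims.

```python
# A sketch of the SunCalc-style check: does the sun's position at the
# claimed time and place match the shadows in the footage?
# Assumes the third-party "pysolar" package (pip install pysolar).
from datetime import datetime, timezone

from pysolar.solar import get_altitude, get_azimuth

# Placeholder values: the location and timestamp the post claims.
latitude, longitude = 48.8584, 2.2945          # e.g. central Paris
claimed_time = datetime(2025, 6, 15, 14, 30, tzinfo=timezone.utc)

altitude = get_altitude(latitude, longitude, claimed_time)  # degrees above horizon
azimuth = get_azimuth(latitude, longitude, claimed_time)    # compass direction of the sun

print(f"Sun altitude: {altitude:.1f} deg, azimuth: {azimuth:.1f} deg")
if altitude <= 0:
    print("The sun would be below the horizon: daylight footage can't match.")
```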
Digital investigator Christiaan Triebert from Bellingcat backs this up:
"We probably use geolocation tools most often. If you know the location, and it's correct and verified, you'll probably find more information related to the case."
Watch Out for Deepfakes and AI Fakery
Thanks to generative AI, you can’t always trust your eyes and ears anymore. Fake videos and audio clips are getting eerily good.
Here’s what to watch for:
- Unnatural facial expressions or lip sync issues
- Inconsistent lighting and shadows
- Anything featuring public figures saying outrageous things
There’s no perfect tool yet for spotting deepfakes, but awareness is your first defense. If something feels too outrageous to be real, pause and verify.
Scientific Claims? Handle With Care
A post quoting “a new study” might sound credible, but context is everything.
Here’s how to check:
- Who did the study and who paid for it?
- Was it peer-reviewed or just tossed on the internet?
- What was the sample size?
- Are other experts backing it up?
Don't base everything on a single study. Look for consensus across multiple reputable sources.
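One quick existence check you can run yourself: the public Crossref API will tell you whether the cited study is a real, published paper and where it appeared. This is a sketch assuming the requests package; the query string is a placeholder, and Crossref only confirms publication details, not quality.

```python
# A sketch: look up whether "a new study" actually exists and where it
# was published, using the public Crossref API (no key required).
# The query string below is a placeholder.
import requests


def find_study(query: str, rows: int = 3) -> list[dict]:
    """Return the closest-matching published works for a claimed study."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": query, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["message"]["items"]


if __name__ == "__main__":
    for work in find_study("coffee consumption heart disease 2024"):
        title = work.get("title", ["(no title)"])[0]
        journal = (work.get("container-title") or ["(no journal)"])[0]
        print(f"{title} - {journal} - DOI: {work.get('DOI')}")
```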
What’s New in the Fact-Checking World
Back in January 2025, Meta switched things up. They dropped their third-party fact-checking program and replaced it with a community notes system, similar to what X (formerly Twitter) uses. CEO Mark Zuckerberg said the move was to address concerns about perceived bias in traditional fact-checkers.
But not everyone is convinced this is a win. While community notes can be helpful, experts worry about how quickly they kick in, especially during breaking news, when speed matters most.
Professor Anjana Susarla from Michigan State University explained the usual moderation flow like this:
"Content moderation typically involves three steps: scanning content for harmful elements, assessing whether flagged content violates policies, and intervening appropriately."
With the shift away from trained fact-checkers, that process could take a hit.

What To Do After You Fact-Check
If a post turns out to be false:
- Don’t share it, even “just to debunk it.” That can give it more reach.
- If you already shared it, post a correction or take it down.
- Send a private message to the person who shared it with you. Include some solid evidence.
- If it clearly violates platform rules, go ahead and report it.
If it’s mostly true but missing key details:
- Add helpful context if you decide to share.
- Include links to primary sources for people who want the full story.
As one study notes, combating misinformation takes more than just fact-checking. It requires
"Not only consistent fact-checking but also adequate propagation of the fact checks."
That means it’s not enough to know the truth; you have to help spread it too.
Final Thoughts: Think Before You Share
Platforms are changing, and misinformation is evolving right alongside them. That’s why your own digital common sense is one of the most powerful tools we have right now.
Whether it’s a fake quote, a doctored video, or a misinterpreted study, taking a few extra minutes to verify a viral post can stop a whole wave of confusion. And honestly? That little pause might save someone in your network from spiraling over something that isn’t even real.
When in doubt, don’t share out. Your feed (and the internet) will be better for it.
Sources:
The Poynter Institute for Media Studies

Damjan
