Critical Thinking and Fake News

See also: Assessing Internet Information

The phrase ‘fake news’ exploded into standard currency during the 2016 US presidential election. A decade later, the challenge has evolved from simple false stories into a sophisticated landscape of AI-generated content, deepfakes, and automated disinformation.

The bad news is that modern disinformation is more believable than ever, and it is extremely easy to get caught out.

This page explains how you can apply critical thinking techniques to news stories and digital media to reduce the chances of being misled, or at least to start from the understanding that ‘not everything you see or read is true’.


What is ‘Fake News’?

‘Fake news’ refers to stories that are either completely untrue, or do not contain all the truth, with a view to deliberately misleading readers.

While the term is still widely used, experts now often distinguish between misinformation (false information shared by accident) and disinformation (false information shared deliberately to cause harm).



In May 1897, Mark Twain, the American author, was in London. Rumours reached the US that he was very ill and, later, that he had died. In a letter to Frank Marshall White, a journalist who inquired after his health as a result, Mark Twain suggested that the rumours had started because his cousin, who shared his surname, had been ill a few weeks before. He noted dryly to White,

“The report of my death was an exaggeration.”

It had, nonetheless, been widely reported in the US, with one newspaper even printing an obituary.

Fake news is not:

  • Articles on satirical or humorous websites (like The Onion or The Daily Mash) that comment on the news by satirising it, because these are intended to inform and amuse, not misinform.

  • Anything obvious that ‘everyone already knows’ (often dismissed with the phrase ‘that’s not news’).

  • An article whose content you simply disagree with.

The deliberate intention to mislead is the crucial distinction.


The New Frontier: AI and Deepfakes

In recent years, the rapid advancement of Artificial Intelligence (AI) has complicated the landscape.

We have moved beyond simple written articles to ‘synthetic media’. This includes:

  • Deepfakes: Videos or audio recordings where AI has been used to swap faces or clone voices, making it appear that public figures have said or done things they never did.

  • AI-Generated Images: Hyper-realistic photos of events that never happened, often created to incite an emotional reaction during a crisis or election.

  • Bot Networks: Automated social media accounts that can amplify a false story to make it look like a popular opinion.

This technology means that ‘seeing is believing’ is no longer a reliable standard for truth. Critical thinking must now be applied to images and audio, not just text.

Why is Disinformation a Problem?

If rumours have been around for so long, why is this suddenly a global crisis?

The answer is that social media algorithms combined with AI automation allow credible-looking fake stories to spread globally in minutes.

In the worst cases, they can have real-world consequences. From the Pizzagate shooting to more recent instances of AI-generated images affecting stock markets or inciting civil unrest, the impact is tangible. In less critical cases, fake reports can result in distress or reputational damage for the people or organisations mentioned.

It is, therefore, important to be alert to the potential for content to be synthetic or manipulated, and to ensure that you are not party to its spread.

Spotting Fake News and AI Content

Unfortunately, it is increasingly difficult to spot false news with the naked eye.

Early AI images often had obvious flaws (like extra fingers or garbled text), but the technology is improving daily. However, the principles of critical thinking remain your best defence. Useful tips include:

  • Lateral Reading

    Don't just stay on the page. Open a new tab and search for the story. If a major event has occurred, legitimate news sources like national broadcasters or reputable papers will be covering it. If the only source is a random social media account or a blog you've never heard of, be sceptical.

  • Reverse Image Search

    If you see a shocking image, right-click it (or long-press on mobile) and search for the image with Google or a dedicated verification tool. You may find that the image is actually from a completely different event years ago, or that it has been flagged as AI-generated.

  • Check the Source ‘About Us’

    Be wary of stories written by unknown sources. Check their website URL—does it look slightly ‘off’ (e.g., .co.com instead of .com)? Does the site have a clear ‘About Us’ section listing editorial standards, or does it look generic?

  • Check Your Emotions

    Disinformation is designed to trigger a strong emotional response—usually anger, fear, or shock. If a headline makes you immediately furious, pause. That reaction is exactly what the creator wanted: a way to bypass your critical faculties.

This advice boils down to reading and viewing critically.

That does not mean looking for flaws or being cynical about everything. Instead, it means applying logic and reason to your thinking, so that you make a sensible judgement about what you are consuming.

In practice, this means being alert to why the content has been created, and what the creator wants you to feel, think or even do as a result of seeing it.

For more about this see our pages on Critical Thinking and Critical Reading.

A Word About Bias

It is worth remembering that everyone has opinions, which are potential sources of bias in what they write. These may be conscious or unconscious. News organisations tend to have an organisational ‘view’ or political slant. For example, the UK’s Guardian is broadly left-wing, while most of the UK tabloids are right-wing; this affects both what they report and how they report it.

As a reader, you also have biases, both conscious and unconscious, and these affect the stories you choose to read, and the sources you use. It is therefore possible to self-select only stories that confirm your own view of the world, and social media algorithms are designed to feed this habit.

To overcome this, it is important to use more than one source of information, and to choose sources that differ at least slightly in their political views.


A Final Thought

Fake news spreads so fast because we all like the idea of telling people something that they did not already know, something exclusive, and because we want to share our view of the world. It’s a bit like gossip.

But like false gossip, fake news can harm. Next time, before you click on ‘share’ or ‘repost’, just take a moment to think about whether the story that you are spreading is likely to be true or not. Even if you think it is true, consider the possible effect of spreading it. Is it going to hurt anyone if it turns out to be false?

If so, don’t go there: think before you share.
