Critical Thinking and Artificial Intelligence (AI)
See also: Critical Analysis

A 2025 study published in the peer-reviewed journal Societies found that people who used more AI-based tools were less likely to have good critical thinking skills. The study was, naturally, followed by several blogs and commentary articles suggesting that AI was eroding our ability to think critically, and dire warnings were issued to employers and educators about the need to teach critical thinking skills.
But is the situation really that simple? This page unpicks the relationship between critical thinking and AI. It discusses why using AI tools might be a symptom of poor critical thinking, rather than a cause. It also explains why critical thinking is likely to be more, not less, important when you use AI tools, and how you can develop better critical thinking skills.
What is Critical Thinking?
It is worth taking a brief excursion into what we mean by critical thinking.
Our page on Critical Thinking explains that it is “the ability to think clearly and rationally, understanding the logical connection between ideas”. You can also think of it as the ability to engage in careful and independent thought, and challenge ideas rather than take them at face value.
People with good critical thinking skills tend to understand the links between ideas, and critically examine arguments to identify errors in the logic or thinking. They analyse evidence and use it to support or refute arguments. They also ask questions to ensure that they understand, and do not simply accept statements as fact. They are not cynical, but they are sceptical about whether they can believe everything they read, see and hear.
Critical thinking skills are essential in learning, and at work, to explore and understand problems.
They are also becoming more important for all of us as we navigate a world shaped by social media and are exposed to more and more fake news.
Unpicking the Relationship Between AI Use and Critical Thinking
What do we know about the relationship between the use of AI and critical thinking?
A recent study found that using AI-based tools was generally associated with poorer critical thinking skills. In other words, when people used more AI-based tools, they were less able to think critically (you can read the full study here: https://doi.org/10.3390/soc15010006).
The researchers suggested that this effect arose through a process known as cognitive offloading.
Cognitive offloading is the process of ‘outsourcing’ your brain’s work to tools. Examples include writing a grocery list so you do not have to remember what you need, and using ‘to-do’ lists to remember tasks. The idea is that you use this technique to free up your brain for things that you really need to do.
The suggestion is that people are offloading the need to think critically to the AI-based tools, and therefore losing their own critical thinking skills.
But is this really what’s happening here?
Possibly not. What is interesting about this research is that the effect was not universal.
Older and more educated people were more likely to show better critical thinking skills, regardless of how much they used AI. It seems likely that these groups already had better critical thinking skills because of their experience and education.
This suggests that the relationship between AI use and lack of critical thinking skills may not be causal—or at least not in the direction that the researchers suggested. In other words, people with poorer critical thinking skills may be more likely to use AI-based tools to overcome the deficiency in their own skills. That is cognitive offloading, but it is not a cause-and-effect relationship.
Why Should We Worry?
What is the worry about critical thinking and AI?
To understand the concern about the use of AI and critical thinking skills, you need to understand a bit about how generative AI works.
Generative AI appears very human-like. You ask it a question and it replies in the kind of language that your friends might use. However, it is not thinking. Instead, it puts words together extremely effectively in response to the words that you have used. Note that it is not responding to the question itself, or to what you were thinking, but to the words you used, in the order that you used them.
Its output therefore appears to be rational, but it will not necessarily be factually correct. Indeed, it is a well-known feature of large language models (and other generative AI) that they hallucinate, or make up information.
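To make this concrete, here is a deliberately simplified sketch in Python. It is a toy 'bigram' model, nothing like a real LLM internally, but it illustrates the core idea: the program predicts a plausible next word from the words that came before, with no notion of whether the result is true.

```python
# A deliberately tiny 'language model': it learns which word tends to
# follow which from a small corpus, then generates text by repeatedly
# picking a likely next word. Real LLMs are vastly more sophisticated,
# but the core idea is the same: predict the next token from the
# previous ones, with no model of truth.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by sampling a plausible next word at each step."""
    word, output = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # plausible, not necessarily true
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

A real large language model uses a neural network trained on vastly more text, but the generating loop is conceptually similar: plausibility, not truth, drives each choice. That is why fluent output can still be factually wrong.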
You can find out more about how these algorithms work in our page on Understanding Large Language Models (LLMs).
It is therefore extremely important to apply critical thinking to the output of AI.
You need to examine its output to see if it makes sense, and to fact-check it. This sounds straightforward, but even eminent researchers and experts on AI have been caught out (see box).
The Case of the Phantom References
In December 2025, The Times published an article claiming that a book published by Springer Nature called Social, Ethical and Legal Aspects of Generative AI contained many citations that appeared to have been invented. These so-called ‘phantom’ references are often a sign of the use of generative AI. More than two-thirds of the citations in one chapter could not be verified.
It seems that even experts on AI—alongside scientific publishing giants—can be caught out by AI-generated content.
Perhaps the real issue lies in our own confidence in AI—or rather, in the relationship between our confidence in AI and our confidence in our own judgement.
A 2025 study by Microsoft found that knowledge workers (people who work with information) who had more confidence in AI were less likely to use critical thinking skills to examine the output of the AI. However, those with more self-confidence—that is, confidence in their own judgement—were more likely to use critical thinking to examine the output.
This therefore echoes the study discussed earlier: when you have more knowledge and experience, you are more likely to be able to critically examine AI output. However, you are also likely to be using AI less often.
Unfortunately, this means that those with the most need to examine AI output—that is, those who rely on it most because they lack the skills to do tasks for themselves—are also the least able to do so critically.
Applying Critical Thinking to AI
The next question is: what should you do to apply critical thinking skills to AI-generated output?
First, start by developing scepticism.
You should be prepared to ask questions about everything that emerges from an AI-based model. The big question to ask is “Can this be verified using other sources?”. Make a habit of checking everything that you do not already know to be true—and be particularly sceptical about any sources that the AI generates.
This will inevitably take longer than accepting the output at face value. However, it will also help to develop your ability to detect ‘fake’ content, and the process will speed up over time.
Our page on Critical Thinking contains more questions that you can use to interrogate AI-generated content. Our page on Critical Thinking and Fake News will also help you to understand what kinds of questions to ask.
Developing confidence in your own judgement—which also means developing your own knowledge—can also help. Reading widely about current affairs (and not just the stories that your social media algorithm throws up) is one way to do this. For work, you should take time to read the trade press in your industry, and get a better feel for current issues there. This will help you to filter AI-generated content and start to be able to assess its accuracy without having to check it.
Finally, use AI only as a resource to supplement your own effort, not to do the work for you. Current AI models have too many gaps, and they lack contextual understanding. They have no genuine knowledge, even though they can recognise patterns at enormous speed, and they cannot generate anything truly new. This means that they can speed up routine work very effectively, but only if you use them wisely.
You may be interested in our page on Using Large Language Models (LLMs), which explains how to use LLMs effectively but cautiously.
And Finally...
Critical thinking is not just useful for assessing the outputs from AI; it is essential.
It is unlikely that any AI-based tool is ever going to replace the human ability to think critically, and certainly not in the near future. Critical thinking is therefore a vital transferable skill to develop. Fortunately, AI-generated content also provides a very good means of honing those skills by applying a sceptical eye to what you see.
Continue to:
Sources of Information
Research Methods
