Identifying and Using AI-Generated Images and Videos
See also: Using Large Language Models

Anyone who uses social media in any form is likely to have come across AI-generated images and videos. They are becoming increasingly ubiquitous.
However, they are possibly not quite as ubiquitous as comments saying ‘That’s AI’, followed by a response from the photographer saying that no, this is an image that they took at [location] on [date]. They usually—but not always—go on to say that they have edited or touched up the image slightly.
It seems that we are becoming increasingly confident in claiming to recognise AI-generated images, but that our assessment is not always correct. Research from 2025 found that only 47% of UK teenagers claimed to be confident in identifying AI-generated content—and that’s an age group renowned for over-estimating their own expertise. This page aims to help by providing some advice for assessing images and videos for ‘tells’.
Where Do We Start?
Let’s begin by thinking about our starting point for assessing images and videos for accuracy.
Consider these two pictures. One is AI-generated, one is from life. Can you spot which is which?
You probably guessed (correctly) that the one on the right is AI-generated. The one on the left is courtesy of our lead writer, Melissa Leffler.
You were probably also fairly certain about which was the ‘fake’ image.
But why?
Responses given in a workshop that used these images included:
“Because clouds don’t look like that” and “Trees in this country aren’t like that.”
In other words, when you look at a photo or image, you measure it against what you already know.
How can we tell it’s AI? Because we’ve never seen anything that looks quite like that, or it looks a bit ‘wrong’. That is our starting point. However, what if it is something that we have never seen before, something quite outside our own experience? Do we automatically discard it as untrue?
Obviously, there are people who do exactly that. They are the ones who are quickest to cry ‘AI image’ or ‘AI video’ in the comments section.
It’s not necessarily a bad thing to be a bit sceptical about content. In fact, it’s probably a reasonable starting point. However, you should not leap to judgement either, because some use of AI is acceptable.
Image Manipulation and Creation: Identifying Acceptable Use
Both image creation and image manipulation are here to stay.
We therefore need to consider how they are used, and perhaps set some boundaries around what is ‘acceptable’ and ‘unacceptable’.
If we look at the rules for the Natural History Museum’s Wildlife Photographer of the Year competition, this gives some hints about what we might broadly consider to be acceptable.
Under ‘Ethical Requirements’, the rules state “(2) Your photographs must report on the natural world in a way that is creative, honest and ethical: (i) entries must not deceive the viewer or attempt to disguise and/or misrepresent the reality of nature.”
The rules later state:
“...digital adjustments are permitted providing that they comply with the Competition’s principles of authenticity i.e. they do not deceive the viewer or misrepresent the reality of nature, or what was originally captured by the camera”.
In other words, photographers are permitted to ‘touch up’ and even combine images, provided that the eventual picture is still an accurate representation of reality. For example, one Highly Commended photograph from the 2024 competition (Strength in Numbers by Theo Bosboom) was created from a stack of nine images to allow the whole picture to be in focus.
The key is that we are only prepared to accept image manipulation when it is used to make things clearer, and not when it is used to confuse or mislead.
What about AI-generated content?
The rules are clear about that: no AI-generated images are permitted at all in the competition. You also cannot add any new image content, either manually or using AI.
In other words, AI-generated images are not photographs, and cannot be used as such in this particular competition, at least.
This seems a reasonable distinction to draw: we can accept manipulation that enhances the image, and makes things clearer. However, we should be sceptical about AI-generated images, because they do not show reality. These images can be used when we want to make a point—but should always be acknowledged as AI-generated.
Asking Questions
If you come across an image or video on social media, and you are not sure whether it is AI-generated, there are some questions that you might consider asking. They include:
- Does it look right?
This is always the first question to ask: is it realistic, or are there elements that don’t quite stack up?
For example, AI-generated images often give the people or animals they show too many (or too few) arms or legs: an octopus with extra arms, say, or limbs that don’t connect back to the body. Videos might use unrealistic angles, or show something that’s physically impossible or just very unlikely, such as animals behaving in an unnatural way: a lion rescuing a lamb from a flood, or rabbits bouncing on a trampoline (both real examples).
- What’s the quality like?
Is the picture or video pixellated at all?
A pixellated video, or one that blurs at the edges, is quite likely to be AI-generated. This is not an absolute guide, but it is a good starting point for asking more questions. Poor quality videos allow the creators to hide the rougher edges of the AI generation, so many creators reduce the resolution and compress the video to make it look more realistic, and as if it has been filmed on an older phone.
- Who posted it?
What is the source, and particularly, is it a parody account?
Parody accounts are easy to miss, because they look like genuine news or comment accounts. However, they often include ‘fun’ and ‘shock’ content to generate likes.
A parody account will usually say so in its bio on social media. Once you know that, you need to be sceptical about everything that you see there, however realistic it looks. Basically, if it’s on a parody account, it’s probably not real, although it might mash together two real images or videos.
- What do the comments say?
The comments are not always a reliable guide, but there are usually plenty of people there to set the record straight.
And if it’s not an AI-generated image? The original poster will generally respond to explain how they took and edited the photograph or video, and you will see a discussion about it.
- Can you verify the story behind the picture or video?
This is perhaps the most useful question of all.
Is anyone else showing the same picture or video, or anything similar, and is that ‘anyone else’ a reliable source? Has anyone except the original account verified its accuracy? That doesn’t mean shared it, but checked it and confirmed that it is accurate. If you can’t verify the story behind the picture using other sources, it’s probably not true.
There is more about this approach in our page on Critical Thinking and Fake News.
A Growing Problem—and a Simple Answer
AI-generated images and videos are here to stay. What’s more, as the AI algorithms that generate them improve, so will the quality of the output. This means that questions like ‘does it look right?’ will not be a reliable guide for long.
Instead, the key to spotting AI-generated images and videos is to develop a healthy scepticism.
Images and videos used to be much harder to manipulate or create, so we are used to thinking of them as reliable. Now, however, that is not true. They are no more reliable than a report of what someone else said. You wouldn’t believe something simply because someone had written it down—so why believe a video?
Instead of taking things at face value, you need to look at who posted it, the context, and whether anyone else has verified it. Those clues will endure long after the quality questions have become redundant.
