Using Artificial Intelligence (AI)
Ethically in Education
See also: Study Skills
Like it or not, artificial intelligence (AI) is here to stay. Tools powered by AI are also getting more effective and useful every day. It is not surprising that students and learners are increasingly turning to these tools for help with their studies. However, is their use always ethical? And how would you know if you were using AI ethically or not?
Many educational institutions, including schools, colleges and universities, have policies about the use of AI, including to support the writing of graded work. Academic journals also have policies to explain when the use of AI is acceptable in writing academic papers. The rapid development of AI-based tools, however, means that the situation is not always easy to disentangle. This page provides some advice for students about how to judge the use of AI, and how best to use it to support learning.
Some Facts and Figures about AI in Educational Settings
It is important to be clear: students are already using AI. For example:
The Digital Education Council’s (DEC) 2024 Global AI Student Survey shows that 86% of students claim to use AI in their studies, and more than half use AI at least weekly.
A YouGov survey of 1,000 UK students found that two-thirds had used AI, and a third used it at least weekly.
A 2025 study by the Higher Education Policy Institute (HEPI) in the UK found that 88% of students had used generative AI for assessments. The proportion using AI tools of any kind was 92%.
However, these surveys also found that many universities and colleges lagged behind their students.
More than 10% of respondents to the YouGov survey said that their university or college only gave a general warning against AI, with little or no clear guidance about its use. Almost half said that although the organisation’s guidance about AI use was clear, they had not been taught about the appropriate use of AI, or the potential pitfalls. A quarter felt that their university’s rules were not strict enough—and this was most likely to be true where the policy was very broad-brush.
In the HEPI survey, only 29% said their university encouraged them to use AI, with 40% disagreeing. However, 80% believed that their university’s policy was clear, and three-quarters believed that the use of AI would be spotted in assessed work, which provided a clear disincentive to use AI to cheat.
Think you don’t use AI? Think again.
You may think you’re not using AI if you don’t use ChatGPT.
However, AI is now almost everywhere.
Google Search? Many search results pages now open with an ‘AI Overview’. You’ve probably used it at least once without even being aware of it.
Grammarly? AI-powered.
Accepting the suggestions from Microsoft Copilot in Word or Excel? You’re using AI.
Duolingo? AI-based (as is almost anything that offers personalised learning).
Are you really certain that even your basic spellcheck isn’t AI-powered?
You can’t really afford to be blasé about AI. You need to understand how to use it appropriately.
How Students Use AI
It is illuminating to look at how students are using AI.
The DEC survey found that 69% of students were using AI-powered tools to search for information, followed by 49% using them to check grammar.
This is potentially problematic: as our page on Understanding Large Language Models explains, the most common LLM (ChatGPT) is NOT a search engine. It can hallucinate results (make information up) in a very convincing way. Google’s Gemini is slightly better. It will show you the sources of its information—but its understanding of those sources is not always right.
Using AI ‘blind’—that is, without checking back to the sources—can therefore leave you with a significant problem.
Interestingly, the results from the two UK-based surveys about how students are using AI are rather different.
Both those surveys suggest that the primary uses of AI among UK students are to explain concepts more clearly, and to summarise sources. These uses are likely to be less problematic in practice.
Are students using AI to cheat?
One of the biggest questions is whether students are using AI to ‘cheat’.
The YouGov survey asked students about three behaviours that would be considered cheating:
- Using AI to create sections of a piece of work that counted towards a pass/fail grade;
- Using AI to create a whole piece of work that counted towards a pass/fail grade, and then editing it before submission; and
- Using AI to create a whole piece of work, and submitting it without checking it at all.
Overall, 15% of students (and 23% of those who had ever used AI) had engaged in one or more of those behaviours. However, very few (only 3% overall, and 5% of those who used AI) were submitting without checking the output. This is backed up by the HEPI survey, which found that only 8% did not always check the output before submitting, whereas 25% would edit the output themselves before using it for an assessment.
However, edited or not, this is a grey area—and this use of AI might well be considered cheating.
Appropriate Use of AI in Education
How can students assess whether their use of AI is appropriate?
A good starting point is information from the United Nations Educational, Scientific and Cultural Organization (UNESCO) (see box).
UNESCO’s AI competency framework for students
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has produced competency frameworks for the use of AI in education, one for teachers and one for students.
The students’ framework is designed to help schools, colleges, universities, teachers and policy-makers to equip students with the skills and knowledge to engage with AI appropriately.
It contains four core competencies:
- A human-centred mindset: students should “assert their agency” when using AI: that is, recognise that AI is a tool that must be managed by humans if its use is to be appropriate.
- Ethics of AI: students should understand the responsible use of AI, including ethics-by-design and how to use AI safely.
- AI techniques and applications: students should be given the skills and knowledge to manage AI.
- AI system design: students should understand how they can use AI to support their own problem-solving, creativity and design thinking.
Source: unesco.org
Drawing on this framework, it is clear that appropriate use of AI is where:
- AI is used as a tool to support work, and not to do the work for you
For example, it is reasonable to use AI to summarise your notes, including notes supplied by lecturers, or to summarise large documents available online. It is also perfectly acceptable to use it to explain concepts to you, especially very complex ones. You can even give it a set of bullet points and ask it to show you where your argument could be improved, or use it to help you think through ideas.
However, you should be aware that its summary of any content may not be the same as you would have produced yourself. For example, it might not have the same priorities as you, or identify the same number of key points. It won’t be the same as reading something for yourself—but if you have read the source document, the AI summary will probably be enough of a reminder.
Similarly, by all means grammar- and spell-check your work. Just don’t incorporate every suggestion automatically.
Make sure that what you submit for any assessment is all your own work, and carries your intended meaning. If in doubt, reword it.
- The output is never going to be used for an assessment
This one is clear-cut: you can use AI however you like if the output is not contributing to any kind of assessment, provided that the use does not contravene copyright law. Summarise your notes, books, and papers as much as you like into revision notes, bullet points or prompt cards. Provided you have to sit an exam and write your answers without help, this is acceptable.
As soon as you are submitting any part of the output for any kind of formal assessment, you are on trickier ground, and need to check your institution’s policy.
- The outputs of AI are within clear ‘guardrails’ and fact-checked thoroughly
This is more a question of appropriate use than of ethical use.
Don’t take anything for granted that emerges from an AI-based tool. If you don’t already know that it’s correct, check it. Use any built-in guardrails appropriately, and where guardrails are not built-in, add your own. Any other use of AI cannot fundamentally be considered ‘safe’.
It bears repeating that AI doesn’t know anything; it only puts appropriate-sounding answers together.
- The student takes full responsibility for the work, and is confident in doing so
If you have used AI appropriately in the preparation of any assignments, you will be confident in taking responsibility for the content of your work. If it is AI-generated, you won’t be able to do that—at least, not realistically. Think about it like this: if your assessor or marker asks you why you explained something in a particular way, could you answer? Could you justify the choice of one word over another? If not, it’s not really your work.
This is really the bottom line: you should be able to answer questions and explain everything in any piece of work you submit.
As our page on Using Large Language Models notes, the more that you engage with the output from an LLM, the more useful it will be to you.
You may also find it interesting to read our guest post on using AI in education, and particularly the section on the skills required by learners to succeed when using AI.
Inappropriate Use of AI in Education
It is also worth touching briefly on what would be considered inappropriate use of AI in education settings.
This is, largely, the reverse of the appropriate use. It includes:
- Using AI to do the work for you, especially for assessments
If you asked an LLM to produce an essay on a particular subject, and then handed it in, that use would be neither ethical nor appropriate. Even if you edited the essay first, it’s still not really ethical, because the ideas are not your own.
It’s also questionable whether it is appropriate to ask it to write a paragraph for you. It might just be OK if you gave it bullet points and asked for them to be turned into clear text—but you would need to be absolutely confident that it hadn’t added anything.
Remember, it’s a tool for you to use to help you, not a substitute for your brain.
- Accepting the output from an AI algorithm without checking it thoroughly
There is never a guarantee that the AI is not hallucinating. Even with clear guardrails, go back to the source and check the accuracy, or make sure that its suggestions for grammar and spelling are appropriate. This is your work, and you have to be able to take responsibility for it.
You cannot take responsibility for the work if you have not carried out these checks.
- Anything that contravenes the guidelines set out by your educational institution
Even if your use of AI is consistent with these guidelines, and falls within what you consider to be appropriate, it may not be permitted by your institution. Make sure that you check rules and guidelines carefully, especially for assignments. Policies may be deliberately very broad, rather than specific, and might exclude some reasonable use.
You don’t want to be caught out, because that could have serious consequences.
Further Reading from Skills You Need
The Skills You Need Guide for Students
Develop the skills you need to make the most of your time as a student.
Our eBooks are ideal for students at all stages of education, school, college and university. They are full of easy-to-follow practical information that will help you to learn more effectively and get better grades.
If in Doubt, Either Ask, or Don’t
There is a simple principle that you can follow about using AI: if in doubt, either ask whether your proposed use is appropriate, or just don’t do it at all.
If you, the student, have doubts, you can be confident that the university or school will have more.
Asking the question is unlikely to get you into trouble. Not asking just might. This really is not a situation in which to ask forgiveness rather than permission.

