Ethical AI Use
See also: Using Large Language Models (LLMs)

It is only a few years since the release of ChatGPT, but generative artificial intelligence (AI) and large language models (LLMs) have now become part of the landscape for many of us.
We use AI—knowingly or otherwise—to support internet searches, help us write letters, supply images and choose films or books. However, alongside the explosion in the use of these tools have come concerns about the ethics of their use.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) states that AI systems have the potential to “embed biases, contribute to climate degradation, threaten human rights and more”. It adds that these risks have “already begun to compound on top of existing inequalities, resulting in further harm to already marginalised groups”. In other words, by embedding biases and compounding inequalities, AI poses a threat to human rights.
This is a fundamental problem, but it is by no means the only issue with AI. This page explains more about these issues, describes how ethical AI-based systems can be developed, and shows how you can ensure that your own use of AI is ethical.
What is Ethical AI?
UNESCO’s Recommendation on the Ethics of Artificial Intelligence sets out four core values and ten core principles that must be upheld for AI use to be considered ethical.
Defining Ethical AI
Four core values:
Human rights and human dignity;
Living in peaceful, just and interconnected societies;
Ensuring diversity and inclusion; and
Environment and ecosystem flourishing.
Ten core principles:
Proportionality and do no harm: AI systems should only be used when necessary to deliver a legitimate aim;
Safety and security: risks to safety and security should be considered and minimised;
Right to privacy and data protection: users’ data and privacy must be protected at all times, throughout the AI lifecycle (development, deployment and use);
Multi-stakeholder and adaptive governance and collaboration: inclusive approaches are needed to govern AI-based systems;
Responsibility and accountability: systems must be auditable and traceable;
Transparency and explainability: all AI-based decisions must be transparent and explainable;
Human oversight and determination: AI must not replace human accountability;
Sustainability: the wider impact of AI systems should always be considered;
Awareness and literacy: public understanding of AI and data should be promoted; and
Fairness and non-discrimination: AI must be fair to everyone, and its benefits accessible to all.
These principles are easy to read and say—but what do they mean in practice?
In practice, ethical AI use means developing and using AI systems that prioritise fairness, transparency, accountability and privacy. This approach minimises harm and maximises societal benefit.
If you are responsible for developing and training AI systems, this means that you need to consider:
Fairness and non-discrimination
This is one of the biggest reasons why AI systems have been withdrawn from use over the years.
Examples of withdrawn systems include Amazon’s trial recruitment-screening algorithm, which systematically favoured men (and used very subtle signals to identify them, such as the societies that candidates had joined at university). Predictive policing systems have also come under fire for singling out poorer areas for increased policing.

The problem is that AI systems learn from training data, and when there is bias in the training data, that bias is replicated in the AI system. This means that existing imbalances are likely to be amplified by AI.
Developers need to apply care and thought to this problem—but there is also a role for users. Users should always ask whether AI use is necessary and proportional to solving the problem, or whether humans could do a better job with less bias. They should also oversee the outputs from AI, and ask questions if any odd patterns (such as selection of particular groups) seem to be occurring.
It is also important to involve stakeholders in developing any new systems—and that means a diverse group of stakeholders, not just ‘the usual suspects’. A group of people from a broad range of backgrounds overseeing a project is far more likely to identify potential bias than a group of, say, software engineers.
Transparency and explainability
The outputs and decisions made by AI need to be understandable and transparent to users.
It must be possible to audit decisions and understand how they were made. AI may look like a ‘black box’, but “Computer says no” is not an acceptable answer. It must be possible to explain why the computer says no, and to justify that answer with evidence.
Accountability and oversight
AI systems should routinely be overseen and monitored by humans, who retain ultimate accountability and responsibility for decisions.
This responsibility is shared between developers and users. Organisations and individuals who use AI-based systems cannot simply start the system running, and then ignore it (“set it and forget it”). Instead, the outputs need to be monitored continuously to ensure that they are fair, comprehensible, and generally make sense. Users should always check outputs to ensure that they are factually accurate.
Privacy and data protection
The obvious issue here is the protection of user data, and respecting individuals’ right to privacy.
However, there is also another issue. AI cannot produce anything that is genuinely ‘original’. It can only learn from what it has seen, and reproduce things in slightly different forms. This means that once it has been shown any data, of any kind, it might reproduce that data for someone else.

This includes sensitive data such as details about companies or individuals, which is why you should never share anything with an AI algorithm that is not already in the public domain.
However, it also includes artwork, photographs and original text—and AI is no respecter of copyright.
Yes, AI is capable of generating images for you—but that means that someone, somewhere, is not being paid for their photograph or painting. That does not seem very ethical, and it also means that over time, fewer people will produce original artwork, because it simply doesn’t pay. For now, that probably doesn’t matter all that much. However, in a few years, when nobody is producing original work, we will be stuck in a loop with no new creative output of any kind, and nobody with the necessary skills—and that is potentially extremely worrying.
Safety and security
The other side of the ‘privacy and data protection’ coin is safety and security.
AI systems should be safe to use, and as far as possible, secure from attacks. However, it also behoves users to be aware of the potential problems of using AI, and ensure that they do not share any sensitive information.
Environmental impact
AI has a huge environmental impact.
AI algorithms are enormously resource-intensive. They require huge amounts of energy to run, and the data centres that house the computers running AI algorithms also need to be cooled. They therefore make heavy demands on both water and energy.

We all need to be aware of this.
Even just a casual ‘play’ with an AI algorithm has a cost. This means it is crucial to be sure that you really need to use AI, and that there is no other reasonable (or better) way to do the task you have in mind. This is the principle of proportionality: that AI use should not go beyond what is really needed to achieve a legitimate aim.
THINK! Do you really need to use AI?
Before any kind of casual use of AI, especially if you are using it to generate text or anything that might be considered ‘art’, ask yourself: “Should I use AI, or could I either do it myself, or pay someone else to do it for me?”
Using AI may be fun, and fast, but it is almost literally costing the earth. More immediately, it may also cost us our creative industries.
A Few Overarching Rules
We can define a few overarching rules for using AI.
First is the proportionality principle: no use that goes beyond what is strictly necessary for a legitimate aim.
Second, systems should be designed around the needs, rights and well-being of humans. Just because you can do something does not mean that you should do it.
Third, the governance and management of AI needs diversity and inclusivity. A broad group of stakeholders is much more likely to identify and challenge bias or unfairness (and our page on Diversity in Groups and Teams explains more about this).
Fourth, monitoring is vital. AI systems cannot be left to run without supervision. You have no idea what they are learning from the data passing through. Organisations using AI need to ensure that they carry out regular audits and risk assessments of all algorithms. Individuals should always fact-check and scrutinise the output of AI to ensure that it is both accurate and meets their needs.
If you follow these rules when using or considering implementing AI-based systems, you are unlikely to go far wrong.
A Final Thought: Think Long-Term
All these issues can broadly be summed up as the need to think long-term about the impact of AI before you jump into its use.
What seems like an immediate gain right now may have much longer term impacts, and it is advisable to consider these before use. Asking AI to generate an image or write a screenplay may seem a quick and cheap way of getting something usable, but it has long-term implications for creative industries. Playing with AI sounds like fun, but has an unseen environmental cost. Getting AI right is likely to be beneficial to us all, but there are serious issues that must be considered by anyone using AI. The ethics of AI cannot be left solely to developers: users also have a responsibility.
