15 April 2024
Fairness in AI: A View from the DRCF


All Digital Regulation Cooperation Forum (DRCF) member regulators are keen to ensure that AI is fair: that wherever AI is used in the UK, it does not result in anyone being exposed to discrimination or unfair treatment.

Fairness is an important consideration in our work on artificial intelligence (AI), so the DRCF gathered to examine the ‘Fairness’ principle: what it means, how it relates to AI, and its consequences for different regulatory remits. The DRCF brought the Equality and Human Rights Commission (EHRC) into this discussion, given the significance of fairness to the EHRC’s work and its collaboration with some of the DRCF member regulators in this area.

Fairness is one of the five principles proposed by the UK Government for the responsible development and deployment of AI across all sectors of the economy. The principles set out in the UK Government’s “A Pro-Innovation Approach to AI” White Paper, published in March 2023, are:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress.

In February 2024, the Government published initial guidance for regulators on considerations for the implementation of the five principles. As the guidance explains, the Government’s expectation is that regulators will interpret and apply the principles within their respective regulatory remits. Coordination between regulators is key; by building on our existing work, the DRCF therefore aims to explore the areas of overlap in regulators’ understanding of fairness, to ensure people are protected from AI harms.

How does fairness relate to AI?

Fairness issues can arise in a variety of contexts, and what counts as “fair” in AI depends on the situation. Fairness includes avoiding discrimination by reference to protected characteristics such as race and gender, but it is significantly broader, also covering other forms of fairness such as requirements to follow fair processes. In some situations, fairness means that people experience the same outcomes; in others, it means that people are treated in the same way, even if that produces different outcomes. Elsewhere in legislation, “fairness” aims to ensure that personal data is not unfairly exploited and that business practices do not create unfair marketplaces.

One major challenge in the adoption of AI has been algorithmic bias: when AI is called upon to make decisions, it can treat some groups of people unfairly compared to others. Such bias can have extremely harmful consequences, particularly when it emerges in AI systems used for life-altering decisions, for example assessing job applications or making significant financial decisions such as insurance pricing or mortgage approvals. Some AI systems have been shown to produce biased results, from facial recognition technology that is better at recognising male and white faces, to recruitment screening software that penalises job applications from female candidates.

The challenge of algorithmic bias is central to the discussion on fairness in AI. Bias can surface for a variety of reasons, and its source can be traced back to various points across the AI lifecycle and supply chain. Bias in algorithmic decision making can emerge as a result of the data used to train a model (which may be unrepresentative of the population being studied), or as a result of the behaviours and outlook of the individuals developing the algorithms, among other factors.
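One common way to surface this kind of bias is to compare outcome rates between groups. The sketch below is illustrative only: the group labels and decisions are invented, and the "disparate impact ratio" shown is just one of several possible fairness metrics, not a measure any regulator mandates.

```python
# Sketch: comparing model outcome rates across groups.
# Group labels and decisions below are invented for illustration.

def selection_rates(outcomes):
    """outcomes: list of (group, decision) pairs, decision in {0, 1}.
    Returns the rate of positive decisions per group."""
    totals, approved = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + decision
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group rate.
    Values well below 1.0 suggest the system favours one group."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening decisions: 1 = shortlisted, 0 = rejected
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)       # {'A': 0.75, 'B': 0.25}
ratio = disparate_impact_ratio(rates)    # ~0.33: a large disparity
```

A ratio this far below 1.0 would not by itself prove unlawful discrimination, but it is the kind of signal that should prompt closer investigation of the model and its training data.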

It can be difficult for regulators to determine whether algorithmic decision making has been biased. This is often because of the indirect nature of such bias, the complexity of the models used and the interrelations between different data points (for example, protected characteristics and correlated socio-economic indicators). The EHRC’s role in tackling bias is partly rooted in the Equality Act 2010, which legally protects people from discrimination on the basis of nine protected characteristics. Similarly, one of the core principles underpinning human rights is fairness. Article 14 of the Human Rights Act requires that all the rights and freedoms set out in the Act must be protected and applied without discrimination. All public bodies must comply with the Human Rights Act and the Public Sector Equality Duty as they adopt AI, and all regulators must do so as they consider fairness in AI within their own remits.
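The "indirect nature" of such bias often comes down to proxy variables: a feature that looks legitimate on its face can correlate strongly with a protected characteristic, so a model that never sees the characteristic can still discriminate through it. A minimal sketch of one basic screening step, measuring that correlation, is below; the data and the `postcode_score` feature are invented for illustration.

```python
# Sketch: screening a candidate feature for proxy correlation with a
# protected characteristic. All data below is invented.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# 1 = member of a protected group, 0 = not; postcode_score is a
# hypothetical socio-economic feature fed into a model.
protected      = [1, 1, 1, 0, 0, 0, 1, 0]
postcode_score = [2, 3, 2, 8, 7, 9, 3, 8]

r = pearson(protected, postcode_score)
# A strong |r| (here r is close to -1) means decisions driven by
# postcode_score may indirectly disadvantage the protected group.
```

Real audits use richer techniques than a single correlation coefficient, but the principle is the same: bias has to be looked for in the relationships between features, not just in whether a protected characteristic appears in the data.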

Besides the overarching duties in equalities and human rights law, each of the DRCF member regulators has existing requirements and frameworks which relate to oversight of fairness in AI, described below.

What does fairness mean for each of the DRCF member regulators?

Fairness in the context of data protection and consumer vulnerability is a key consideration for all DRCF member regulators. Fairness is a central principle and legal requirement of data protection law, on which the ICO has issued guidance. As the ICO guidance says:

“fairness means you should only process personal data in ways that people would reasonably expect and not use it in any way that could have unjustified adverse impacts on them. You should not process personal data in ways that are unduly detrimental, unexpected or misleading to the individuals concerned.”

Fairness in this context applies to all instances where personal data is processed, covering both the way data is processed and the outcomes of that processing. The ICO highlights that a “by design” approach to AI development is required by law to address fairness, alongside the other principles, from the very beginning of the AI lifecycle. Data protection law also provides additional protections against AI-driven discrimination on the basis of special category data, with greater safeguards in place around “significant” decision-making.

Some regulators, such as Ofcom, do not have direct powers to regulate fairness in AI, but they do have duties that can help them to consider fairness. Ofcom has a non-statutory Fairness Framework which allows it to consider fair practices in broadband, mobile, home phone and pay TV. Fair practice means customers’ services work as promised and remain reliable over time. Ofcom has also published a paper exploring how algorithmically driven personalised pricing could both benefit and undermine consumer fairness. The research found that most participants felt personalised pricing was unfair and were concerned about a lack of transparency in how prices would be calculated.

The FCA has various regulatory requirements about fairness that are relevant when firms use AI in providing financial services. These include the new Consumer Duty, which requires firms to act to deliver good outcomes for retail customers. The Duty includes a requirement for firms to act in good faith, avoid causing foreseeable harm, and enable and support retail customers, including those with characteristics of vulnerability, to pursue their financial objectives.

The CMA is examining consumer vulnerability and the principles required to help guide firms to design services that treat consumers fairly and help them make better choices. The CMA’s initial foundation model (FM) review includes guiding principles to aid firms in their development and deployment of FMs, one of which is “Fair Dealing”, warning against anti-competitive conduct and requiring maintenance of an effective competitive process. This principle is intended to ensure that firms can compete without unfair hindrance, including hindrance arising from AI systems that underpin the functioning of markets, such as self-preferencing in recommender systems.

What's next?

While each regulator has its own requirements and frameworks related to fairness in AI, we are joining up where possible on cross-cutting issues to ensure we are protecting people against the implications of unfair practices in AI decision making.

This initial cross-regulatory exploration provides us with a firm foundation to deepen our understanding and explore potential alignments. As we take this work forward, we will begin to shape our understanding of other AI principles, moving next to AI transparency and explainability. We will also continue to probe AI fairness through its application to individual case studies. By grounding our work in emerging use-cases, we aim to improve the clarity of our rules and help prevent AI from harming consumers and citizens.

Some of the DRCF’s other work on AI has an impact on how its member regulators oversee implementation of AI fairness. For example, the DRCF’s work to ensure that algorithmic systems meet good governance standards will help to improve the ecosystem for assessing whether any specific AI or algorithmic system is fair.

We would welcome views from developers and users of AI systems about what interactions between fairness and AI they see in their domain. Please contact us with any views on this.