It is important to all member regulators of the Digital Regulation Cooperation Forum (DRCF) that algorithmic systems meet good governance standards. This blog explains work being carried out across DRCF regulators to share technical knowledge and develop capabilities for effective assessment of algorithmic systems. It also introduces future work that the DRCF plans to undertake in relation to the growing third-party algorithmic auditing market, which DRCF regulators see as an important element of the overall assessment landscape.
Regulators’ existing powers and investments in auditing algorithms
Algorithmic systems have huge consequences for society: from the recommender systems that shape what users see on Facebook and TikTok, to generative Artificial Intelligence (AI) tools such as ChatGPT and Bard, facial recognition systems, and algorithmic trading.
The ability to understand how algorithmic systems work is important to all regulators. Each of the DRCF regulators has powers to investigate and hold firms to account for the way that their algorithms operate. DRCF regulators have powers to obtain data from companies and assess and investigate algorithmic systems: for example, the Information Commissioner’s Office’s (ICO) remit under the Data Protection Act 2018 and the UK GDPR; the Competition and Markets Authority’s (CMA) existing information gathering powers under the Competition Act 1998 and Enterprise Act 2002; and the Financial Conduct Authority’s (FCA) broad information gathering powers under the Financial Services and Markets Act 2000.
In applying these powers, DRCF regulators have developed practical experience of assessing algorithmic systems. For instance, the FCA has experience supervising algorithmic trading activity and has published a review of algorithmic trading in wholesale markets. The CMA has an ongoing investigation into the systems large platforms use to tackle fake online reviews and an open investigation under the Competition Act 1998 into the operation of Amazon’s retail marketplace. The ICO has undertaken a programme of audits of AI systems and taken enforcement action against the facial recognition company Clearview AI.
In addition, some DRCF regulators are likely to be given new powers to require firms to provide information about the performance and oversight of their algorithmic systems: for example, under the Online Safety Bill for Ofcom, and under the Digital Markets, Competition and Consumers Bill for the CMA in respect of undertakings designated as having strategic market status, with potential conduct requirements or interventions regarding the operation of algorithmic systems.
Building our capabilities in algorithmic assessment and audit
Each DRCF member regulator has been investing in the skills and capabilities of its staff to conduct investigations, audits and/or reviews of algorithms and technical systems. This has included creating technical and multidisciplinary teams, spreading relevant knowledge beyond technical teams, and supporting technical teams with the right tools and infrastructure.
For example, the ICO established an AI and Data Science team as a centre of excellence to guide regulatory policy on AI and support the assessment of AI systems in the regulator’s advice, audits and investigations. The CMA set up the Data, Technology and Analytics (DaTA) unit to build up its technical expertise and provide technical support during investigations. Ofcom has substantially grown its technical expertise, and now employs close to 50 data science and machine learning experts in its Data Innovation Hub and Trust and Safety Technology teams. These teams have been conducting internal pilots of algorithmic assessments, including a recent exercise to road test a method for assessing age estimation models. They have also built machine learning models to help Ofcom understand the efficacy of technologies such as content moderation systems. Similarly, the FCA has been developing its broader technological capabilities, for example creating Data Science units across the organisation which use advanced analytics to analyse risk, triage cases and automate processes.
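To give a flavour of what an exercise like assessing an age estimation model can involve, the sketch below computes a model’s mean absolute error overall and per age band. It is a minimal, illustrative sketch only: the predict_age function, the sample data and the age bands are hypothetical assumptions, not Ofcom’s actual methodology.

```python
# Illustrative sketch only: a minimal accuracy check for an age estimation
# model, reporting mean absolute error (MAE) overall and per age band.
# predict_age, the sample data and the age bands are hypothetical
# assumptions, not any regulator's actual methodology.
from statistics import mean

def evaluate_age_model(predict_age, samples):
    """samples: iterable of (image, true_age) pairs;
    predict_age: callable mapping an image to an estimated age."""
    band_errors = {"0-12": [], "13-17": [], "18-24": [], "25+": []}
    all_errors = []
    for image, true_age in samples:
        err = abs(predict_age(image) - true_age)
        all_errors.append(err)
        if true_age <= 12:
            band_errors["0-12"].append(err)
        elif true_age <= 17:
            band_errors["13-17"].append(err)
        elif true_age <= 24:
            band_errors["18-24"].append(err)
        else:
            band_errors["25+"].append(err)
    report = {"overall_mae": mean(all_errors)}
    report.update({band: mean(errs) for band, errs in band_errors.items() if errs})
    return report

# Example with a deliberately naive "model" that always guesses 21:
samples = [(None, 10), (None, 15), (None, 20), (None, 30)]
print(evaluate_age_model(lambda image: 21, samples))
```

Reporting error per age band, rather than a single headline figure, matters for this kind of assessment because a model can look accurate on average while performing poorly for the age groups where misclassification causes the most harm.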
As algorithmic systems are highly varied, regulators may deploy different approaches and methodologies for assessing them, taking account of the details of the systems and the specific issues or potential harms under investigation. Different types of assessment, such as governance, empirical or technical audits, require different resources, and regulators will need to determine which type of audit or assessment is most feasible and proportionate.
A key advantage of our collaboration under the DRCF is that regulators are able to share insights on our respective legal and technical tools, and lessons and practical experience from undertaking investigations, reviews and/or audits of algorithmic and technical systems. This collaboration will continue, particularly in the light of the developments in DRCF regulators’ legislative responsibilities mentioned above.
The independent algorithmic auditing ecosystem
As more UK companies begin to use algorithmic systems, whether in finance, healthcare or online services, firms are looking for ways to assess their own systems, consistent with the Government’s work on the development of an effective AI assurance ecosystem. To meet this demand, new firms and services specialising in the audit and assessment of algorithmic systems have emerged.
DRCF regulators have an interest in how companies audit their AI systems for the following reasons:
- An effective and reliable independent algorithmic audit and assessment ecosystem (where third parties undertake the audit) should help companies better understand and manage the risks in their algorithmic systems, which in turn affects their ability to comply with relevant regulations.
- Some DRCF regulators may expect to rely on independent algorithmic audit and assurance providers to assist in our investigations (such as producing skilled persons reports) or to monitor compliance with remedies or commitments that firms have made about their systems (such as in the role of monitoring trustee for CMA cases).
Algorithmic audit and assessment is a relatively new field, and norms and standards are less developed than in more established fields like financial audit. Audit firms are developing services such as:
- Mitigating bias and improving fairness, such as making sure that the algorithmic system does not unfairly impact certain groups based on characteristics like ethnicity or gender (a minimal example of this kind of check is sketched after this list).
- Creating or improving transparency, to highlight the use, output, or outcomes of algorithmic systems to an external audience, such as the public.
- Providing explainability, to explain in more detail the use, output, or outcomes of algorithmic systems to internal stakeholders.
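As an illustration of the first item in the list above, a bias audit often starts with simple statistical comparisons of outcomes across groups. The sketch below computes per-group selection rates and a disparate impact ratio; the group labels, the sample data and the 0.8 "four-fifths" threshold are illustrative assumptions, not a standard endorsed by DRCF regulators.

```python
# Illustrative sketch only: one simple fairness check an auditor might run,
# comparing positive-outcome rates across groups (demographic parity).
# The group labels and the 0.8 threshold (the "four-fifths rule") are
# assumptions for illustration, not a regulatory standard.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Example: flag for review if the ratio falls below an illustrative 0.8 bar.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
ratio = disparate_impact_ratio(sample)
print(f"Disparate impact ratio: {ratio:.2f}", "- review" if ratio < 0.8 else "- ok")
```

A check like this is only a starting point: in practice an auditor would also examine how the groups and outcomes were defined, whether the underlying data is representative, and other fairness measures beyond demographic parity.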
DRCF member regulators have commenced research aimed at understanding which companies are currently providing these services, the nature of the services being provided and how clients are using them. We are also interested in how these firms market their services, and how they evidence the effectiveness, trustworthiness and credibility of what they offer. Our work will involve further engagement with firms in this emerging market on issues such as developing standards and norms.
Overall, our research and engagement could lead to insights for regulators on their own approach, or suggestions for how regulators may seek to work with external providers in the future. We look forward to engaging with companies in this developing sector – please look out for future opportunities to engage with the DRCF on this.