DRCF Generative AI Adopters Roundtable: High-level Findings

7 May 2024

In January 2024, the DRCF’s AI team held a virtual workshop attended by 21 firms who are deploying Generative AI (GenAI). This follows a discussion last year that examined how DRCF regulators could maximise the benefits of this technology.

The workshop explored a range of GenAI deployments; considered their opportunities and risks; and examined how policymakers could support AI innovation. Below we outline the most salient discussion points and set out the DRCF’s next steps on GenAI.

GenAI is being deployed across a wide range of sectors to power numerous applications

We heard about GenAI applications being deployed across HR, cybersecurity, social media, safety technology, telecoms, broadcast, and mental health services, among other domains. These applications were intended to serve several purposes, including:

  • Improving back-office functions, for example to save time by automating the creation of job descriptions.
  • Facilitating direct engagement with customers, for example by powering chatbots that can provide information on new deals or direct customers to relevant resources.
  • Improving safety and cybersecurity, for example using LLMs (Large Language Models) to classify and analyse malicious content and stress test clients’ systems against attacks.

We heard that deploying GenAI helped firms to remain competitive in the marketplace and supported wider business efficiencies, for example helping to streamline processes and save staff resource. A couple of firms told us that they had received promising feedback on their deployment of GenAI, with one company noting that its employees were impressed with their chatbots’ ability to recognise the intent of users.

While GenAI is already bringing demonstrable benefits for businesses and customers, firms are cautious about moving too quickly to deploy GenAI in contexts which they believe could pose risk

Firms were conscious that GenAI models are rapidly evolving, becoming increasingly powerful, and may pose risks if deployed without care. They said they deliberately identified lower-risk use cases for initial deployment, for example, for parts of their business that experienced limited interaction with customers. Many firms were also taking a phased approach to deployment, including by testing models in discrete parts of their business before rolling them out more widely.

Some firms have their eye on further GenAI use cases. One company from the broadcast sector indicated their interest in using GenAI to facilitate transcription and translation tasks during the production and editing process. Some of these use cases are in the early stage of research and development.

Many of the safety measures that firms have implemented for GenAI are the same as those needed to safeguard other types of AI systems.

Firms explained a range of steps that they take both pre- and post- deployment of GenAI, which included:

Pre-deployment

  • Due diligence to ensure that a product meets domestic and international regulatory requirements.
  • Adapting systems to ensure that the model is appropriate for the intended use case, for example, by fine-tuning.
  • Testing, for example red teaming by using prompts to try to get the model to produce harmful outputs.

Post deployment

  • Supportive partnerships with vendors to ensure continued assistance.
  • User-feedback processes to monitor uptake, use and complaints.

Many of the safety steps we heard about mirrored those discussed in the DRCF’s 2022 workshops on procuring AI systems. This suggests that many of the safeguards used to mitigate the risk associated with AI systems in general are also applicable for managing the potential harm posed by more novel GenAI models.

Firms expressed interest in developing models in-house but many lack the infrastructure, compute and data needed

We heard that firms sourced GenAI models and datasets from a range of vendors to avoid lock-in and ensure the best products for their clients. Firms purchased off-the-shelf GenAI models, accessed models via API or used open-source foundation models that they finetuned in-house. Currently, firms told us that they couldn’t develop their own foundation models in-house, given compute costs among other factors.

Many firms expressed an ambition to use, or make greater use of, open models in the future. They saw fine-tuning and deploying open models as being less computationally intensive than training a model from scratch, and as giving them greater control to adapt and test existing foundation models for their own purposes. Firms acknowledged potential risks posed by open models although noted that many risks remain in proprietary models.

Firms would like Government and regulators to clarify transparency obligations across the AI supply chain

Firms pointed to several regulatory regimes which could govern aspects of GenAI, including IP (Intellectual property) and copyright, data protection, online safety and equalities law.

Firms told us that the complexity of the GenAI supply chain could create additional challenges for compliance. They strongly supported the idea of transparency guidance for model developers and downstream adopters, including advice on how best to make available information on training data, risk assessment, and model performance.

They were also interested in how existing regulations would apply to new GenAI companies who have recently entered the market.

The DRCF’s AI team will be hosting an internal workshop this year, where we will explore transparency in more detail.

Some firms raised concerns about the wider implications of GenAI on society, and discussed how AI solutions play a role

Several participants raised concerns about the wider implications of GenAI on democracy and society, for example how it could facilitate bad actors' ability to create misinformation or fraudulent content.

We discussed potential solutions to address the creation of harmful AI-generated content. Some participants were hopeful about AI-content classifiers, which are a type of automated content moderation tool that considers whether content is or is not something, e.g., a hate speech classifier would identify whether text is hate speech or not. Others were wary of nascent solutions like watermarking, which they noted could be imperfect.

Firms called for policymakers to continue collaboration efforts to support regulatory coherence on AI

The US, UK and EU have taken different regulatory approaches to AI, which firms felt could lead to challenges – particularly for those operating across domestic and international markets.

Firms commended international commitments made at the UK’s AI Safety Summit and called for continued cooperation on AI definitions and AI standards to support innovation and assessment.

Firms also broadly supported the UK Government’s approach to AI regulation. They welcomed the Government’s commitment to exploring regulatory gaps, enabling AI innovation and exploring accountability for AI across the supply chain.

Generative AI remains a priority for DRCF regulators

The DRCF is engaging closely with the Government on AI and has just launched its AI and Digital Hub pilot, which will help innovators bring new AI products safely to market.

Individually, the four DRCF regulators have published AI statements in response to the government’s new AI regulation framework (Ofcom’s statement can be found here). Each regulator will also continue to undertake discrete GenAI-related research and investigations:

  • The ICO is clarifying how data protection applies to the development and use of GenAI and is developing a joint statement on foundation models with the CMA;
  • The CMA has published its update report on foundation models and will be continuing its work in this area;
  • Ofcom will publish discussion papers exploring measures to help address harmful AI-generated content (focusing initially on red teaming and deepfake detection);
  • The FCA and Bank of England jointly published their AI Feedback Statement based on findings from the Machine learning in UK financial services 2022 survey and will conduct a third joint ML survey this year.

If you would like to hear more from the DRCF, please contact DRCF@ofcom.org.uk.

Back to top