The Digital Regulation Cooperation Forum (DRCF) hosted its Responsible Generative AI Forum in March, featuring more than 30 expert speakers and 200 representatives from industry, civil society, academia, government and regulators.
Our goal was clear: to bring experts and decision-makers together to take stock of recent developments in Generative AI (GenAI), and to raise awareness of the latest research and industry best practices for responsibly maximising the benefits of this technology.
The day began with opening remarks from Sarah Cardell, DRCF Chair and CEO at the Competition and Markets Authority (CMA), who emphasised the need for a clear and proportionate regulatory environment to harness AI's benefits. This was followed by a thought-provoking presentation from Benedict Evans, a media and technology analyst, setting the stage for a series of in-depth panel discussions that explored topics from deepfakes to data protection, and from competition dynamics to the role of regulators in enabling responsible innovation.
Here are our six main takeaways from this day of dynamic discussion:
1. GenAI’s trajectory remains uncertain
Panellists highlighted that we are witnessing a period of significant technological transformation, with rapid developments in GenAI capabilities, but that the technology’s trajectory is still unknown. Benedict Evans posed several critical uncertainties: can the returns to scaling continue at their current rate? Will GenAI firms prevent their models from becoming interchangeable commodities? How will business models evolve? In the ‘Lifting the hood on GenAI’ panel, AI policy expert Verity Harding stated that GenAI will continue to attract attention given a perception at political levels of its importance for national sovereignty and defence capabilities. Angie Ma from Faculty and Kir Nuthi from the Startup Coalition both pointed to the UK’s supportive regulatory environment for innovation, but noted that firms need to consider what the market actually needs when developing innovative solutions. Regardless of where we land in this debate, there was a general recognition that GenAI is bound to be disruptive. Or, as one panellist put it, ‘There might be a bubble but that doesn’t mean AI will not create waves.’
2. GenAI’s impact is already being felt
Despite being relatively new, GenAI is already breaking ground, including in the creative sectors, in practical skills like coding, and in the way people search for information online, with an increasing number of people turning to chatbots for this purpose. Recent research commissioned by the DRCF found that 70 percent of UK adults claim to have used GenAI, with nearly a fifth of respondents (18 percent) falling into the category of ‘confident embracers’.
Our second panel, ‘Taking the Leap: Stories from the frontline of GenAI adoption’, provided real-world examples of how GenAI is being deployed in the public and private sectors to solve meaningful problems. Brhmie Balaram, Head of Responsible AI Adoption at the NHS AI Lab, discussed how AI-driven 'ambient scribes' in GP surgeries are automatically transcribing patient consultations, reducing paperwork burdens and allowing doctors to maintain full focus on their patients.
Jonathan Thurlwell from Ofgem highlighted in a separate session how energy companies are blending GenAI and traditional machine learning approaches to optimise infrastructure, streamline operations, and enhance customer experiences. Similarly, financial service providers are deploying GenAI to improve fraud detection while making customer service more responsive and personalised.
3. Legitimate concerns require collaborative solutions
Several speakers argued that the promise of GenAI should not distract us from the legitimate risks it poses. The technology has already been misused to create deepfake pornography, child sexual abuse material, and fraudulent content. The way these systems are built and used can also pose risks to people’s privacy and challenge their rights.
Some of these risks were considered at length in the afternoon breakout panels. Sophie Compton from My Image My Choice, a campaign tackling intimate image abuse, gave a powerful account of the enduring harm caused by non-consensual sexual deepfakes, with victims suffering long-term trauma and a deterioration in their personal relationships. Michael Veale, an Associate Professor in digital rights and regulation, walked through the challenges of assigning accountability in AI supply chains, emphasising the difficulty users of the technology face in determining responsibility when harms occur.
However, the forum demonstrated that our understanding of potential solutions is evolving quickly, with industry, civil society, and regulators all contributing to the development of practical safeguards.
4. Our understanding of the solution space is maturing rapidly
We now have a much clearer picture of best practices for responsible GenAI deployment than we did a year ago.
Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe, shed light on the Coalition for Content Provenance and Authenticity (C2PA) project. He explained how the C2PA content metadata scheme functions like a nutrition label for digital content, helping users and platforms assess authenticity by providing key information about the content’s origin, what tools were used to make it, and in some cases who created it.
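To make the ‘nutrition label’ analogy concrete, here is a minimal sketch of the kind of provenance information a C2PA-style manifest records. The field names and helper function below are invented for illustration only; real C2PA manifests follow the published specification and are cryptographically signed.

```python
# Illustrative sketch only: field names here are hypothetical, not the C2PA spec.
# Real manifests are structured and cryptographically signed (see c2pa.org).
manifest = {
    "claim_generator": "ExampleEditor/1.0",  # the tool that produced the claim
    "assertions": [
        {"label": "creator", "name": "A. Photographer"},
        {"label": "actions", "entries": [
            {"action": "created", "when": "2025-03-10T09:00:00Z"},
            {"action": "edited", "tool": "ExampleEditor generative fill"},
        ]},
    ],
    "signature": "<certificate-backed signature goes here>",
}

def provenance_summary(m: dict) -> str:
    """Render the human-readable 'nutrition label' a platform might show."""
    actions = next(a for a in m["assertions"] if a["label"] == "actions")
    history = ", ".join(e["action"] for e in actions["entries"])
    return f"Made with {m['claim_generator']}; history: {history}"

print(provenance_summary(manifest))
# Made with ExampleEditor/1.0; history: created, edited
```

The point of the scheme is that this kind of record travels with the content itself, so users and platforms can check where a piece of media came from and how it was made.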
In the ‘Accountability’ panel, Aidan Peppin, UK & EU Government Affairs and Public Policy Lead at Cohere, an enterprise-focused AI developer, talked about their new Secure AI framework, which focuses on ensuring models are deployed safely and securely.
We also learnt that conventional, ‘tech-agnostic’ solutions can still be useful. Jessica Rose Smith, Tech Policy Manager at Ofcom, argued that the methods platforms use for tackling harmful content generated by humans can also support us in addressing AI-generated deepfakes. One example is platforms establishing easily accessible user reporting tools. Another is identifying and blocking accounts that repeatedly post harmful content, be it real or synthetic.
5. DRCF regulators are moving at pace to stay ahead of this ever-evolving technology
Sarah Cardell highlighted how each DRCF member has invested significantly in technical capabilities. Ofcom now has almost 60 in-house AI specialists conducting cutting-edge technical research. The ICO, CMA and FCA have made similar investments in AI and data expertise.
Speakers from the four regulators also highlighted the state-of-the-art technical research they are undertaking on GenAI-related topics. Benedict Dellot from Ofcom discussed their forthcoming research on content authentication techniques, examining the relative merits of watermarking and labelling approaches. Sophia Ignatidou referenced the ICO's extensive consultation on Generative AI and their research into training methods that comply with data protection law.
This technical capacity allows regulators to engage meaningfully with rapidly evolving technologies and develop evidence-based interventions that protect consumers without stifling innovation.
6. Regulation is an enabler for AI opportunities
Forum participants emphasised that proportionate regulation is one of the UK’s most effective levers for promoting growth. Attendees shared enthusiasm for initiatives like the FCA's regulatory and digital sandboxes and the ICO’s innovation service. Many also supported cross-regulatory programmes like the DRCF's AI and Digital Hub, which provided joined-up advice from multiple regulators to those building innovative AI products.
In the final session of the day, senior directors from the four regulators discussed the role of the DRCF in enabling regulatory coherence and certainty. Examples of our close collaboration include the ICO and CMA’s joint article showing how competition and data protection law apply to the development and deployment of foundation models.
The directors also spoke about the benefits of skilled experts moving more freely between the public and private sectors – something that would enhance regulators’ understanding of the industries we regulate. The CMA is pioneering this approach by building out a strategic business analysis capability that includes a focus on the business models and commercial realities of AI development, deployment and monetisation.
Future Directions
As DRCF CEO Kate Jones remarked in closing the event, this forum represents just the beginning of deeper collaboration between regulators and the AI ecosystem. Attendees were keen to hear more about why we make the decisions that we do and our plans for the future.
Get Involved
- Read our 2025/26 work plan – This contains more information about our plans on GenAI, including work on agentic systems
- Review our research – The results from our GenAI industry adoption survey will be published shortly
- Join the conversation – Follow the DRCF’s work on LinkedIn and email drcf@ofcom.org.uk if you would like to receive the bi-monthly newsletter