The DRCF has published a new report, Consumer use and understanding of Generative AI, including in financial and debt advice, prepared by Thinks Insight & Strategy. The report examines consumer perceptions of, and responses to, the benefits and risks of generative AI in financial services, as well as consumer appetite for its future use, including how regulation and warnings affect trust and receptiveness.
The report found that consumers have some awareness of generative AI, but few have a deep understanding of how it works or how it differs from other forms of AI. Consumers are concerned about risks and tend to rely on “signifiers” as shortcuts for deciding how much to trust generative AI in different contexts. These signifiers include: human oversight; a well-known provider offering the tool; use cases that feel recognisable and routine (rather than novel); and not being asked for large volumes of personal data. Consumers tend to assume that regulation is in place when generative AI is used in financial services settings, and they expect organisations deploying generative AI tools to be accountable if things go wrong. Warnings and messaging can increase consumers’ sense of personal responsibility.
The report will inform the CMA’s Foundational Models Review and the FCA’s overall approach to the regulation of AI, and will also shape further joint DRCF research.