New findings from BoE and FCA survey on AI adoption in UK financial services
On 21 November 2024, the Bank of England (BoE) and the Financial Conduct Authority (FCA) published their 2024 survey on artificial intelligence (AI) adoption in the UK financial services sector.
The key takeaway from the survey is that 75% of firms are currently using AI, with a further 10% planning to do so within the next three years. This compares with 58% and 14% respectively in the 2022 survey, marking a sharp increase in adoption.
Further AI adoption trends
The survey also found that foundation models now account for 17% of AI use cases, illustrating their growing importance in standardising and scaling applications across the sector. Automated decision-making (ADM) features prominently in AI deployments, with 55% of use cases incorporating ADM. However, fully autonomous decision-making remains rare, at just 2%, indicating the sector's cautious approach and preference for maintaining human oversight in critical processes.
Risk landscape
Data-related risks dominate the current landscape, with concerns about data privacy, quality, security, and bias featuring among the top five risks. This reflects the sector’s heavy reliance on accurate and secure data to power AI systems. Emerging risks, such as dependence on third-party AI models and increasing complexity in AI applications, are expected to grow, raising questions about transparency and control. Cybersecurity remains the highest perceived systemic risk, and is expected to stay so over the next three years. However, critical third-party dependencies are expected to see the largest increase in perceived systemic risk, underscoring the need for stronger oversight of external AI providers.
Governance and accountability
The survey also found that firms’ understanding of the AI technologies they use remains a significant challenge, with 46% of respondents reporting only a partial grasp of these systems. The survey suggests this is particularly acute for firms relying on third-party solutions, where visibility into the AI supply chain is often limited. Despite this, 84% of firms reported having an accountable person for their AI frameworks, and over half have implemented nine or more governance components specific to AI use cases. Accountability is often shared among multiple stakeholders, with 72% of firms assigning responsibility to executive leadership. However, fragmented governance structures, with many firms involving three or more accountable persons or bodies, could dilute oversight and effectiveness.
Regulatory constraints, particularly around data protection, resilience, and cybersecurity, are perceived as significant barriers to AI adoption. Non-regulatory challenges, such as ensuring the safety and robustness of AI models and addressing talent shortages, further complicate firms’ ability to scale AI responsibly.
Balancing benefits and risks
Despite these challenges, firms remain optimistic about the potential of AI. The perceived benefits of AI, such as enhanced efficiency, innovation, and personalised customer experiences, are expected to grow by 21% over the next three years, compared to a 9% increase in perceived risks. This positive outlook reflects the sector's confidence in its ability to address the complexities of AI while leveraging its transformative potential.