DSIT Secretary of State Announces RTA AI Assurance Initiative: £6.5bn Market Growth Potential and New Public Consultation
This morning, the Department for Science, Innovation and Technology’s (DSIT) Secretary of State, the Rt Hon Peter Kyle, announced the publication of two new Responsible Technology Adoption Unit (RTA) products at the Financial Times Future of AI Summit.
This includes the launch of the 'Assuring a Responsible Future for AI' report, which assesses the current state of the UK AI assurance market, identifies opportunities for future growth, and sets out the targeted interventions government is taking to drive that growth.
As noted in the report, the UK's AI assurance market already employs more than 12,000 people and generates more than £1 billion in Gross Value Added (GVA), and could grow six-fold over the next decade to over £6.5 billion GVA if market barriers are addressed.
The UK currently has 524 firms supplying AI assurance services. This includes 84 specialised AI assurance companies, 225 AI developers, 182 diversified firms, and 33 in-house adopters. Most suppliers are concentrated in London (47-69%), with smaller hubs in the South East, Scotland, and the North West. Notably, the UK's AI assurance market is proportionally larger than those in the US, Germany, and France. techUK members active in this market include Advai and Holistic AI.
To realise this potential, DSIT intends to drive demand for AI assurance goods and services by developing an 'AI Governance Essentials toolkit'; to improve the quality of AI assurance supply by working with industry to develop a 'Roadmap to trusted third-party AI assurance'; and to support the international interoperability of the UK's AI assurance regime by developing a 'Terminology Tool for Responsible AI'.
However, the report notes that this assurance market faces several key challenges. On the demand side, there is limited understanding of AI risks among firms and the public, lack of awareness about AI assurance benefits, and uncertainty about regulatory requirements. Supply-side challenges include a lack of quality infrastructure to assess assurance tools, limited access to AI model information for third-party providers, and concentration of supply among AI developers rather than independent providers. The market also struggles with interoperability issues, including fragmented terminology across sectors and countries, lack of common understanding of AI assurance concepts, and different governance frameworks internationally.
To address these challenges, the UK government has outlined several key actions. To drive demand, they are creating an AI Assurance Platform as a one-stop-shop for information and developing an AI Essentials toolkit to help startups and SMEs engage with good practices. To increase supply, they are developing a "Roadmap to trusted third-party AI assurance," collaborating with the AI Safety Institute to advance research and development, and exploring capital investment and grant mechanisms. For improved interoperability, they are creating a Terminology Tool for Responsible AI to define key terms, working with US NIST and UK NPL to promote common understanding, and developing sector-specific guidance.
The report notes that these measures are part of the UK's broader AI governance framework, which includes plans to introduce binding requirements for companies developing the most powerful AI systems, while maintaining a proportionate, sector-specific regulatory approach.
The report emphasises that collective action across the AI assurance ecosystem is necessary to realise the market's potential. The UK government aims to position itself as a leader in AI assurance while ensuring AI is developed and deployed safely and responsibly. The report concludes by inviting organisations to collaborate on these initiatives by contacting [email protected].
Secondly, DSIT has launched a public consultation on the AI Management Essentials (AIME) Tool. AIME is a self-assessment tool that supports businesses in following responsible AI practices within their organisations, and is the first product for DSIT's AI assurance platform. DSIT notes it intends to make AIME a straightforward, easy-to-access tool that sets out in simple and clear terms what is required of businesses to ensure the safe and responsible development and use of AI systems.
The public consultation will be open for 12 weeks, after which DSIT will decide on next steps. This could include helping public sector organisations make better and more informed decisions on purchasing AI systems.
Sue Daley, techUK Director of Innovation, said:
DSIT's publication of the 'Assuring a Responsible Future for AI' report and the call for public consultation on the 'AI Management Essentials tool' (AIME) mark a crucial step in building an AI assurance industry in the UK.
With potential to grow to £6.53bn by 2035, the AI assurance market represents a significant opportunity for the UK to lead globally in responsible AI development and adoption.
We welcome DSIT's targeted approach to developing practical tools and frameworks that will help businesses of all sizes adopt robust AI assurance practices. We look forward to working with government and our members to support and refine these important initiatives as they move forward.
Next Steps
The Digital Ethics Working Group's final meeting of 2024 will take place on 12 November, where techUK members will discuss these releases and techUK's intention to submit a formal consultation response. Those interested in contributing to this response are encouraged to contact [email protected]. Please note that further conversation on Digital Ethics and AI Safety will continue at techUK's eighth annual Digital Ethics Summit on 4 December; you can register to attend here.
We would encourage you to contribute your thoughts to the public consultation on AIME and thank you for your continued engagement with RTA as we take forward our work to support and strengthen the UK’s AI assurance ecosystem.