UK Government puts forward plans to regulate AI

The UK Government sets out its approach to AI regulation in response to the AI White Paper consultation.

The Government has published its highly anticipated response to the AI White Paper consultation. It announces £10m in new funding to equip regulators to address the sector-specific risks and opportunities of AI, and a further £90m to launch nine AI research hubs across the UK, alongside initial thinking on how the Government may respond to highly capable, general-purpose AI models.

Although there are no immediate plans to introduce new laws, the Government intends to remain agile in its approach and has not ruled out legislating in the future. The response also offers details on the Government's own new structures for governing AI, such as the establishment of a Central Function to help coordinate regulators and the renaming of the Centre for Data Ethics and Innovation (CDEI) as the Responsible Technology Adoption Unit.

The response comes after the Government received over 400 submissions from across industry, regulators and civil society, including techUK’s own response. In the intervening period, the UK also hosted the inaugural Global AI Safety Summit, which reinforced the Government’s ambition to design a pro-innovation and pro-safety approach to AI.


Bolstering regulator capabilities

Following wide support in the AI White Paper consultation responses, the Government will press ahead with plans to establish a principles-based, context-specific regulatory framework that delegates responsibility to individual regulators, on a non-statutory basis, to address the risks and challenges of narrow, market-based uses of AI in their own sectors. The Government has already published initial guidance to regulators on how to apply the cross-sectoral principles within their respective remits. The five principles are:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

To deliver this, regulators will be supported by a £10m fund for new tools and research, as well as a new Central Function within Government to help coordinate activities, facilitate knowledge exchange, and horizon scan. This will be underpinned by the Digital Regulation Cooperation Forum’s (DRCF) AI and Digital Hub, which will help innovators navigate the new framework. The DRCF has published an update on the types of innovators it is seeking to support with its new service.

More than a dozen of the UK’s regulators will also be asked to publish their strategic approach to AI regulation by 30 April 2024, including a 12-month roadmap, an assessment of the AI-related risks and challenges in their sector, and a plan to address them.


Addressing advanced general-purpose AI systems

To address the many different types of AI system currently deployed, the Government has drawn a distinction between narrow, market-based uses of AI, which are subject to sector-specific regulation, and highly capable, general-purpose models, which are the least well understood and the most exposed to regulatory gaps, and which raise complex questions around liability and accountability.

Alongside existing voluntary actions, the Government will explore a range of targeted, binding measures to apply a higher standard of regulation to frontier AI, such as transparency measures, pre-deployment testing of open models, or additional compliance requirements. An update on new responsibilities for developers of highly capable, general-purpose AI systems will be published by the end of the year. If voluntary measures prove ineffective, legislation has not been ruled out.


Prioritising short-term risks and harms

The Government’s response identifies a range of short-term AI risks, including deepfakes and disinformation, the use of AI in public services and the workplace, intellectual property, and bias and discrimination, as well as the need to improve trust and safety in the technology.

In response to each, the Government has outlined a package of activity to address the risks, such as bolstering its capabilities to track AI-enabled crimes, reviewing guidance on AI in the workplace, considering technology to watermark electoral content, and engaging in dialogue across international forums. A call for evidence will also be launched on AI-related risks to trust in information and related issues such as deepfakes.

Intellectual property is one area where the path forward is less clear, after attempts to agree a voluntary code of practice proved unsuccessful. While the Government revisits this complex issue, it will also explore transparency mechanisms to give rights holders more clarity over when their content is used as an input to an AI model.


Next steps

Alongside outlining a clear list of milestones and deliverables, the Government has committed to developing and consulting on a Monitoring and Evaluation Framework for the activities set out in the document. As this ambitious programme of work is actioned, the Government must prioritise implementation at pace, while giving stakeholders further detail on specifics such as how and where funding will be allocated across regulators.

While the UK’s approach is domestically grounded, the Government will also need to consider how this framework will interact with an increasingly complex international regulatory landscape. There is a key role for the UK to play in facilitating interoperability and alignment between regimes as they develop across different jurisdictions.

As the Government moves toward implementation, techUK aims to ensure a diverse pool of industry players – including SMEs – are able to help shape and contribute to developments in this space.


Commenting on the Government’s response, techUK’s CEO says:

“techUK welcomes the Government’s commitment to the pro-innovation and pro-safety approach set out in the AI Whitepaper. We now need to move forward at speed, delivering the additional funding for regulators and getting the Central Function up and running. Our next steps must also include bringing a range of expertise into Government, identifying the gaps in our regulatory system and assessing the immediate risks.  

“If we achieve this, the Whitepaper is well placed to provide the regulatory clarity needed to support innovation, and the adoption of AI technologies, that promises such vast potential for the UK.”

The Government's full response can be found here.

Neil Ross

Associate Director, Policy, techUK