Government responds to call for views on the cyber security of artificial intelligence and publishes voluntary code of practice
Further to 2024’s Call for Views on the Cyber Security of Artificial Intelligence, the Department for Science, Innovation and Technology has announced steps, through a new voluntary Code of Practice (Code), to help companies developing AI better protect their systems and applications from growing cyber security threats.
The Code is intended to give businesses and public services the confidence to harness AI’s transformative potential safely by ensuring security is built in across the AI lifecycle, and it supports the government’s AI Adoption Opportunities Plan.
The Government’s response recognises the importance of developing internationally agreed and aligned security requirements, and it is taking the Code and accompanying implementation guide to the European Telecommunications Standards Institute (ETSI) to develop a new global standard* focused on baseline cyber security requirements for AI models and systems.
(*Government states that it will also update the content of the Code and implementation guide to mirror the future ETSI global standard and guide.)
What is the AI Cyber Security Code of Practice?
This new voluntary Code aims to equip organisations with “the tools they need to thrive in the age of AI”, and to help developers build secure, innovative AI products that drive growth and fuel Government’s Plan for Change. The Code sets out how organisations using AI can protect themselves from cyber threats by taking steps such as implementing cyber security training programmes focused on AI vulnerabilities, developing recovery plans following potential cyber incidents and carrying out robust risk assessments.
The scope of the Code is focused on AI systems, including generative AI, and it sets out cyber security requirements across the AI lifecycle – that is, secure design, secure development, secure deployment, secure maintenance and secure end of life – while signposting relevant standards and publications for each principle.
The Code defines five stakeholder groups that form the AI supply chain: Developers, System Operators, Data Custodians, End-users and Affected entities.
The Code is structured around 13 Principles:
- Raise awareness of AI security threats and risks
- Design your AI system for security as well as functionality and performance
- Evaluate the threats and manage the risks to your AI system
- Enable human responsibility for AI systems
- Identify, track and protect your assets
- Secure your infrastructure
- Secure your supply chain
- Document your data, models and prompts
- Conduct appropriate testing and evaluation
- Communication and processes associated with End-users and Affected entities
- Maintain regular security updates, patches and mitigations
- Monitor your system’s behaviour
- Ensure proper data and model disposal
To accompany the voluntary Code, the government has published an implementation guide to support businesses as they shore up their cyber defences, providing a one-stop shop that brings together guidance, key steps to follow and implementation examples.
You can read Government’s full response to the Call for Views here.
You can access the voluntary AI Cyber Security Code of Practice here and the accompanying Implementation Guide here.