16 Feb 2024
by Tess Buckley

Responsible Technology Adoption Unit (RTA) Launches Introduction to AI Assurance

In February 2024, DSIT released the Introduction to AI Assurance guide, a commitment made in the Government’s AI White Paper for spring 2024. The guide gives practitioners a clearer understanding of how AI assurance techniques can be used to ensure the responsible development of AI systems, and it supports the pro-innovation approach set out in the UK’s AI White Paper. It introduces key AI assurance concepts and terms and situates them within the wider AI governance ecosystem. It also provides information on assurance mechanisms and technical standards, and will be regularly updated to reflect stakeholder feedback, emerging best practice and a changing regulatory environment. The following insight provides an overview of the key areas the guidance covers, including how the RTA defines assurance and how AI assurance relates to governance and regulation. 

 

Definition of Assurance  

The guide explains that assurance is a form of accountancy: the process of measuring, evaluating and communicating something about a system, process, documentation, product or organisation. Assurance processes provide accessible evidence about these systems' capabilities, intentions, limitations and risks, and examine how those risks arise and are addressed throughout the AI lifecycle. 

The guide says that assurance requires robust techniques that organisations can use to measure and evaluate systems, and then communicate their systems’ trustworthiness and alignment with relevant regulatory principles. The RTA sets out three steps to assure AI (a minimal code sketch follows the list): 

  1. Measure – gather qualitative and quantitative data on how an AI system functions to ensure it performs as intended (performance, functionality and potential impact, drawing on documentation of system design and management processes)  

  2. Evaluate – assess the risks and impacts of the system to inform further decision-making (evaluating against benchmarks, standards and guidelines for use) 

  3. Communicate – report findings effectively, internally and externally (written reports, dashboards, publicly disclosing the assurance processes the company has undertaken, or pursuing certification)  
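
To make the three steps concrete, the sketch below walks a toy classifier’s outputs through measure, evaluate and communicate. It is a minimal illustration only: the function names, the accuracy metric and the 0.9 benchmark are hypothetical examples, not taken from the RTA guide.

# A minimal, illustrative sketch of the measure -> evaluate -> communicate
# loop described above. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AssuranceReport:
    metrics: dict    # measured values about the system
    findings: dict   # pass/fail per metric against agreed benchmarks

def measure(y_true, y_pred):
    """Measure: gather quantitative data on how the system performs."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return {"accuracy": correct / len(y_true)}

def evaluate(metrics, thresholds):
    """Evaluate: assess measurements against benchmarks or standards."""
    return {name: metrics[name] >= limit for name, limit in thresholds.items()}

def communicate(metrics, findings):
    """Communicate: report findings internally or externally (in practice,
    via reports, dashboards, public disclosure or certification)."""
    for name, passed in findings.items():
        print(f"{name}: {metrics[name]:.2f} -> {'PASS' if passed else 'FAIL'}")
    return AssuranceReport(metrics, findings)

# Toy run: outputs of a hypothetical classifier against ground truth.
metrics = measure([1, 0, 1, 1], [1, 0, 0, 1])
findings = evaluate(metrics, {"accuracy": 0.9})
report = communicate(metrics, findings)   # prints: accuracy: 0.75 -> FAIL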

In addition to defining assurance, the guidance details how AI assurance can help operationalise relevant regulatory principles, including the five principles that underpin the pro-innovation framework in the UK’s White Paper: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The guide explains that these principles can be operationalised through agreed-upon processes, metrics and frameworks, such as standards developed by SDOs including ISO, IEC, IEEE and ETSI; sector-specific rules and/or guidance developed through the RTA; and assurance mechanisms such as audits or performance testing. 

 

AI Assurance Mechanisms and Scope 

The guide also details a sample of key assurance techniques (qualitative and quantitative assessments) for organisations to consider as part of the development and deployment of systems (a simplified bias-audit sketch follows the lists below). These include: 

  • Risk Assessment 

  • (Algorithmic) Impact Assessment 

  • Compliance Audit 

  • Conformity Assessment  

  • Bias Audit  

  • Formal Verification  

Within the scope of these AI assurance techniques are: 

  • Training data 

  • AI models  

  • AI systems 

  • Broader operational context   
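
As a concrete illustration of one technique in the list above, a bias audit typically compares outcomes across groups. The sketch below computes a demographic parity gap (the difference in favourable-outcome rates between groups) over a system’s outputs. It is a simplified, hypothetical example, not a method prescribed by the guide; a real bias audit would use agreed metrics, standards and far richer data.

# Simplified bias-audit sketch: demographic parity gap, i.e. the difference
# in positive-outcome rates between groups. Illustrative only.
from collections import defaultdict

def demographic_parity_gap(groups, outcomes):
    """Return the max gap in positive-outcome rate across groups, plus rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in zip(groups, outcomes):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: 1 = favourable decision. A gap near 0 suggests parity.
gap, rates = demographic_parity_gap(
    groups=["a", "a", "b", "b", "b"],
    outcomes=[1, 1, 1, 0, 0],
)
print(rates)                      # {'a': 1.0, 'b': 0.333...}
print(f"parity gap: {gap:.2f}")   # 0.67 -> flag for further review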

 

The AI Assurance Ecosystem: Key Stakeholders  

The guide also describes the range of actors that play a key role within the assurance ecosystem. These actors include Regulators, Accreditation bodies, Government, Standards bodies, Research bodies, Civil society organisations, and Professional bodies.  

 

AI Assurance’s Relationship to Governance and Regulation 

The RTA’s Introduction to AI Assurance explains that delivering the UK’s outcome-based approach to AI regulation will require existing regulators to take responsibility for interpreting and implementing the regulatory principles in their respective sectors, and for establishing clear guidelines on how to achieve these outcomes within each sector.  

AI assurance will be useful for regulators in the following ways: 

  • It provides processes for making and assessing verifiable claims, against which organisations can be held accountable for outcomes 

  • It supports organisations in measuring whether their systems are trustworthy, so that they can demonstrate this to government, regulators and the market 

  • Assurance approaches, tools, systems and technical standards that ensure international interoperability between differing regulatory regimes (demonstrating risk management in ways understood in other jurisdictions) will support cross-border trade 

 

AI Assurance’s Relationship to Standards  

The guide says that global, consensus-based technical standards underpin AI assurance. Standards are agreed ways of doing things that establish shared and reliable expectations about a product’s status. They are published by standards development organisations (SDOs) such as the International Organization for Standardization (ISO). The types of standards that support assurance techniques include: 

  • Foundational and terminological 

  • Interface and architecture 

  • Measurement and test methods 

  • Process, management and governance 

  • Product and performance requirements  

 

So, What Now?  

The guide closes with key suggested actions that organisations can take forward to help build AI assurance: 

  1. Consider existing regulations that are relevant for AI systems (e.g. GDPR, the Equality Act 2010 and industry-specific regulations) 

  2. Upskill within your organisation (e.g. The Alan Turing Institute has produced training workbooks and the UK AI Standards Hub has a training platform on AI assurance)  

  3. Review internal governance and risk management (e.g. the NIST AI Risk Management Framework)  

  4. Look out for new regulatory guidance that is sector-specific (e.g. ICO guidance on AI and data protection)  

  5. Consider involvement in AI standardisation (e.g. engage with BSI) 

 

Spotlighted Resources 

The guide also spotlights existing resources that can help organisations take the next steps in operationalising AI ethics through AI assurance. 

 

If you have found this summary of the RTA’s AI Assurance guide useful and would like to find out more about techUK’s work on digital ethics and AI assurance, and how to get involved alongside members through the Digital Ethics Working Group, please contact: [email protected] 

 

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where her work centred on the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks in their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics and tech governance, and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace. 

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Her primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical. 

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music. 

Email: [email protected]
Website: tessbuckley.me
LinkedIn: https://www.linkedin.com/in/tesssbuckley/
