13 Jun 2024

Ensuring Responsible AI Development: Let’s talk about AI Assurance 

As emerging technologies like artificial intelligence (AI) become increasingly integrated into various aspects of our lives, ensuring their responsible development and deployment is paramount. AI assurance plays a critical role in this process, ensuring that these technologies are reliable, safe, and aligned with ethical principles. 

In February 2024, DSIT released the Introduction to AI Assurance guide, delivering a commitment made in the Government’s AI White Paper for Spring 2024. The guide gives practitioners a better understanding of how AI assurance techniques can be used to ensure the responsible development of AI systems, and supports the pro-innovation approach set out in the UK’s AI White Paper. It introduces key AI assurance concepts and terms and situates them within the wider AI governance ecosystem. It also provides information on assurance mechanisms and technical standards, which will be regularly updated to reflect stakeholder feedback, emerging best practice and a changing regulatory environment. You can read techUK’s summary here. 

Definition of Assurance 

The DSIT guide explains that assurance, a term with roots in accountancy, is the process of measuring, evaluating and communicating something about a system, process, documentation, product or organisation. Assurance processes provide accessible evidence about a system’s capabilities, intentions, limitations and risks, and examine how those risks are addressed throughout the AI lifecycle. 

The DSIT guide says that assurance requires robust techniques that organisations can use to measure and evaluate systems and then communicate their system’s trustworthiness and alignment with relevant regulatory principles. The Responsible Technology Adoption Unit (RTA) further explains these as three steps to assure AI, illustrated in a short sketch after the list: 

  1. Measure - Gather qualitative and quantitative data on how an AI system functions to ensure that it performs as intended (performance, functionality and potential impact, evidenced in documentation about system design and management processes) 

  2. Evaluate - Assess the risks and impacts of the system to inform further decision-making (evaluate against benchmarks, standards and guidelines of use) 

  3. Communicate - Effectively communicate findings internally and externally (reports, presenting in a dashboard, publicly disclosing the assurance processes the company has undertaken, considering certification) 
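
To make these steps concrete, below is a minimal Python sketch of what an internal Measure-Evaluate-Communicate workflow might look like. The metric names, the 0.8 accuracy and 0.2 selection-rate-gap benchmarks, and the report format are illustrative assumptions, not recommendations from the DSIT guide.

# A minimal, hypothetical sketch of the Measure -> Evaluate -> Communicate loop.
# Metric names, benchmark thresholds and the report format are illustrative
# assumptions, not requirements taken from the DSIT guide.
from dataclasses import dataclass

@dataclass
class AssuranceReport:
    metrics: dict   # what was measured
    passed: dict    # result of evaluating each metric against its benchmark
    summary: str    # plain-language communication of the findings

def measure(y_true, y_pred, groups):
    """Step 1 (Measure): gather quantitative evidence on system behaviour."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    # Positive-decision rates per group, a common input to fairness metrics.
    rate_a = sum(p for p, g in zip(y_pred, groups) if g == "A") / groups.count("A")
    rate_b = sum(p for p, g in zip(y_pred, groups) if g == "B") / groups.count("B")
    return {"accuracy": accuracy, "selection_rate_gap": abs(rate_a - rate_b)}

def evaluate(metrics, benchmarks):
    """Step 2 (Evaluate): assess measurements against agreed benchmarks."""
    return {
        name: metrics[name] <= limit if name.endswith("gap") else metrics[name] >= limit
        for name, limit in benchmarks.items()
    }

def communicate(metrics, passed):
    """Step 3 (Communicate): report findings in a form others can act on."""
    lines = [f"{k}: {metrics[k]:.2f} ({'PASS' if passed[k] else 'FAIL'})" for k in metrics]
    return AssuranceReport(metrics, passed, "\n".join(lines))

# Toy usage: six binary screening decisions across two demographic groups.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B"]
benchmarks = {"accuracy": 0.8, "selection_rate_gap": 0.2}

metrics = measure(y_true, y_pred, groups)
print(communicate(metrics, evaluate(metrics, benchmarks)).summary)

In a real deployment each step would draw on much richer evidence, including the system design and management documentation the guide points to.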

In addition to providing a definition of assurance, the guidance gives further detail on how AI assurance will play a role in operationalising relevant regulatory principles, including the five principles which underpin the pro-innovation framework in the UK’s White Paper: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The guide explains that regulatory principles can be operationalised by providing agreed-upon processes, metrics and frameworks, such as standards developed by SDOs including ISO, IEC, IEEE and ETSI; sector-specific rules and/or guidance developed through the RTA; and assurance mechanisms such as audits or performance testing. 
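
As a small illustration of how an agreed-upon metric and performance testing can operationalise a principle such as fairness, the hedged sketch below frames a fairness check as an automated release gate. The demographic parity measure and the 0.1 threshold are hypothetical choices for illustration, not values drawn from any ISO, IEC, IEEE or ETSI standard.

# A hypothetical performance test that treats an agreed-upon fairness metric
# as a release gate; the threshold and data are illustrative assumptions only.
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates across groups."""
    rates = []
    for g in set(groups):
        member_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates.append(sum(member_decisions) / len(member_decisions))
    return max(rates) - min(rates)

def test_screening_fairness():
    # Stand-in outputs from a hypothetical candidate-screening model.
    decisions = [1, 0, 1, 0, 1, 0, 0, 1]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_gap(decisions, groups)
    assert gap <= 0.1, f"fairness gate failed (gap={gap:.2f}): investigate before deployment"

if __name__ == "__main__":
    test_screening_fairness()
    print("fairness gate passed")

Run as part of a test suite or deployment pipeline, a check like this turns a regulatory principle into a repeatable, auditable pass/fail signal.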

Sector-specific Guidance: AI in Recruitment 

In addition to general AI assurance practices, sector-specific guidance is also gaining importance. For instance, in human resources (HR) systems, where AI is used for tasks like candidate screening or performance evaluation, sector-specific guidance provides tailored frameworks to address unique challenges and ethical considerations. By incorporating such guidance, organisations can not only mitigate risks but also foster trust and transparency in their AI-driven HR processes, ultimately promoting fairness and equity in the workplace. 

You can read more about the RTA’s sector-specific guidance for AI in recruitment here; the White Paper promises guidance for further sectors this year. The RTA guidance offers organisations seeking to procure and adopt AI recruitment tools a toolbox of considerations and assurance mechanisms through which to evaluate the implications of particular applications and mitigate potential risks. By signposting key interventions at every stage of the process, it could prove helpful to businesses considering adoption that lack the resources, capacity or expertise to navigate the complexities of these opportunities and risks. 

However, as the RTA guidance notes, there is no one-size-fits-all approach to AI assurance, and some organisations may only have the resources to focus on the areas of highest risk. This recognition is welcome, though more insight could be given to businesses on where those areas of highest risk may reside. The rollout of, and business engagement around, this guidance will be key. 

techUK will continue to engage with the Responsible Technology Adoption Unit in this area and welcomes members’ feedback on the guidance. If you found this insight interesting and want to learn more about AI assurance and techUK’s Digital Ethics programme, please contact [email protected].

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, a role that revolved around the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks in their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics and tech governance and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace. 

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Her primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical. 

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music. 

Email: [email protected]
Website: tessbuckley.me
LinkedIn: https://www.linkedin.com/in/tesssbuckley/
