Ensuring Responsible AI Development: Let’s talk about AI Assurance
As emerging technologies like artificial intelligence (AI) become increasingly integrated into various aspects of our lives, ensuring their responsible development and deployment is paramount. AI assurance plays a critical role in this process, helping to ensure that these technologies are reliable, safe and aligned with ethical principles.
In February 2024, DSIT released its Introduction to AI Assurance guide, fulfilling a commitment made in the Government’s AI White Paper to publish it by Spring 2024. The guide gives practitioners a better understanding of how AI assurance techniques can be used to ensure the responsible development of AI systems, and supports the pro-innovation approach set out in the UK’s AI White Paper. It introduces key AI assurance concepts and terms and situates them within the wider AI governance ecosystem. It also provides information on assurance mechanisms and technical standards, and will be regularly updated to reflect stakeholder feedback, emerging best practice and a changing regulatory environment. You can read techUK’s summary here.
Definition of Assurance
The DSIT guide explains that assurance, a term borrowed from accountancy, is the process of measuring, evaluating and communicating something about a system, process, documentation, product or organisation. Assurance processes provide accessible evidence about a system’s capabilities, limitations and risks, and examine how those risks are being managed throughout the AI lifecycle.
The DSIT guide says that assurance requires robust techniques that organisations can use to measure and evaluate systems and then communicate their system’s trustworthiness and alignment with relevant regulatory principles. The Responsible Technology Adoption Unit (RTA) further explains these three steps to assure AI (a minimal code sketch follows the list):
- Measure - gather qualitative and quantitative data on how an AI system functions to ensure that it performs as intended (performance, functionality and potential impact, as seen in documents about system design and management processes)
- Evaluate - assess the risks and impacts of the system and use the results to inform further decision-making (evaluate against benchmarks, standards and guidelines of use)
- Communicate - effectively communicate findings internally and externally (reports, presenting in a dashboard, publicly disclosing the assurance processes the company has undertaken, considering certification)
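To make these steps concrete, the sketch below shows a minimal measure → evaluate → communicate loop in Python. It is illustrative only: the stand-in model, the test cases and the accuracy benchmark are assumptions for this example, not mechanisms prescribed by the DSIT guide.

```python
import json
import statistics

# Assumed threshold; in practice this would come from an agreed
# standard or sector-specific guidance, not a hard-coded constant.
ACCURACY_BENCHMARK = 0.90

def measure(model, test_cases):
    """Measure: gather quantitative data on how the system performs."""
    results = [model(case["input"]) == case["expected"] for case in test_cases]
    return {"accuracy": statistics.mean(results), "n_cases": len(results)}

def evaluate(metrics):
    """Evaluate: assess the measurements against the agreed benchmark."""
    return {"meets_benchmark": metrics["accuracy"] >= ACCURACY_BENCHMARK,
            "benchmark": ACCURACY_BENCHMARK, **metrics}

def communicate(evaluation, path="assurance_report.json"):
    """Communicate: record the findings for internal or external disclosure."""
    with open(path, "w") as f:
        json.dump(evaluation, f, indent=2)

# Example usage with a trivial stand-in "model" (uppercases its input):
model = lambda text: text.upper()
cases = [{"input": "ok", "expected": "OK"},
         {"input": "no", "expected": "No"}]
communicate(evaluate(measure(model, cases)))  # accuracy 0.5, below benchmark
```

In a real assurance exercise the metrics, benchmarks and disclosure format would be drawn from agreed standards or sector guidance rather than defined ad hoc as they are here.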
In addition to defining assurance, the guidance details how AI assurance will play a role in operationalising relevant regulatory principles, including the five principles which underpin the pro-innovation framework in the UK’s White Paper: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The guide explains that these regulatory principles can be operationalised through agreed-upon processes, metrics and frameworks, such as SDO-developed standards from ISO, IEC, IEEE and ETSI, sector-specific rules and/or guidance developed through the RTA, and assurance mechanisms such as audits or performance testing.
Sector-specific Guidance: AI in recruiting
In addition to general AI assurance practices, sector-specific guidance is also gaining importance. For instance, in human resources (HR) systems, where AI is used for tasks like candidate screening or performance evaluation, sector-specific guidance provides tailored frameworks to address unique challenges and ethical considerations. By incorporating such guidance, organisations can not only mitigate risks but also foster trust and transparency in their AI-driven HR processes, ultimately promoting fairness and equity in the workplace.
You can read more about the RTA’s sector-specific guidance for AI in recruitment here; the White Paper promises to deliver guidance for further sectors this year. The RTA guidance offers organisations seeking to procure and adopt AI recruitment tools a toolbox of considerations and assurance mechanisms through which to evaluate the implications of particular applications and mitigate potential risks. By signposting key interventions at every stage of the process, it could prove helpful to businesses considering adoption that lack the resources, capacity or expertise to navigate the complexities of the opportunities and risks.
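To illustrate the kind of assurance mechanism such a toolbox might point to, the sketch below applies the widely used four-fifths rule heuristic to the selection rates of a hypothetical screening tool. The rule, the data and all names in the code are illustrative assumptions rather than requirements taken from the RTA guidance.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs from a screening tool."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest rate."""
    best = max(rates.values())
    return {group: (rate / best) >= threshold for group, rate in rates.items()}

# Hypothetical outcome data: (group, was the candidate shortlisted?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)                     # {'A': 0.67, 'B': 0.33} (approx.)
print(four_fifths_check(rates))  # {'A': True, 'B': False} -- group B flagged
```

A real evaluation would use properly collected outcome data and treat a failed check as a prompt for further investigation, not an automatic verdict on the tool.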
However, as the RTA guidance notes, there is no one-size-fits-all approach to AI assurance, and some organisations may only have the resources to focus on the areas of highest risk. This is welcome recognition, though businesses could be given more insight into where these areas of highest risk may reside. The rollout of, and business engagement around, this guidance will be key.
techUK will continue to engage with the Responsible Technology Adoption Unit in this area and welcomes members’ feedback on the guidance. If you found this insight interesting and want to learn more about AI assurance and techUK’s Digital Ethics programme, please contact [email protected].
Tess Buckley
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.