07 Oct 2024
by Tess Buckley

Responsible Technology Adoption Unit Expands Portfolio of AI Assurance Techniques 

On 26 September, the Responsible Technology Adoption Unit (RTA) expanded its Portfolio of AI Assurance Techniques with new use cases. This portfolio, developed by the RTA (a directorate within DSIT) in initial collaboration with techUK, serves as a valuable resource for individuals and organisations involved in the design, development, deployment, or procurement of AI-enabled systems. 

The portfolio showcases real-world examples of AI assurance techniques, supporting the development of trustworthy AI. These additions provide valuable resources for organisations, offering practical insights into assurance mechanisms and standards in action. 

 

techUK members Anekanta and Kainos shared best practice in this update of the Portfolio of AI Assurance Techniques:  

  1. Anekanta AI: Facial Recognition Risk Assessment System 

This case study describes Anekanta® AI's Facial Recognition Privacy Impact Risk Assessment System™, a tool designed to address the ethical and legal challenges of facial recognition technology. The system helps organisations identify and mitigate risks arising from the use of facial recognition, ensuring compliance with relevant laws and regulations while promoting responsible and ethical use. Anekanta® specialises in de-risking high-risk AI and contributes to global best practice and standards, including input to the BS 9347 British Standard. The system is grounded in recognised regulations, principles, and standards for AI technology, including the EU AI Act, and accounts for specific regional, national, and local requirements. Drawing on a proprietary regulation database, it produces an independent pre-mitigation report with tailored recommendations for compliance and risk minimisation. The report covers potential risk levels, applicable legislation, EU AI Act requirements, recommended mitigations, and residual risks requiring ongoing management, helping organisations navigate the complex landscape of facial recognition deployment. 

 

  2. Kainos: Kainos and Dstl partner to implement AI ethical principles in Defence 

This case study outlines Kainos' collaboration with the Defence Science and Technology Laboratory (Dstl) on the Defence AI Centre (DAIC) programme, focusing on implementing the UK Ministry of Defence's AI ethics principles in defence-related AI products and services. The approach centred on ethics and harms workshops, inspired by Microsoft's Harms Modelling, which brought together a diverse team of experts to identify the potential benefits, harms, and mitigations of AI systems. These workshops, structured around the MoD's AI ethical principles, were integrated into the agile delivery cycle from the start and revisited throughout the project, ensuring an ethics-by-design approach. The process formed part of a broader framework addressing safety, legal considerations, and testing, highlighting the critical importance of ethical implementation in AI development, particularly in sensitive areas such as defence. 

 

We welcome these enhancements to the Portfolio, as they offer concrete examples of ethical principles in practice and guidance for ensuring responsible AI implementation across various sectors. 

If you are interested in learning more about digital ethics, join us at the eighth annual Digital Ethics Summit by registering here.


Authors

Tess Buckley

Programme Manager, Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where her work centred on the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model for answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors seeking to better understand the digital risks in their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, the Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics including CDR, AI ethics, and tech governance. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs, and an Ambassador for AboutFace. 

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Her primary research interests include AI literacy, AI music systems, the impact of AI on disability rights, and the portrayal of AI in media narratives. In particular, Tess seeks to operationalise AI ethics, using philosophical principles to make emerging technologies explainable and ethical. 

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.

Email:
[email protected]
