Event Round-Up: Launch of the Department for Science, Innovation and Technology's Portfolio of AI Assurance Techniques
techUK, in collaboration with the Centre for Data Ethics and Innovation (CDEI), recently organised a highly anticipated event to launch the Department for Science, Innovation and Technology's Portfolio of AI Assurance Techniques. Bringing together key stakeholders from government, industry, and civil society, the event explored the crucial role of AI assurance in the UK's AI governance approach.
Speakers included:
- Susannah Storey, Director General for Digital Technologies and Telecoms at the Department for Science, Innovation and Technology (DSIT)
- Felicity Burch, Executive Director of the Centre for Data Ethics and Innovation (CDEI)
- Dr Florian Ostmann, Head of AI Governance and Regulatory Innovation at the Alan Turing Institute
- Luis Aranda, Artificial Intelligence Policy Analyst, OECD
- Lisa Allen, Director of Data & Technical Services, ODI
You can watch the recording here. Please note that the following is a summary of the event; readers are encouraged to watch the webinar for the full details of the discussion.
Felicity Burch, Executive Director of the CDEI, delivered an enlightening opening speech, highlighting the collaborative efforts of the CDEI and techUK on AI and technology regulation. She emphasised the UK government's vision of becoming the most innovative economy in the world, and its recognition as a leader in agile, pro-innovation regulation.
The event focused on the five key principles outlined in the White Paper on AI regulation: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles form the foundation for trustworthy AI systems, and assurance techniques such as impact assessment, compliance, audit, and future certification were discussed as crucial tools for evaluating the trustworthiness of AI systems.
Technical standards were identified as essential components in assessing AI systems and mitigating risks. These standards provide methods and metrics for effective evaluation and promote responsible AI practices. The event emphasized the importance of an AI assurance ecosystem, involving collaboration between government, industry, and other stakeholders to support a pro-innovation regulatory approach and foster public confidence.
The highlight of the event was the unveiling of the portfolio of AI assurance techniques. This comprehensive portfolio showcased real-world examples and case studies from diverse sectors, mapped to the principles outlined in the UK White Paper on AI regulation. Attendees appreciated the user-friendly nature of the portfolio, which allows easy filtering and searching based on the AI regulatory principles. The portfolio's recognition by the OECD AI Policy Observatory further underscored its potential to support international interoperability.
A stimulating panel discussion and Q&A session followed, focusing on the collaborative efforts required to establish an effective and trusted AI assurance ecosystem aligned with the UK's governance approach. Key topics included the importance of international cooperation and the risk of fragmentation between jurisdictions and sectors as AI scales. Participants were encouraged to navigate the complex landscape of responsible AI practices by assessing their teams' skills, building a thorough understanding of the evolving nature of AI, and making use of available resources such as academic papers, assurance services, and the portfolio of AI assurance techniques.
The event highlighted the role of recognised organisations, including ISO, in developing standards for various areas of AI, such as transparency and explainability. The AI Standards Hub, featuring over 300 relevant standards, was praised as a valuable resource. Ongoing research to map these standards to the UK's White Paper on AI regulation further facilitates stakeholder engagement.
Discussions on data assurance and open standards emphasized their crucial role in ensuring the trustworthiness of AI systems. The efforts of the Open Data Institute in defining processes for data assurance and establishing open standards and codes of conduct were acknowledged as essential components of a holistic AI assurance approach.
The government's role in fostering collaboration between regulators and industry while striking a balance between regulation and innovation was discussed. Interoperability, standards, and assurance techniques were highlighted as enablers through shared terminology, data guidelines, and addressing issues like bias, fairness, and transparency. The importance of mapping and comparing standards across jurisdictions for international operations was emphasized, with the OECD's mapping exercise providing guidelines for responsible business conduct in AI.
The event underscored the importance of top-down adoption of standards and compliance, with incentives from top-level management driving industry adoption. Standards were recognized as a means to provide a competitive advantage and build trust in companies. It was acknowledged that standards need to evolve to address emerging challenges and keep pace with technological advancements, particularly in generative AI.
Certification in AI assurance was a significant point of discussion during the event. The need for certifying AI assurers and ensuring their competence was emphasized, drawing parallels with other professions such as accounting. Certification was seen as particularly important in the education sector, where it can elevate standards and incorporate AI ethics and skills into the curriculum. The challenges and potential burdens associated with certification, including business acceptance and industry collaboration, were also considered.
Metrics emerged as a crucial aspect in measuring AI system performance and meeting requirements. The event emphasized the importance of developing more metrics and enhancing the understanding of existing ones. By effectively measuring AI system performance, organizations can ensure they are meeting the necessary standards and building trust in their AI applications.
The event concluded with gratitude expressed to the panelists and the CDEI for their valuable contributions. Participants were encouraged to explore the AI assurance case studies presented during the event, with additional resources available on the government website. The CDEI will also produce further iterations of the portfolio, with the potential to expand globally, which we would encourage you to check here.
Overall, the launch of the Department for Science, Innovation and Technology's Portfolio of AI Assurance Techniques marked a significant milestone in promoting responsible AI practices and building public confidence. It fostered collaboration among government, industry, and civil society, highlighting the importance of assurance techniques, technical standards, and international cooperation. As the field of AI continues to evolve, events like these help establish a virtuous cycle, supporting the adoption of trustworthy AI practices and the UK's ambition to become the world's leading pro-innovation economy and boost growth.
For more information, please contact: