19 Apr 2023
by Tim McGarr

Generative AI and Standards: an overview (Guest blog by BSI)

Guest blog by Tim McGarr, AI Development lead at BSI #AIWeek2023

For generative artificial intelligence to reach its full potential, users need to trust all layers of the technology in play. Standards play a key role in fostering coherence, operationalization and consensus around trustworthy AI, and a wide range of both existing and emerging standardization efforts aim to address questions in this area.

Working with our partners, the Alan Turing Institute and the National Physical Laboratory, through the AI Standards Hub, BSI has explored the themes of safety, security, and resilience in relation to AI, and the role standards play in ensuring that the AI systems developed are ultimately trustworthy.

Key developments in AI and Standardization

Through our research, interviews, and workshops, we explored the current understanding of safe, secure, and resilient AI with a variety of stakeholders, including SMEs, corporates and academics. Our results showed a complex landscape, including differing challenges around the definitions of these terms, areas where existing standards are already playing a role, and potential challenges and solutions to improving understanding of and access to standards.   

Key to deploying trustworthy AI is managing risk. Here, two standards can offer strategic guidance to organizations on integrating risk management in relation to AI.  ISO/IEC 23894 offers concrete examples of effective risk management implementation and integration throughout the AI development lifecycle. Similarly, ISO/IEC 38507 provides guidance on governance to an organization using, or considering using, AI, and the wider implications of doing so.  

Additionally, ISO/IEC 42001 is due to be published later in 2023. It will be what is known as a ‘Management System’ standard, developed specifically for AI. A Management System sets out the processes an organization needs to follow to meet its objectives and provides a framework of good practice. This will ensure that trustworthy AI is built into the leadership and governance of the organization.

Worldwide approaches to AI and standardization

Our review of the approaches taken by other National Standards Bodies (NSBs) around the world revealed some differences in approach to AI and standardization, as well as common themes and principles such as the need for transparency and explainability within AI development.

Within the EU, the EU AI Act is expected to be fully integrated within the policy and regulatory frameworks of all 27 member states, with Spain being the first to pilot a regulatory sandbox approach to ensure compliance with existing regulation.

The UK has also recently published its pro-innovation approach to AI regulation, outlining how it intends to regulate AI and promote innovation, as well as establishing its own £2 million regulatory sandbox to ensure compliance. The UK approach differs from the Spanish pilot in its regulatory oversight, with Spain relying on a single, newly created regulator to cover all aspects of AI, and the UK taking a multi-agency approach by empowering existing regulators to oversee AI in their respective sectors.

Additionally, many countries have existing policies for participation in international technical committees. For example, more than 50 nations participate in ISO/IEC JTC 1/SC 42, the joint ISO/IEC committee concerned with standardization in the area of artificial intelligence. UK stakeholders can get involved via BSI.

Further resources on AI and Standards

Throughout our research, stakeholders reported that using standards helped them to develop a common language both internally and within their supply chain, so a good entry point to standards may be understanding the common terminology for AI and machine learning outlined in ISO/IEC 22989 and ISO/IEC 23053, respectively.

The AI Standards Hub also provides a good entry point into the world of AI and standardization. Here you can search for standards related to AI, learn more about the standards-making process, and attend the latest events from BSI and others.

As generative AI continues its rapid development, standards must continue to play their role in ensuring these systems are safe, secure, and resilient, and are trusted by both organizations and users.



Authors

Tim McGarr

AI Development lead, BSI