31 Jan 2025

Event Roundup – AI and Data Assurance and Standards with BSI

techUK and BSI recently hosted a collaborative event bringing together industry experts, policymakers, and practitioners to explore the critical role of standards in AI development and adoption. Featuring a keynote from Innovate UK in partnership with Built Environment, the event attracted diverse stakeholders eager to understand how well-designed standards can simultaneously foster innovation, build trust, and ensure regulatory alignment. 

The discussions delved into key available standards, implementation challenges, the fundamental importance of data governance, effective assurance mechanisms, and promising future directions for the field. Participants engaged in thoughtful dialogue about balancing technical requirements with ethical considerations while navigating the rapidly evolving AI landscape. 

More details on the specific topics discussed can be found in the sections below. 

 

Key Standards and Frameworks Discussed  

The discussions primarily centred on several foundational standards: ISO/IEC 5259 (Data Quality for Analytics and ML), ISO/IEC 42001 (AI Management System), ISO 31000 (Risk Management), and ISO/IEC 27001 (Information Security Management). ISO/IEC 42001, the first-ever AI management system standard, emerged as a particular focal point, being described as a "linchpin" that draws from OECD principles and provides flexible blueprints for AI implementation. Anthropic's recent certification against the standard demonstrates its practical application in industry. 

 

Implementation Challenges and Solutions 

A recurring theme throughout the event was the challenge faced by SMEs in balancing AI model scalability with standards implementation. The consensus emphasised that training proves more effective than simply reading standards documentation. Panel experts identified three critical ingredients for successful AI adoption: 

  • Adequate resource allocation 

  • Strong leadership commitment 

  • Cultivation of an innovative culture 

 

Risk Management and Data Governance 

The forum dedicated significant attention to risk and safety in data management. Experts highlighted the need for: 

  • Technology-agnostic foundation standards 

  • Enhanced frameworks for machine learning 

  • Balanced guidelines that accommodate rapidly evolving technology 

A notable insight came from distinguishing actual risk from risk perception, emphasising how ISO/IEC 42001 gives organisations flexibility in managing their risk appetite and preferences. The discussion also touched on the importance of post-market surveillance and performance drift monitoring. 
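
As a rough, purely illustrative sketch of what performance drift monitoring can look like in practice, one common pattern is to compare a rolling window of recent prediction outcomes against the accuracy measured at validation time and flag a review when the gap exceeds a tolerance. The class name, window size, and tolerance below are invented for this example and are not drawn from any of the standards discussed.

```python
from collections import deque

class DriftMonitor:
    """Illustrative post-market drift check: rolling accuracy vs. a validation baseline."""

    def __init__(self, baseline_accuracy: float, window_size: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy         # accuracy measured at validation time
        self.tolerance = tolerance                # acceptable degradation before alerting
        self.window = deque(maxlen=window_size)   # rolling record of recent hit/miss outcomes

    def record(self, prediction, actual) -> None:
        """Store whether the latest prediction matched the observed outcome."""
        self.window.append(prediction == actual)

    def drift_detected(self) -> bool:
        """Flag drift once rolling accuracy falls below baseline minus tolerance."""
        if len(self.window) < self.window.maxlen:
            return False                          # wait until the window has filled
        rolling_accuracy = sum(self.window) / len(self.window)
        return rolling_accuracy < self.baseline - self.tolerance

# Usage: feed each production prediction/outcome pair into the monitor.
monitor = DriftMonitor(baseline_accuracy=0.92, window_size=200)
for prediction, actual in [(1, 1), (0, 1), (1, 1)]:   # stand-in for a live label feed
    monitor.record(prediction, actual)
    if monitor.drift_detected():
        print("Performance drift detected: trigger review, retraining or rollback")
```

In a post-market surveillance setting, an alert of this kind would typically feed into a documented review rather than an automatic model change.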

  

Data Quality and Assurance 

Data is the feedstock for AI, and the event highlighted several critical aspects of data quality and assurance. Discussions emphasised the necessity of moving from passive to proactive data quality checks while underscoring the importance of cross-sector learning for data requirements.  
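
To illustrate the passive-to-proactive shift in concrete terms, the sketch below validates records at the point of ingestion rather than auditing quality after the fact; failing records are quarantined with their reasons instead of silently entering a training set. The schema, field names, and rules are hypothetical examples, not requirements from any standard discussed at the event.

```python
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    age: int
    country: str

def validate(record: Record) -> list[str]:
    """Return rule violations; an empty list means the record passes."""
    issues = []
    if not record.user_id:
        issues.append("missing user_id")
    if not 0 <= record.age <= 120:
        issues.append(f"implausible age: {record.age}")
    if len(record.country) != 2:
        issues.append(f"country is not a two-letter code: {record.country!r}")
    return issues

def ingest(records: list[Record]) -> tuple[list[Record], list[tuple[Record, list[str]]]]:
    """Split incoming records into accepted and quarantined-with-reasons."""
    accepted, quarantined = [], []
    for record in records:
        issues = validate(record)
        if issues:
            quarantined.append((record, issues))   # held back for human review
        else:
            accepted.append(record)                # safe to pass downstream
    return accepted, quarantined

# Usage: bad records are caught before they ever reach feature pipelines or training.
clean, flagged = ingest([Record("u1", 34, "GB"), Record("", 250, "United Kingdom")])
print(f"accepted={len(clean)}, quarantined={len(flagged)}")
```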

Participants explored the challenge of balancing technical compliance with ethical considerations, alongside examining the role of socio-technical and socio-cultural approaches in standards implementation. These interconnected aspects were recognised as fundamental to ensuring robust AI systems that can be trusted and effectively deployed across different sectors. 

  

Industry Perspectives 

A diverse panel featuring representatives from the Ada Lovelace Institute, Advai, Databricks, and DSIT discussed various aspects of data assurance and AI implementation. Key points included: 

  • The need for comprehensive model validation 

  • The importance of consistent data practices in AI development 

  • The role of incentive structures in driving AI transformation 

  • The gaps in AI safety testing and evaluation methodologies 

  

Regulatory and Ethical Considerations 

The event addressed the critical intersection of standards with regulatory requirements, emphasising standards as tools for global interoperability and regulatory compliance. Discussions delved into the importance of feedback mechanisms for addressing AI bias, alongside the need for contestability and redress actions when AI systems fall short of expectations.  

Participants emphasised the requirement for contestable and trustworthy outcomes in AI implementations, while also exploring the complex integration of compliance risk with commercial and insurance considerations. These interconnected aspects highlighted how standards serve as a crucial framework for ensuring responsible AI development while managing associated risks and maintaining public trust. 

  

Future Directions 

Participants identified several areas requiring further development: 

  • More inclusive participation of SMEs and startups in standards development 

  • Better integration of academic frameworks with industry development pace 

  • Enhanced mechanisms for public feedback and engagement 

  • Improved tools for monitoring AI system performance over time 

The event concluded with a strong emphasis on the need for a balanced approach to AI implementation, one that considers both technical and social aspects while ensuring proper risk management and due attention to ethical considerations. 

The discussions highlighted how standards can serve as practical tools for organisations of all sizes to navigate the complex landscape of AI development and adoption while maintaining trust and safety. 
