Event Round-Up: Risk Management Frameworks for Responsible AI
In a thought-provoking webinar hosted by techUK, industry experts came together to explore the critical intersection of risk management frameworks (RMF) and digital ethics. The panel, comprising leaders from research institutes, artificial intelligence (AI) governance firms, and ethics innovation organisations, shared valuable insights on operationalising ethics in AI development and deployment. The discussion offered practical guidance on building an effective RMF while upholding ethical principles.
The webinar, part of techUK's Decoding Digital Ethics Webinar Series, highlighted how RMF can serve as a crucial form of AI assurance, helping AI systems earn justified trust through evidenced actions that demonstrate ethical development.
We were joined by a group of leaders in responsible AI practices:
- Zachary Goldberg, Ethics Innovation Manager, Trilateral Research
- Pauline Norstrom, Founder, Anekanta AI
- Lara Groves, Senior Researcher, The Ada Lovelace Institute
- Sachin Beepath, Deputy Head of AI Assurance, Holistic AI
- Chair: Tess Buckley, Programme Manager, Digital Ethics and AI Safety, Tech and Innovation Programme
Our panel of experts shared current industry applications, showcasing how leading organisations are leveraging these frameworks to enhance AI safety. Attendees received valuable resources, including the OECD AI Principles, the NIST AI Risk Management Framework, and ISO/IEC 42001 and 27001, to help them start or enhance their own risk management efforts.
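For readers who want to see what putting one of these resources to work might look like, here is a minimal sketch of a coverage tracker for the NIST AI RMF's four core functions (Govern, Map, Measure, Manage). The four function names come from NIST AI RMF 1.0; the class design and example activities are purely illustrative assumptions, not tooling discussed by the panel.

```python
# Illustrative only: a simple coverage tracker for the four core
# functions of the NIST AI Risk Management Framework. The example
# activities recorded below are hypothetical.
from dataclasses import dataclass, field

NIST_AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class FrameworkCoverage:
    activities: dict[str, list[str]] = field(
        default_factory=lambda: {f: [] for f in NIST_AI_RMF_FUNCTIONS}
    )

    def record(self, function: str, activity: str) -> None:
        if function not in self.activities:
            raise ValueError(f"Unknown RMF function: {function}")
        self.activities[function].append(activity)

    def gaps(self) -> list[str]:
        # Functions with no recorded activity yet.
        return [f for f, acts in self.activities.items() if not acts]

coverage = FrameworkCoverage()
coverage.record("Govern", "Board-approved AI risk policy")          # hypothetical
coverage.record("Map", "Stakeholder impact mapping for a pilot")   # hypothetical
print(coverage.gaps())  # -> ['Measure', 'Manage']
```

Even a toy tracker like this makes gaps visible early, which echoes the panel's point that assurance rests on evidenced actions rather than stated intentions.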
Understanding RMF in Responsible AI
As noted by our panelists, risk management frameworks in AI draw from established practices in safety-critical sectors but have evolved to address the unique challenges posed by AI. These frameworks help organisations better manage the risks that AI systems pose to individuals, organisations, and society. The panel emphasised that effective risk management should cover the entire AI lifecycle, from initial design through development, deployment, and ongoing evaluation. This comprehensive approach ensures that potential risks are identified and mitigated at every stage, rather than being treated as an afterthought. Please note that the discussion used the OECD definition of AI throughout.
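As a rough illustration of what covering the entire lifecycle could look like in practice, the sketch below models a risk register keyed by lifecycle stage, so that risks identified at design time are tracked alongside those found at deployment. The stage names, fields, and example entries are assumptions made for illustration, not a standard schema.

```python
# A minimal lifecycle risk register: every risk is tied to the stage
# at which it was identified, so nothing is deferred to deployment.
# Stage names, fields, and entries are illustrative assumptions.
from dataclasses import dataclass

LIFECYCLE_STAGES = ("design", "development", "deployment", "evaluation")

@dataclass
class Risk:
    stage: str          # lifecycle stage where the risk was identified
    description: str
    mitigation: str     # empty string means no mitigation agreed yet
    owner: str

def unmitigated(risks: list[Risk]) -> list[Risk]:
    """Return risks logged without a mitigation, at any lifecycle stage."""
    return [r for r in risks if not r.mitigation]

register = [
    Risk("design", "Training data may under-represent key user groups",
         "Commission a data audit before development", "ethics lead"),
    Risk("deployment", "Model drift after release", "", "ML ops"),
]
for r in unmitigated(register):
    print(f"[{r.stage}] unmitigated: {r.description}")
```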
The Business Imperative for RMF
According to our panel, the business case for AI assurance has strengthened significantly in recent years, with risk management shifting from being a "nice to have" to an essential component of AI development.
Strong business drivers include regulatory compliance, investment attraction, and building stakeholder trust. As AI systems increasingly interact with each other, early risk identification becomes crucial to prevent compounding effects. The panel noted that organisations that implement robust risk management frameworks are better positioned to attract investment and maintain public trust in their AI solutions.
Implementation Hurdles to Consider
The session acknowledged that implementing comprehensive risk management frameworks comes with significant challenges. These frameworks are resource-intensive, requiring substantial time investment and skilled personnel across multidisciplinary teams.
Another barrier is the struggle to keep pace with an evolving regulatory landscape while contending with limited AI literacy across industries, which can undermine risk assessment capabilities. Finally, a significant challenge lies in establishing appropriate risk appetites and balancing trade-offs, particularly when ethical considerations come into play.
Navigating Global Contexts
For organisations operating across borders, the complexity increases substantially. The panel advised that organisations operating across jurisdictions should aim for the highest standards rather than settling for minimum compliance.
The EU AI Act was highlighted as having a "halo effect," influencing practices globally even in regions where it doesn't directly apply. The discussion emphasised that while universal baselines, such as the OECD principles, provide a foundation, they must be adapted for local contexts while maintaining alignment with fundamental human rights frameworks.
Organisational Transformation and the Role of Standards
Organisational change management emerged as a crucial factor in successful implementation. The panel stressed that success requires genuine top-down support from the board level, with clear organisational values helping to guide decision-making throughout the company.
According to our panelists, this needs to be coupled with truly interdisciplinary teams that bring together technical, ethical, and legal expertise. The experts emphasised that ethics teams should work closely with technical teams to ensure that ethical principles are effectively translated into technical requirements and implementations.
The role of standards was also noted: in the UK in particular, the emerging ISO/IEC 42001 was discussed as an important reference point for organisations. While sector-specific approaches were acknowledged as valuable, the panel emphasised the need for universal baselines that can be adapted to different contexts. Standards help provide structure for implementing ethical principles and ensure consistency across organisations, though they need regular updates to keep pace with technological advances and an evolving understanding of AI risks.
The Relationship between Risk Appetite and Ethics
The discussion highlighted an interesting dynamic between risk appetite and ethical considerations, particularly in industrial settings. Risk appetite isn't just determined by internal factors but is influenced by entire value chains and stakeholder expectations.
The panel noted that in safety-critical systems, risk appetite tends to be lower, with organisations often waiting for standardisation before implementing new AI technologies. This creates an interesting tension between innovation and risk management that organisations should carefully navigate.
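One way to picture how a lower risk appetite changes decisions in safety-critical settings is the classic likelihood-times-impact score compared against a domain-specific appetite threshold. The sketch below is a toy example: the 1-5 scales and the threshold values are invented for illustration and were not figures given in the webinar.

```python
# Illustrative only: a likelihood x impact risk score compared against
# a domain-specific appetite threshold. The 1-5 scales and example
# thresholds are assumptions, not values from the webinar.
def risk_score(likelihood: int, impact: int) -> int:
    """Score on a 1-25 scale from 1-5 likelihood and impact ratings."""
    return likelihood * impact

# Safety-critical domains set a lower threshold, so the same risk that
# is acceptable elsewhere triggers mitigation or a wait for standards.
APPETITE = {"safety_critical": 6, "general_enterprise": 12}

score = risk_score(likelihood=3, impact=3)  # score = 9
for domain, threshold in APPETITE.items():
    verdict = "exceeds appetite" if score > threshold else "within appetite"
    print(f"{domain}: score {score} {verdict} (threshold {threshold})")
```

The same risk score lands on opposite sides of the two thresholds, which is the tension the panel described: a lower appetite slows adoption until mitigations or standards bring the score down.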
Building AI Literacy
The importance of AI literacy and training emerged as a crucial theme throughout the discussion. Organisations should invest in building AI literacy at all levels, from board members to operational staff, to ensure effective risk management.
This includes understanding both the technical aspects of AI systems and their potential ethical implications. The panel suggested that organisations should provide regular training and updates to keep pace with rapidly evolving AI technology and its associated risks.
Conclusion: Practical Steps Forward
Looking ahead, the experts emphasised that while compliance is important, true AI trustworthiness goes beyond checkbox exercises. Organisations need to focus on building genuine responsibility into their AI development practices through several key approaches: investing in team upskilling, embracing multidisciplinary approaches, starting with ethical principles as a baseline, adapting frameworks to specific use cases and sectors, and maintaining continuous monitoring and iteration. The panel stressed that this is not a one-time exercise but rather an ongoing process that requires regular review and update.
The session concluded with a strong emphasis on the practical steps organisations can take to begin or enhance their AI risk management journey. Starting with established principles like the OECD framework and building up to more specific standards and regulations provides a structured approach to implementation. The experts noted that while the task might seem daunting, particularly for smaller organisations, there are increasing resources and support available through industry bodies and professional organisations.
The webinar made clear that while implementing comprehensive risk management frameworks requires significant effort, it's becoming increasingly crucial for organisations developing or deploying AI systems. As the regulatory landscape evolves and public awareness grows, having robust risk management practices will be essential for building and maintaining trust in AI technologies. The future of AI development will likely see even greater emphasis on risk management and ethical considerations, making it crucial for organisations to establish strong foundations now.
If you have found this summary of our Decoding Digital Ethics webinar session interesting and would like to find out more about techUK’s work on digital ethics and AI safety, please contact Tess Buckley at [email protected].