OECD incident report: definitions for AI incidents and related terms

As AI is more widely used across industries, the potential for AI systems to cause harm - whether through unintentional bugs, misuse or malicious attacks - also increases. Clear definitions help identify and prevent such incidents: common definitions of AI incidents, hazards and related terms allow the UK tech industry, regulators, and others to align on terminology. This shared understanding facilitates cross-organisation and cross-border learning from AI incidents.

The OECD has released a report proposing a draft definition of an "AI incident" as an event where the development, use or malfunction of an AI system directly or indirectly leads to harms such as injury, disruption of critical infrastructure, human rights violations, or property/environmental damage. 

Agreed definitions like these allow for consistent governance of AI's risks and preparation for possible technology or application failures. Such governance is crucial for enabling the tech industry's trustworthy and sustainable development of ever more capable AI systems. The OECD report marks an important first step towards establishing governance frameworks that ensure AI's safe and trusted development.

Defining through Differentiation: Actual versus Potential Harms 

The OECD report takes the differentiation between actual and potential harm as a starting point for defining AI incidents and related terms.  

Actual harm refers to tangible consequences that have already occurred, whereas potential harm denotes the risk or likelihood of harm occurring in the future. Evaluating potential harm is as crucial as assessing actual incidents, and the differentiation is essential for effective risk management and the ethical deployment of AI.

Understanding this distinction allows organisations to address the risks associated with AI systems proactively and to ensure responsible deployment practices, fostering transparency and accountability in AI governance.

Actual Harms: AI Incident, Serious AI Incident and AI Disaster

The report defines three types of actual harm; read more and review examples here (pp. 11–12). These are the definitions provided:

  1. AI Incident: An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to harms (the report provides a list of harms)  

  2. Serious AI Incident: A serious AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to serious harms, such as the death of a person, serious and irreversible disruption of critical infrastructure or serious violations of human rights (the report provides the full list of harms)

  3. AI Disaster: An AI disaster is a serious AI incident that disrupts the functioning of a community or a society and that may test or exceed its capacity to cope, using its own resources. The effect of an AI disaster can be immediate and localised, or widespread and last for a long period of time.

Potential Harms: AI Hazards and Serious AI Hazards 

The report defines two types of potential harm; read more and review examples here (pp. 13–14). These are the definitions provided:

  1. AI Hazard: An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident (the report provides a list of harms)

  2. Serious AI Hazard: A serious AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to a serious AI incident or AI disaster (the report provides a list of harms)
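For readers who think in data structures, the taxonomy above can be read as a simple classification schema. The sketch below is purely illustrative and not drawn from the OECD report: the class names, fields and triage logic are assumptions about how an organisation's internal incident register might encode the five terms.

```python
from dataclasses import dataclass
from enum import Enum


class HarmStatus(Enum):
    """Whether harm has actually occurred or is only a plausible risk."""
    ACTUAL = "actual"
    POTENTIAL = "potential"


class OECDTerm(Enum):
    """The five terms defined in the OECD report."""
    AI_HAZARD = "AI hazard"
    SERIOUS_AI_HAZARD = "serious AI hazard"
    AI_INCIDENT = "AI incident"
    SERIOUS_AI_INCIDENT = "serious AI incident"
    AI_DISASTER = "AI disaster"


@dataclass
class AIEventRecord:
    """A minimal register entry; all field names are illustrative."""
    description: str
    harm_status: HarmStatus            # actual vs potential harm
    serious: bool = False              # meets the report's 'serious harm' threshold
    societal_disruption: bool = False  # disrupts a community's capacity to cope

    def categorise(self) -> OECDTerm:
        """Map this record onto one of the five OECD terms."""
        if self.harm_status is HarmStatus.POTENTIAL:
            return OECDTerm.SERIOUS_AI_HAZARD if self.serious else OECDTerm.AI_HAZARD
        if self.serious and self.societal_disruption:
            return OECDTerm.AI_DISASTER
        return OECDTerm.SERIOUS_AI_INCIDENT if self.serious else OECDTerm.AI_INCIDENT


# Example: a bias issue caught in pre-deployment testing (no actual harm yet)
record = AIEventRecord("Bias found during pre-deployment testing",
                       HarmStatus.POTENTIAL)
print(record.categorise())  # OECDTerm.AI_HAZARD
```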

So, what does this mean for UK AI companies and the wider UK AI industry?

As AI risk management regulations emerge (e.g. the EU AI Act), UK tech companies need clear definitions to understand their compliance obligations around AI incident reporting, risk assessments and other forms of AI assurance. Well-defined terms for AI incidents lay the groundwork for frameworks to systematically report, analyse and respond to such events, increasing accountability and strengthening incentives for responsible AI development.

The dimensions of harm outlined by the OECD can guide how tech companies assess and mitigate risks across the AI system lifecycle - from data issues to model vulnerabilities to real-world deployment hazards.

Read more here: Defining AI incidents and related terms (OECD)

 

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where her work centred on the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks in their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, the Montreal AI Ethics Institute, The FinTech Times and Finance Digest, covering topics like CDR, AI ethics and tech governance and drawing on company insights to contribute industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University where she joint-majored in International Development and Philosophy, minoring in communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music. 

Email: [email protected]
Website: tessbuckley.me
LinkedIn: https://www.linkedin.com/in/tesssbuckley/
