6 Steps to Developing Ethical AI Systems

Charles Radclyffe (@dataphilosopher), Partner at EthicsGrade, writes about the six key steps required to ethically develop responsible AI systems.

Creating ‘safe’ technology should never refer only to the technical faults a gadget might cause. It doesn’t stop at whether your Alexa might overheat and set itself on fire. Technology, and the structures around it, also raise ethical issues: the impact a system might have on society and on marginalised groups, and the problems that might arise in the future.

In developing ethical AI systems, I propose a simple(ish) approach…

AI ethics from the ground up

An individual company’s ethical AI framework will look different according to its industry, its country, and how it uses AI, whether in its products or within its workforce. Organisations also require transparent and democratic governance structures that encourage open conversation and a positive environment for acknowledging error and making change.

Conversations around ethical AI

Debate is growing across industries, public interest groups and policy circles about how we should use technologies and data responsibly while minimising harm and risk to the public. Whilst many technologies are high risk, the nature of those risks, and how best to mitigate them, will involve different conversations with different voices.

Participating in such conversations, considering their recommendations when developing your technology and governance structures, and aligning with upcoming legislation are clear ways of upholding ethics. These discussions are growing, and now is the time to get involved.

Building trustworthy systems

Throughout the development of an AI system, steps must be taken to monitor and review its decision-making processes so that risks such as bias can be identified and mitigated.

Earlier this month, the European Commission proposed legislation to prohibit some high-risk uses of AI systems and to require greater transparency for others. The Commission asks for clear and accessible disclosure of a high-risk system’s purpose, its potential risks and its logs, with the aim of protecting the freedoms and fundamental rights of the general public. Much of the proposal focuses on the need to continually review and monitor an AI system’s decision-making processes.
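As a practical starting point, decisions can be recorded in a form that supports later review. Below is a minimal sketch in Python of an append-only audit log for model decisions; the record fields, file format and example values are illustrative assumptions on my part, not requirements drawn from the Commission’s proposal.

    import hashlib
    import json
    from datetime import datetime, timezone

    AUDIT_LOG = "decisions.jsonl"  # illustrative path; one JSON record per line

    def log_decision(model_version: str, inputs: dict, output, path: str = AUDIT_LOG) -> None:
        """Append one model decision to an audit log for later review.

        Inputs are stored as a hash, so the log can be retained for audit
        without keeping raw personal data alongside every decision.
        """
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # e.g. log_decision("credit-model-1.3", {"income": 42000}, "declined")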

Mitigating ethical risk

Diverse and marginalised voices must be heard throughout the development process. Biased machines, such as racist chatbots, stem from a limited view of society. Discussions about how to train models so that they exhibit less bias are vital, as is working out how to apply those techniques when building AI.
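One concrete way to surface such bias is to check whether a model’s positive predictions are distributed evenly across groups. The plain-Python sketch below computes a simple demographic parity gap; it is one illustrative fairness metric among many, not a complete audit, and the example data is made up.

    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return the largest difference in positive-prediction rates
        between groups, plus the per-group rates."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    gap, rates = demographic_parity_gap(
        predictions=[1, 0, 1, 1, 0, 0, 1, 0],
        groups=["a", "a", "a", "a", "b", "b", "b", "b"],
    )
    print(rates)  # {'a': 0.75, 'b': 0.25}
    print(gap)    # 0.5

A large gap does not prove wrongdoing on its own, but it is a clear signal that the model’s behaviour across groups deserves scrutiny.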

We’ve also seen how companies’ reactions to employees raising issues internally can create reputational risk. To mitigate both reputational and ethical risk, take steps to listen to, and act on, the concerns raised.

Data privacy

Informed consent and data privacy are critical to building ethical AI systems. The public must understand where, when and how their data is being used, how this may affect them, and what options they have to minimise any risk that arises when they submit data. Get this wrong and you may face significant fines, as data privacy, security and consent have risen up the public policy agenda.
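In code, one simple safeguard is to gate every processing step on an explicit, purpose-specific consent record. The Python sketch below is a minimal illustration; the class and function names are hypothetical, and a real system would also need consent capture, withdrawal and auditing.

    from dataclasses import dataclass, field

    @dataclass
    class ConsentRecord:
        user_id: str
        purposes: set = field(default_factory=set)  # e.g. {"analytics"}

    class ConsentError(PermissionError):
        pass

    def process_for_purpose(record: ConsentRecord, purpose: str, data: dict) -> dict:
        """Refuse to process data unless the user consented to this specific purpose."""
        if purpose not in record.purposes:
            raise ConsentError(f"user {record.user_id} has not consented to '{purpose}'")
        # ... actual processing would go here ...
        return {"user": record.user_id, "purpose": purpose, "fields": sorted(data)}

    alice = ConsentRecord("alice", {"analytics"})
    process_for_purpose(alice, "analytics", {"page": "home"})  # allowed
    # process_for_purpose(alice, "marketing", {...})           # raises ConsentError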

Building sustainable AI

Building responsible AI doesn’t just require considering an algorithm’s impact on society or on individuals; you also need to consider its impact on the environment. While AI shows promise in the fight against climate change, complex and inefficient training pipelines can produce more CO2 than necessary. According to research, training a single AI model can emit as much carbon as five cars do over their lifetimes.
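To make that cost concrete, a rough estimate can be derived from hardware power draw, training time, datacentre overhead (PUE) and grid carbon intensity. The figures below are illustrative assumptions for a back-of-envelope calculation, not measured values.

    def training_co2_kg(gpu_count: int, gpu_watts: float, hours: float,
                        pue: float = 1.5, grid_kg_per_kwh: float = 0.4) -> float:
        """Back-of-envelope CO2 estimate for a training run.

        energy_kwh = gpus * watts * hours / 1000, scaled by datacentre PUE;
        emissions  = energy_kwh * grid carbon intensity (kg CO2 per kWh).
        Both default factors are illustrative assumptions.
        """
        energy_kwh = gpu_count * gpu_watts * hours / 1000 * pue
        return energy_kwh * grid_kg_per_kwh

    # e.g. 8 GPUs at 300 W running for two weeks (336 hours):
    print(round(training_co2_kg(8, 300.0, 336.0), 1), "kg CO2")  # 483.8 kg CO2

Even a modest run like this produces hundreds of kilograms of CO2, which is why leaner models and cleaner energy sources belong in any responsible AI checklist.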

Building responsible AI may not be as simple as it first appears. But the cost is worth it in the long run: it helps you avoid reputational risk, non-compliance fines and legal battles.

Author:

Charles Radclyffe (@dataphilosopher), Partner at EthicsGrade.

You can read all insights from techUK's AI Week here