How to Ensure Digital Identities are Ethical and Inclusive
It's 2024, and ethics, diversity, and inclusion are at the forefront of societal progress. Many might argue humanity has moved on from its ethically unscrupulous, prejudiced past – a world where judgement was often based on demographic attributes. Initiatives can be found at every granularity of society, from national drives to promote equality, through corporate efforts to ensure equal opportunity, and even at the individual level, where more of us than ever feel compunction about, and are committed to challenging, our own and society's prejudiced belief systems.
The artificial intelligence sector has developed furiously and has brought with it an insatiable appetite for data to power the world's leading AI systems, whether foundation models or the computer vision models that power digital identity verification. Such is this appetite for data that data scientists and researchers have trawled through the entirety of the accessible internet, along with historical data and image repositories. That's right: that awkward year 11 yearbook photograph your friend posted on their public profile may have been purged from your mind, but chances are it will be forever embedded in the computer vision systems of the future. You can't remove an ingredient from a baked cake.
Thankfully, efforts are underway to ensure that this old, prejudiced world doesn’t rear its ugly head in our modern systems.
There has been an enormous undertaking to rectify issues of prejudice. Researchers driven by a robust morality have spent endless hours cleansing data and developing testing processes to detect and remove old prejudices from our new systems. It's easier now than ever to ensure AI systems remain ethical and inclusive.
So, here's a short, helpful guide on how to keep the AI powering digital identities ethical and inclusive.
1. Prioritise diversity in your datasets!
The first rule to ensure your AI-powered digital identity system remains ethical and inclusive is to prioritise diverse datasets above all else. This starts at the data collection stage; make a concerted effort to collect data that represents a wide range of demographics, such as ethnicities, genders, and socioeconomic backgrounds. This will help you achieve a streamlined user experience for everybody!
There are precise technical methods, like adversarial debiasing, to help you identify and prevent outdated or biased data from significantly influencing your system, so you can eliminate bias without discarding valuable data (no need to throw the baby out with the bathwater). Using diverse datasets results in fairer, more accurate outcomes, increased online sign-ups and verifications, and improved security – adversarial debiasing (ethics) and adversarial resistance (security) go hand in hand. Who wouldn't want that?
It's straightforward. Seek out comprehensive, diverse datasets for your training set, and spend the time to preprocess them and correct for bias.
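One simple preprocessing correction along these lines is reweighing: giving under-represented groups proportionally more influence on the training loss. Below is a minimal sketch; the `group` key and the toy data are illustrative, and full adversarial debiasing (which trains a second, adversary network) is beyond a few lines.

```python
from collections import Counter

def reweight(samples):
    """Compute per-sample weights so that each demographic group
    contributes equally to the training loss (simple reweighing)."""
    counts = Counter(s["group"] for s in samples)
    n_groups, total = len(counts), len(samples)
    # Weight each sample inversely to its group's frequency.
    return [total / (n_groups * counts[s["group"]]) for s in samples]

# Toy dataset: group A is over-represented 3-to-1.
data = [{"group": "A"}, {"group": "A"}, {"group": "A"}, {"group": "B"}]
weights = reweight(data)  # A samples each get 4/6; the B sample gets 2.0
```

These weights can then be passed to most training routines (e.g. a `sample_weight` argument) so the minority group is not drowned out.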
2. Test and probe your system thoroughly!
Transparency in AI is essential. There's immense value in knowing how your system works and what data it relies upon to make its decisions. Be diligent; don't unleash the pattern recognition might of AI without guardrails that ensure it acts responsibly. Analyse which features are most heavily weighted towards an output classification to ensure fairness.
For example, when classifying a potential doctor versus a nurse, ensure the system evaluates based on qualifications and relevant attributes, unlike unfiltered systems that can use historical patterns to propagate gender or racial biases.
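To make that doctor-versus-nurse check concrete, here is a minimal sketch using scikit-learn on synthetic data (all feature names and the dataset are illustrative): fit a classifier that sees both a legitimate feature and a protected attribute, then inspect the learned weights to confirm the protected attribute carries negligible influence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
qualifications = rng.normal(size=n)                  # legitimate feature
gender = rng.integers(0, 2, size=n).astype(float)    # protected attribute
# In this toy data the outcome depends only on qualifications.
y = (qualifications + 0.1 * rng.normal(size=n) > 0).astype(int)

X = np.column_stack([qualifications, gender])
model = LogisticRegression().fit(X, y)

# Inspect learned weights: the protected attribute should carry
# far less weight than the legitimate feature.
coef = dict(zip(["qualifications", "gender"], model.coef_[0]))
print(coef)
```

For non-linear models, permutation importance or SHAP values serve the same diagnostic purpose as reading coefficients does here.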
It's especially important when it comes to digital identity verification that you document how the algorithms decide who gets verified and who doesn't. That way, when people are unfairly excluded or discriminated against, you can trace it and make necessary adjustments. What gets measured gets managed!
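Documenting decisions can be as simple as appending a structured record for every verification. A hypothetical sketch (field names and the model version string are illustrative):

```python
import datetime
import json

def log_decision(record_store, user_id, features, score, verified, model_version):
    """Append an auditable record of a verification decision so that
    disputed or unfair outcomes can later be traced and investigated."""
    record_store.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "features": features,          # the inputs the model actually saw
        "score": score,                # raw model confidence
        "verified": verified,          # final decision
        "model_version": model_version,
    })

audit_log = []
log_decision(audit_log, "u-123",
             {"doc_match": 0.97, "liveness": 0.91},
             0.94, True, "cv-model-2.3")
print(json.dumps(audit_log[-1], indent=2))
```

In production this record store would be an append-only database rather than a list, but the principle is the same: capture inputs, score, and model version at decision time.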
Maintaining openness in your AI systems empowers regulators and users, pleases human rights groups, and fosters trust and collaboration throughout society (or at least doesn't inhibit it). Be transparent about your testing outcomes, too, using counterfactuals and fairness metrics.
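As one example of such a fairness metric, demographic parity measures whether the verification pass-rate differs across groups. A sketch (the data is illustrative, and any acceptable-gap threshold is a policy choice; a counterfactual test would additionally flip the group attribute and check the decision is unchanged):

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-verification rate between
    any two groups (0 means perfect parity)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 vs 0.25 -> gap of 0.5
```

Tracking a metric like this over time, per release, is what turns "what gets measured gets managed" into practice.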
3. Make your systems accessible!
Why make your digital identity system accessible? Because inclusivity expands your user base.
Design the interface to be user-friendly and intuitive, so people from all walks of life can navigate it. Consider the needs of the elderly, the economically disadvantaged, or those without access to the latest technology. Ensure your system functions smoothly even on older devices and with limited internet connections.
Let's not forget about language inclusivity. Offer your digital identity system in multiple languages. English-only limits accessibility. Nothing says "inclusive" like making a critical service usable for non-native speakers.
Consider blindness, colour blindness and other impairments. Features like screen reader compatibility, high-contrast modes, and adjustable font sizes can significantly improve usability. Accessible systems remove barriers and ensure more people have equal opportunities to benefit from the solutions enabled by digital identity verification. You can achieve this by involving a diverse range of stakeholders throughout the project and testing how a diverse array of users use the interface.
4. Embrace ethics in your organisational policies!
Ethical guidelines encourage ethical practice. Ethics policies supporting enhanced privacy, informed consent, and fairness will strengthen your team and your product. Clearly mapping who's responsible for each policy prevents the diffusion of responsibility and ensures accountability.
Enable your system to behave in ethically desirable ways by design and be transparent about the limitations of your system, so users can avoid areas of weakness. If your system doesn't work as well for certain communities, communicate and address these issues head-on, bring all necessary stakeholders together to tackle the challenges, and institute regular progress reports to keep track of mitigation attempts.
Bake these policies into your culture, your job descriptions and key performance indicators, then watch best practice ethics trickle throughout your organisation.
Whilst it may require effort to map out and align technical approaches with each ethical guideline, it's crucial. Assuring your AI systems is a technical challenge that requires coordination between a variety of technical and non-technical stakeholders. For example, we offer a service called 'Advai Insight' which is designed to surface only the relevant information to each role or function. Removing complexity and involving the right stakeholders bridges comprehension gaps and helps technical teams respond to the objectives of senior non-technical decision makers.
Not only does upholding good ethical practices help you comply with laws – we’re coming to that – but it builds invaluable trust with users and consumers. The trust earned in your past AI initiatives will lubricate the acceptance and adoption of your future AI-enhanced capabilities.
5. Adhere to regulations and standards!
Regulatory bodies play a vital role in establishing fairness, accountability, and protection for individual rights. Aside from data-specific laws, like GDPR or the AI Act, there's already a body of pre-existing law in place that requires you to, for example, respect privacy rights and ensure non-discrimination.
Be proactive in understanding and complying with these regulations. Consider them not as red tape but as guidelines that help you create better, more trustworthy and, most importantly, sustainable products. Transparency in data usage and algorithmic decision-making will continue to be a key priority of regulators this coming decade.
Don't use the excuse that the technology is too complex to explain. Invest in making your system understandable to users. Trust, fundamentally, is about understanding how something works so that how it will work in the future can be intuitively predicted. When people understand the system, they'll trust it more.
Bonus, concluding point: engage third-party assurance specialists.
AI assurance is a hard technical challenge, and ensuring that a computer vision system is ethical and inclusive is no exception. Consider working with a company like Advai. Interestingly, our own origin story involved the assurance of computer vision systems. In contrast to modern language models, the techniques to analyse and improve the robustness of computer vision systems are in a much more mature state. The tools to remove bias from these systems have been heavily researched and are largely effective.
Also, third-party assurance businesses hold no conflict of interest – we aren't trying to sell you an AI system, so our priority is to assess robustness, reliability and safety.
Ethics and inclusion truly come with the "AI Assurance" territory. Removing all kinds of bias, including demographic bias, is a big part of this. Together, we can ensure that AI systems are developed responsibly and benefit everyone.
Welcome to techUK's 2024 Digital ID Campaign Week! From the 14th to the 18th of October, we are excited to explore how our members are increasing efficiency for both businesses and users and combatting fraud, as well as the creative and innovative ways our members are expanding our understanding of Digital Identities.
Whether it’s how we’re communicating, shopping, managing our finances, dating, accessing healthcare or public services, the ability to verify identity has quickly become a critical vanguard to the Digital Economy.
Follow us on LinkedIn and use the hashtag #UnlockingDigitalID to be part of the conversation!