
07 Jun 2024

Tackling the bias blindspot in AI

Read this London Tech Week blog from Mercator Digital on AI and bias.

As someone who works as a content generator in the tech industry, I've become increasingly concerned about the issue of bias in artificial intelligence. AI systems are fundamentally shaped by the data they are trained on, and that training data reflects the demographics and perspectives of the people who create it. Unfortunately, the tech world has long suffered from a lack of diversity – women make up only around 29% of the workforce, and ethnic minorities are even less well represented at 22% (Statista). This lack of diversity gets baked into the AI models that influence everything from online search to recruitment tools to content curation.

The root of the problem lies in the fact that much of the text data feeding today's large language models and other AI systems has been written primarily by a relatively narrow segment of the population. Even the best intentions to remove bias from these models run up against the bias inherent in their training data. An AI trained mostly on text from one demographic group will inevitably take on the perspectives, turns of phrase, and blind spots of that group.

We've already seen numerous reports of AI failing to be inclusive, as evidenced by a recent test by TechRadar involving Meta's new AI chatbot, launched in April 2024, which highlighted ongoing biases in generative AI. The test revealed that prompts like "Indian men" predominantly generated images of men wearing turbans, despite this representing a small demographic. And a new report (April 2024) led by researchers from UCL concluded that the most popular AI tools show prejudice against women as well as against different cultures and sexualities.

One step that will help to address this bias is to make a concerted effort to bring more diversity into the tech workforce, particularly in roles that shape AI development. We need people from all backgrounds - women, people of colour, those from different socioeconomic levels, individuals across the LGBTQ+ spectrum, people with disabilities, and representatives of diverse cultures, religions, and geographic regions. Their voices, experiences, and perspectives need to be represented in the data feeding AI models.

Initiatives like the recently approved EU AI Act, which mandates testing AI systems for discriminatory bias and implementing risk mitigation measures, are a step in the right direction. However, such regulations alone cannot solve the systemic lack of diversity that leads to biased AI in the first place. At Mercator Digital, we're on a journey to attract and encourage more diversity into tech - by partnering with organisations like Next Tech Girls, upholding the Armed Forces Covenant, running outreach programmes to a range of schools and universities, and being mindful of inclusive recruitment practices. Our Mercator Academy actively seeks people from varied backgrounds and career paths. And as Head of Brand and Marketing, I believe it's part of my duty to help change the narrative and normalise more diverse representation in the industry.

This isn't just about being equitable and inclusive, though those are obviously critical goals in and of themselves. It's also about creating better AI. The more varied the dataset used to train a language model, the more nuanced, broadly applicable, and less biased that model will be. We are leaving incredible amounts of knowledge and understanding untapped by failing to bring diverse voices to the table.

In the tech sector, we too often pay lip service to diversity and inclusion without taking the difficult steps to address the root causes. We can't create technology that works well for everyone without involving diverse perspectives in building it. Prioritising the recruitment, retention, and elevation of underrepresented talent is crucial to chipping away at the bias baked into AI training data. While a difficult challenge, increasing diversity is essential for AI to provide fair and beneficial outcomes for everyone. As AI's impact expands, the models defining our future need to represent the full spectrum of human perspectives and experiences.

Author: Zoe Billingham, Head of Brand, Mercator Digital

techUK and TechSkills at London Tech Week

techUK is proud to once again be a strategic partner of London Tech Week. The event is a fantastic showcase for UK tech, and we’re pleased that techUK members and colleagues are represented on the agenda.

We are pleased to be working with 20 techUK members at London Tech Week to highlight the positive impact of the tech industry in the UK. 

You can find more details on the author of this blog below:

Mercator Digital

Digital transformation specialists

London Tech Week: Fringe events

techUK is leading several fringe events across the week on topics as varied as Justice and Emergency Services, Local Public Services, Digital ID, and international trade. Visit the pages below to register:
