Recruitment and AI in the workplace
Today, employers can access more employee-related data than ever before. Whilst the collection and processing of that data have been subject to significant regulation, most notably via the Data Protection Act 2018, an issue is developing regarding the use of artificial intelligence (“AI”) in the workplace.
Algorithms and machine learning, both at the centre of workplace AI adoption, have the potential to cause employers significant issues if not implemented carefully.
Recruitment AI
Employers can use a range of AI tools to streamline their recruitment processes. These largely centre around automated decision-making, including:
- Web-based recruitment companies can post targeted ads for job roles based on the type of candidate they believe would be successful.
- AI models can filter through CVs for phrases or qualifications which they have been programmed to select.
- Profiling can be used in video interviews - software has been developed to recognise characteristics of a “desirable” candidate, such as clothing. An AI model will use that data point to decide whether the real candidate is suitable.
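To make the CV-filtering example above concrete, the sketch below shows a minimal keyword-based screen of the kind such tools rely on. The required phrases and CV texts are hypothetical; real products use far more sophisticated matching, but the core mechanism, selecting only what the model has been programmed to look for, is the same:

```python
# A minimal sketch of keyword-based CV screening.
# The phrases and CV texts below are hypothetical examples.

REQUIRED_PHRASES = ["python", "project management"]  # programmed selection criteria

def shortlisted(cv_text: str) -> bool:
    """Return True only if the CV contains every required phrase."""
    text = cv_text.lower()
    return all(phrase in text for phrase in REQUIRED_PHRASES)

cvs = {
    "candidate_a": "Experienced in Python and project management.",
    "candidate_b": "Led cross-functional delivery teams; expert software engineer.",
}

shortlist = [name for name, cv in cvs.items() if shortlisted(cv)]
print(shortlist)  # ['candidate_a']
```

Note that candidate_b is rejected despite describing equivalent experience in different words, illustrating how a rigidly programmed filter can produce flawed outcomes.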
The above uses of AI can lead to flawed recruitment processes, causing issues for employers. AI models and algorithms can be inherently biased, as they can only learn from the data given to them.
Discrimination is a risk
The International Association of Privacy Professionals (IAPP) recommends caution when applying models to real-life data subjects. One model reported on by the IAPP that was “designed to differentiate between huskies and wolves, was not learning any anatomical differences. Instead, it learned that wolves are more likely to be in pictures with snow than huskies are. This drove a significant amount of the model’s predictions, but was not actually relevant to the task of telling two similar animals apart.” The model had mistakenly focused on and retained data that was not intended to be targeted, and had used it as a key determinant in separating the two, making an automated decision based on an irrelevant feature.
Biased algorithms or incorrect modelling can adversely affect hiring. Recruitment agencies using automated decision-making applications can incorrectly identify candidates as the ‘right fit’ for a role based on, for example, the colour of the shirt they are wearing.
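The husky-and-wolf problem described above can be illustrated with a toy sketch. The data below is entirely invented: each “photo” has one genuine anatomical cue (imperfectly recorded) and one irrelevant background cue that happens to correlate with wolves in the training set. A model that simply picks the most predictive feature latches onto the background:

```python
# Toy illustration (hypothetical data) of a model learning a spurious feature.
# Each row: (anatomical_cue, snowy_background, label) where label 1 = wolf, 0 = husky.
train = [
    (1, 1, 1), (1, 1, 1), (0, 1, 1),   # wolves: all photographed in snow
    (0, 0, 0), (0, 0, 0), (1, 0, 0),   # huskies: all photographed on grass
]

def accuracy(feature_index: int) -> float:
    """Training accuracy of the rule 'predict wolf when this feature is present'."""
    return sum(row[feature_index] == row[2] for row in train) / len(train)

# The spurious snow feature looks perfectly predictive on the training data,
# so the model selects it over the noisy (but genuine) anatomical cue.
best_feature = max([0, 1], key=accuracy)   # index 1 = snowy_background

# A husky photographed in snow is then misclassified as a wolf.
husky_in_snow = (0, 1)
prediction = "wolf" if husky_in_snow[best_feature] == 1 else "husky"
print(prediction)  # wolf
```

In recruitment terms, the snowy background plays the same role as shirt colour: a feature that correlates with past hiring decisions in the training data but is irrelevant to candidate suitability.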
As a result, the capacity for AI to lead to unfair or biased decision-making is coming under public scrutiny, and the importance of effective AI regulation to support employers with its use is growing. The ICO plans to publish biometric guidance in spring 2023. At present, there are limited statutory protections available to employers or employees, and only two cases of AI-based discrimination claims have made it to an Employment Tribunal to date (although many more are expected over the coming years). The relevant legislation is the Data Protection Act 2018, which restricts the processing of special categories of data, including racial or ethnic origin, religious beliefs, genetics, biometric or health data, and data about an individual's sex life or sexual orientation, and the Equality Act 2010, which sets out various protections against discrimination.
Looking to the future
It is possible that the UK Parliament will soon pass the Data Protection and Digital Information (DPDI) Bill, which was initially laid before Parliament in July 2022. However, this is likely to be delayed due to the Government wishing to engage in further consultation.
Given the ever-increasing use of technology to enhance and streamline employer processes, it is crucial that UK AI regulation is introduced to help employers implement their internal AI processes.
Authors: Robert Forsyth and Jo Tunnicliff, Shoosmiths