Swipe right for talent - AI’s role in recruitment

The rise of AI (Artificial Intelligence) and algorithms has been well documented, if not well regulated, over the last couple of years. It is perhaps not quite what was envisaged in Minority Report or 1984, but few can say that the emergence of tools such as ChatGPT or Google Search's generative AI summaries is not impressive and often very helpful.

AI’s long-touted role in society is to enable automation of standard processes to increase efficiency, cut costs, and eliminate human error. However, are there areas where human discretion is preferable or even required by law? Can AI or automated processes be used in employee recruitment, management, and dismissal?

Using AI in recruitment

The Equality Act 2010 prohibits discrimination in the workplace in relation to the nine protected characteristics: age, disability, race, religion or belief, sex, marriage and civil partnership, sexual orientation, gender reassignment, and pregnancy and maternity. This protection extends to the recruitment process. Organisations are obliged to use non-discriminatory recruitment processes or risk falling foul of equality legislation.

Failing to comply with the Equality Act can have significant reputational and financial consequences for employers. While much of the perceived benefit of AI is that it is fully automated and so lacks the likes, dislikes, and biases people have, it still needs to be ‘trained’ on existing data. That data often reflects the inequalities and biases prevalent in society and can therefore perpetuate them in the tool’s results.

In one high-profile example, Amazon had to stop using a tool to screen CVs before inviting candidates to interview. The technology had been trained on 10 years’ worth of data; however, that data overwhelmingly related to men. This taught the AI tool to penalise CVs which indicated that the applicant was a woman.

There is also the potential for indirect discrimination claims arising from the thoughtless application of rigid rules. For example, where there is a gap in a candidate’s CV, a human reviewer could recognise that the gap reflects a career break taken to raise a family. An AI tool that is not trained on data sets which account for this, and that is programmed to make decisions without context, would not accommodate it and may disadvantage the applicant simply because of a gap in their work history.

There is huge potential for AI to streamline and improve recruitment processes, but it must operate lawfully within the existing legal framework.

A significant advantage of AI in recruitment is that a consistent process is applied to all candidates. This may seem to be another way in which utilising AI ensures procedural fairness and non-discrimination; however, the Equality Act 2010 also places a specific obligation on employers to make reasonable adjustments to their systems and processes if they are found to place disabled people at a particular disadvantage. The ability of AI to identify the need for adjustments, and to implement them, is likely to be limited without careful programming.

Mobley v Workday Inc

In Mobley, a case heard in a court in California last year, Derek Mobley alleged that the AI applicant screening platform created by Workday Inc (a software provider) and used by various prospective employers had discriminated against him and others who shared his protected characteristics. Mobley is an African American man over the age of 40. He holds a finance degree from Morehouse College, an all-male, historically black college. He also suffers from anxiety and depression.

The applicant screening platform conducts an initial review of the applicant’s CV and, in some cases, requires a personality test to be completed. Mobley alleged that his age and race could be ascertained from his CV, and that the subsequent tests were likely to uncover mental health disorders or cognitive impairments. He alleged that those who suffer from depression and anxiety are likely to perform worse in these assessments and to be screened out.

The California court accepted Mobley’s assertions and allowed his claim to proceed. It agreed that, although Workday did not intend to discriminate, the AI it had developed nonetheless had a disproportionately negative impact on black, older and disabled applicants. Workday was under an obligation to ensure that its practices and the way it provided services to its clients were not discriminatory towards applicants.

Although Mobley was decided in a court in California, it sets an important precedent in an area where many jurisdictions lack existing authority. It underlines that, while many deploy AI specifically to ensure neutrality, AI is only as impartial as the data used to train it. Where there is inherent societal bias, whether related to age, disability, gender, or any other protected characteristic, AI can perpetuate it, creating risk and conflict in a field which requires unfair or discriminatory processes and treatment to be eliminated or mitigated.

Conclusion

While the potential of AI to enhance efficiency and reduce human error is undeniable, its unchecked use, particularly in areas such as employment, raises significant concerns around discrimination and fairness.

Its application in areas such as hiring and employee management requires careful scrutiny to ensure it aligns with existing legal frameworks in the UK.

There is significant risk in relying solely on AI systems that can inadvertently perpetuate biases embedded in training data, underscoring the need for careful oversight and regulation.

In the UK, existing laws like the Equality Act 2010 must be applied to ensure that AI in recruitment does not undermine the principles of equality and non-discrimination. As AI technology continues to evolve, it is crucial that businesses, developers, and policymakers work together to create frameworks that ensure AI systems are not only efficient but also ethical and inclusive.

Ultimately, while AI can be a powerful tool for progress, its implementation must be guided by the principle that human oversight and discretion are essential in protecting fairness and avoiding unintended harm.

This update contains general information only and does not constitute legal or other professional advice.
