When algorithms undermine equality of opportunity

Employers harnessing social media or artificial intelligence to reach new recruits and remove bias from their processes may find themselves inadvertently discriminating against the very people they are trying to attract.

Figures from the Office for National Statistics show there are almost 1.3 million vacancies currently on offer, and businesses are having to work harder than ever to get their jobs in front of candidates.

In this challenging market, more employers are embracing social media advertising to fill their vacancies, in pursuit of a wider audience. But algorithms used by platforms such as Facebook may mean users do not see job adverts because of their age or gender, even when an employer conducts what they believe to be an unrestricted advertising campaign.

Research by Global Witness found that adverts for mechanics’ jobs were shown almost entirely to male users, while nursery nurse positions were delivered to a predominantly female audience. Other research has shown that job advertisements may not be delivered to older users.

Age and gender are just two of the characteristics protected under the Equality Act 2010, which was designed to safeguard against discrimination, harassment and victimisation. Other protected characteristics include race, sexual orientation, religion or belief, disability and gender reassignment. In the workplace, the law requires employers to guard against discrimination from the very start of the recruitment process, through employment, to termination.

But even though an employer may act with the best intentions, they may find themselves breaking the law if their advertising does not reach all groups, as indirect discrimination could be taking place through social media’s automated ad targeting.

Another potential source of recruitment bias lies in the increasing use of machine learning and artificial intelligence (AI) for recruitment purposes. Whether these tools are used to sift applications or to conduct initial interviews, they present many challenges for both HR and data management compliance.

Bias may stem from the underlying programming and data sources: the data used to train machine-learning software may under-represent particular genders or ethnic groups, or bias may be introduced by those who write the algorithms.

And when tech solutions are designed to learn a recruiter’s preferences automatically and use those learned patterns to identify similar applicants in future, bias can become reinforced rather than removed, as the sketch below illustrates.
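To see how that feedback loop arises, consider a deliberately simplified sketch in Python. All data, names and the shortlisting rule here are invented for illustration and are not drawn from any real recruitment tool. A screening model “trained” on past shortlisting decisions learns whatever pattern those decisions contain; here, past recruiters favoured graduates of one university, and the model reproduces that preference for every future applicant.

    # Hypothetical illustration: a model trained on past screening
    # decisions reproduces whatever bias those decisions contain.

    # Past decisions: (years_experience, attended_university_x, shortlisted).
    # Suppose recruiters historically favoured graduates of University X.
    history = [
        (5, True,  True),
        (2, True,  True),
        (6, True,  True),
        (7, False, False),  # more experienced, yet not shortlisted
        (4, False, False),
        (3, False, False),
    ]

    def train(decisions):
        # Learn the historical shortlisting rate for each value of the
        # university feature -- nothing more than the past pattern.
        tallies = {}
        for _years, uni_x, shortlisted in decisions:
            passed, total = tallies.get(uni_x, (0, 0))
            tallies[uni_x] = (passed + int(shortlisted), total + 1)
        return {uni: passed / total for uni, (passed, total) in tallies.items()}

    def score(model, uni_x):
        # Predicted chance of being shortlisted, based purely on history.
        return model.get(uni_x, 0.0)

    model = train(history)
    print(score(model, True))   # 1.0 -- University X graduates always pass
    print(score(model, False))  # 0.0 -- everyone else is screened out
    # The tool has not removed the recruiters' bias; it has automated it.

If the university attended correlates with a protected characteristic such as race or sex, the model discriminates by proxy even though no protected characteristic was ever recorded; and because its predictions feed the next round of screening decisions, the pattern is reinforced rather than corrected.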

Chris Dewey, employment expert with Ward Gethin Archer solicitors, explained: “Employers may be trying to improve their processes or make decision-making more objective. But when machine learning is making predictions based on past screening decisions, it may reinforce the very pattern those employers are trying to change, and so undermine equality and diversity intentions.”

Automated decision-making is restricted under the UK GDPR (Article 22): where a decision may have a significant legal or similar effect, as with an employment decision, it cannot be based solely on automated processing. The problem is also being addressed in draft EU legislation on the use of AI, which sets out requirements for human oversight.

Chris added: “There are many aspects to diversity in the workplace, and making sure that the recruitment process and ongoing opportunities are accessible to all is clearly an important component of that. A discrimination claim at tribunal, where compensation for successful claims is uncapped, is reputationally damaging as well as potentially costly. Failing to comply with data handling requirements can also give rise to large penalties.”

And for the future, with empathic AI on the horizon – and its potential to detect human emotions during an interview – there will be further challenges for both HR and data protection.

Empathic technology will use physiological signals such as heart rate, pupil dilation, blood pressure, flushed cheeks or changes in tone of voice to assess an individual’s emotional state. But while it may be useful to know how an applicant is feeling in an interview, the use of such techniques will take employers into uncharted territory.

HR professionals will have to demonstrate how they are interpreting the information they collect in this way and human oversight is likely to remain essential in any recruitment decision. Data controllers will have to address whether such processing is truly necessary to achieve objectives and demonstrate how empathic AI in recruitment is proportionate.

Chris added: “There’s every expectation that algorithmic bias will become commonplace in future discrimination claims, whilst our employment and data protection legislation tries to catch up.”

For further help and advice, please contact Chris:

  • 01553 667209

This article aims to supply general information, but it is not intended to constitute advice. Every effort is made to ensure that the law referred to is correct at the date of publication and to avoid any statement which may mislead. However, no duty of care is assumed to any person and no liability is accepted for any omission or inaccuracy. Always seek our specific advice.
