Will using AI lead to more discrimination in recruitment?


Recruitment has always been plagued by discrimination. With over a third of adults having experienced some form of discrimination in either the workplace or the recruitment process, it’s an issue that recruiters are continually looking to address.

When new tools and technologies are introduced that supposedly mitigate discrimination in the process, they’re instantly of interest. But they need to be approached with caution as they can cause more issues than they solve. 

That’s certainly the case with AI. More firms are looking to integrate AI into their recruitment processes to improve efficiency and help reduce discrimination. But the risk is that the very tech they’re using to reduce discrimination and bias might actually increase both.

Why is AI being used in recruitment?

AI tech in recruitment isn’t a new thing. Screening software, online interview tools, chatbots and outreach tools have all existed in one form or another for a while now. While they might not be the all-singing, all-dancing AI that immediately springs to mind, they still fall into that category.

With recent developments in AI it seems like a new update is dropping every day. From ChatGPT to analysing and predicting candidate behaviour, there’s a world of possibilities just waiting to be discovered.

The big issue that recruiters are looking to solve with AI is discrimination. We all have our own biases, which we may or may not be aware of. When we start to look through candidate information and make decisions, these unconscious biases can creep in. AI, in theory, eliminates those and uses set criteria and parameters to make unbiased decisions.

The theory is that it levels the playing field and can get recruitment to a stage where the best candidates are up for consideration.

Can AI discriminate? 

In short, yes.

The discrimination isn’t linked so much to how it’s been programmed, but to its inability to read between the lines. Take gendered language. Research shows that women are more likely to downplay their strengths and skills, and as a result they use softer language on their CVs. Men are more likely to use assertive language that’s more relevant to the role. When read by an AI application, the male CV is going to score higher and be put forward to the next round.

Another example is video screenings that analyse applicants’ speech patterns to understand their ability to problem-solve. Those with strong accents or speech impediments are often screened out. Not because of their abilities, but because the right accommodations can’t be made with the technology.

The University of Cambridge also found that AI tech decisions can be influenced by separate factors like lighting, backgrounds, clothing and facial expressions. So when it comes to leaving AI tech to make basic screening decisions, firms are really leaving themselves open to losing quality candidates.

While there’s the risk that the software is programmed to reject applicants with certain qualities, a lack of skills or other characteristics, the other risk with AI is machine learning. AI is constantly learning, and if it’s learning the wrong things it’ll create a feedback loop that exacerbates discrimination in the process without you noticing anything is wrong.
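To make that feedback loop concrete, here’s a deliberately simplified simulation. Everything in it is invented for illustration (the group labels, the starting 5% score boost and the retraining rule are assumptions, not taken from any real system): a screening model starts with a mild learned preference for one group, and each round of “retraining” on its own skewed shortlists strengthens that preference.

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def simulate(rounds=6, applicants=1000, shortlist_size=200):
    # Hypothetical starting point: historical data taught the model a
    # mild preference for group "A" (a 5% score boost).
    weight = {"A": 1.05, "B": 1.0}
    shares = []
    for _ in range(rounds):
        # Each applicant: (group, underlying merit score in [0, 1]).
        pool = [(random.choice("AB"), random.random()) for _ in range(applicants)]
        # The model ranks by merit multiplied by its learned group weight.
        ranked = sorted(pool, key=lambda a: a[1] * weight[a[0]], reverse=True)
        shortlist = ranked[:shortlist_size]
        share_a = sum(1 for g, _ in shortlist if g == "A") / shortlist_size
        shares.append(share_a)
        # "Retraining" on the skewed shortlist reinforces the preference:
        # the more A-heavy the shortlist, the bigger A's boost next round.
        weight["A"] *= 1 + (share_a - 0.5) * 0.5
    return shares

# Group A's share of the shortlist creeps up round after round, even
# though merit is identical across groups by construction.
print([round(s, 2) for s in simulate()])
```

The candidates never change; only the model’s self-reinforcing weighting does. That’s why the bias can grow without anyone noticing anything is wrong.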

Should we use AI in recruitment?

We already are. 83% of employers, including 99% of Fortune 500 companies, now use AI in one way or another in their hiring processes. Asking if we should use AI in recruitment is closing the stable door after the horse has bolted. The question we should be asking is how we should use AI in recruitment.

Using AI puts your recruitment on autopilot: fine for short periods or at certain points in the process, but it shouldn’t be on permanently. Finding the right people for your roles is too important to do without any level of human interaction.

A blanket use of AI for the first stage could result in turning down quality candidates whose CVs don’t hit the right AI marks. They could have had a career break, use gendered language, or have a disability that requires reasonable accommodations which AI can’t cope with. A hybrid AI/human approach will result in a higher-quality pool of screened candidates. Yes, it might take more time at that stage, but overall you’ll save time by not proceeding with the wrong candidates, and your talent pool will be far more diverse.

The future of AI in recruitment

AI is inevitable; we’re already using it and we’ll continue to use it. But for firms looking to protect themselves and their processes from discrimination, it’s obvious that we need to treat it with caution rather than as the solution.

Right now AI is largely unregulated. In the UK the Government has only just published a regulation policy, and the Information Commissioner’s Office is investigating allegations around the negative impact of algorithms in recruitment. So it’ll be a while before any clear regulations come into force to help protect candidates.

Over in the EU the AI Act has come into play. This highlights employment as a high-risk area and lays out strict obligations that AI tech used by an employer needs to comply with before it can be put to market. For UK businesses that operate in the EU this will have implications, and it’s likely that other non-EU countries will follow suit.

You’ve only got to look across the pond to New York to see AI laws coming into play. The Automated Employment Decision Tool law, which has recently come into force, means employers have to tell candidates they’re using AI in their hiring processes. They’ll also have to submit annual independent audits to prove their systems aren’t discriminatory. It hasn’t gone down well with everyone, with public interest groups saying it isn’t enforceable or extensive enough. But as with all initial regulations of new tech or ideas, it’s the first step, and undoubtedly it’ll be amended and improved as time goes on. What it does show, though, is that there’s a recognition of the need for regulation of AI tools in the recruitment process.

For now, UK firms looking to utilise AI should think about the following:

  • What technology they’re using and how it works

  • Which defined part of the recruitment process is AI most suited to

  • Internal policies and procedures around the use of AI

  • Training for those using AI in recruitment

  • Regular tests to check for discriminatory practices in AI tech, whether inbuilt or picked up through machine learning
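That last point can be made concrete. One widely used sanity check, borrowed from US employment guidance, is the “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the tool may be having an adverse impact. A minimal sketch of the arithmetic (the group names and figures below are invented for illustration):

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (number selected, number applied)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True/False per group: does its selection rate reach 80%
    of the best-performing group's rate?"""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top >= threshold for group, rate in rates.items()}

# Hypothetical screening results from an AI tool:
results = {"group_a": (60, 200), "group_b": (30, 200)}
print(four_fifths_check(results))
# group_b's rate (0.15) is half of group_a's (0.30), well below the 0.8 line,
# so prints {'group_a': True, 'group_b': False}
```

Passing a check like this doesn’t prove a tool is fair, but failing it is a clear signal to pause and investigate before the feedback loop entrenches the skew.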

If left untamed, AI will undoubtedly lead to more discrimination in recruitment. With AI quickly embedding itself across all levels of recruitment, the risks are increasing. It’s easy to see why it’s becoming so popular: automating costly and time-consuming processes is a win for any business. But when it comes to recruitment we need a people-first approach.

That means putting the candidate first, with their needs, individual circumstances and adaptations, but also taking a people-first approach to screening and bringing AI in further down the road to request references or automate administrative parts of the process. Relying on AI to screen candidates puts too much at risk and increases the chances of overlooking the best candidates because they don’t tick the AI’s boxes.

Regulation will come, but for some candidates that could be too late. So it’s down to firms and employers to take responsibility for their tech and their processes, to protect less represented groups and give them the chances and opportunities they should have, regardless of the tech used.
