There has been a lot of recent debate regarding the use of artificial intelligence to interview and screen potential candidates. Companies like HireVue claim that their technology can eliminate human bias and candidate interview anxiety, while decreasing cycle time and costs associated with employing recruiters.

According to HireVue, its tool scans each candidate's face as they speak, picking up on cues a human interviewer might miss. These cues, including facial expressions, body language, and eye contact, are measured and categorized for each question.

Would You Trust an AI to Handle Your Interviews?

HireVue claims that this algorithm-based element reduces the risk of a number of biases. An “AI-driven approach mitigates bias by eliminating unreliable and inconsistent variables like selection based on resume and phone screens, allowing you to focus on more job-relevant criteria,” the company says.

Artificial intelligence deployed for interviewing has been receiving plenty of recent press coverage. Unfortunately, much of that coverage is not about AI solving bias, but about AI extending it. A number of recent studies have found that AI may actually increase bias, especially when it comes to facial recognition!

Another Case of Silver Bullet Syndrome

When it comes to recruiting, we have a tendency to place more trust in systems and tools than in humans. Silver bullet syndrome is not uncommon as we try to automate, save time, and increase the reliability of candidate data. The problem with AI systems starts with the information being fed into them in the first place. AI technologies like HireVue benchmark candidates against the organization's previously successful hires. As a result, the AI often learns and reinforces the very biases organizations are trying to eliminate!
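To illustrate the mechanism, here is a deliberately simplified, hypothetical sketch in Python. It is not HireVue's actual system, and the data and group labels are invented; it only shows that a model trained on an organization's past hiring decisions reproduces whatever pattern, including bias, those decisions contain.

```python
# Hypothetical illustration: a classifier trained on biased historical hiring
# decisions reproduces that bias. Not any vendor's real model.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical" candidates: one job-relevant skill score plus a demographic flag.
n = 1000
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)   # two hypothetical demographic groups, 0 and 1

# Suppose past human decisions favored group 0: only skilled group-0 candidates were hired.
hired = ((skill > 0) & (group == 0)).astype(int)

# Train a model on those past decisions, as a benchmarking system would.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score two equally skilled candidates who differ only in group membership.
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The group-1 candidate receives a far lower score: the model has simply
# learned and reproduced the bias baked into the historical data.
```

Nothing in this toy model is malicious; it is just optimizing against past outcomes, which is exactly why biased training data quietly becomes biased screening.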

Dr. Lauren Rhue, an assistant professor of information systems and analytics at Wake Forest University, studied the results of two AI facial recognition systems. Her study found that these systems identified professionals of color as expressing negative emotions two to three times as frequently as their white counterparts. Could this lead to a disproportionate impact on candidates of color? And does this actually translate into the ability to predict candidate performance on the job?

Does AI Invite More Trouble Than It’s Worth?

We have all heard the story of Amazon dropping its AI resume-screening tool after it was found that the technology was less likely to accept profiles featuring the term “women’s.” This unintended bias resulted from patterns the system identified in previously successful employees. Studies have also found that AI analysis of emotion is unreliable, since the same emotional state can be displayed in a variety of ways. Artificial intelligence could be erroneously rejecting applicants based on inaccurate data.

What about the impact of this technology on those with learning difficulties, or those who come from cultures where eye contact and facial expressions are viewed differently? For some, making eye contact during an interview can be anxiety-inducing. Consider those on the autism spectrum, or those from cultures where direct eye contact is reserved for people in positions of power. Could these individuals be disproportionately disadvantaged?

HireVue argues that much of the negative press is based on misconceptions of the underlying technology. The company contends that its AI goes well beyond facial recognition, scoring candidates on additional factors such as language. However, the challenges recently faced by Amazon show that language is itself a problematic signal. It is unlikely that AI is currently able to account for dialect or slang, especially if it bases decisions on predefined keywords or phrases.

Even though AI tools are being trained to understand regional accents, automatically deciding the fate of candidates based on certain language features raises additional questions, since it may create bias against people of certain socio-economic or educational backgrounds. An attentive human recruiter can recognize common nuances in language in a way that AI still cannot.

AI Undoubtedly Has Potential – But HR and TA Should Be Cautious for Now

Here is our current reality: AI might counter some unconscious bias, but because it is a human creation, it has been fed some of the same biases we are trying to avoid! There is no question that both human interviews and AI technologies need to improve; however, practicing Results Based Interviewing™ techniques allows for greater diversity of background when selecting candidates.

Leveraging people analytics along with a consistent, structured interview process still offers the greatest accuracy in predicting human performance and removing both conscious and unconscious bias. Until AI can truly understand the whole human, both their head and their heart, I would recommend that we tread cautiously in this area.