From gender stereotypes to gravitating toward people like yourself to overvaluing first impressions, humans undoubtedly have biases. These biases can have major impacts on hiring at your organization and hinder your ability to hire the right people with the right skills for the job.
To combat this, many companies have turned to AI-powered tools to reduce bias in their hiring process and identify qualified candidates more efficiently and effectively. This raises new questions about bias in hiring: what biases do AI systems have? And how can companies mitigate those biases to promote diversity and inclusivity, while still capitalizing on the benefits of AI innovations?
We held a webinar with Dr. Clementine Collett, BRAID Fellow and AI researcher at Cambridge University, and Adam Vassar, Director of Talent Science at CodeSignal, to dig deeper into these questions. In this blog post, we recap the main takeaways from their conversation.
You can watch the full webinar recording here.
How automation and AI have transformed recruiting
Clementine: “Over the last decade, we’ve seen AI introduced into the hiring process and it has just completely boomed. In 2022, the AI recruitment market was valued at $540 million. COVID-19 also caused a big acceleration in the market because of people being at home and having to hire remotely. It’s a really big market and has completely transformed the way that we recruit. These technologies span four main stages of the pipeline: sourcing, screening, assessment, and selection.
Firstly, sourcing includes systems which scrape data from platforms like LinkedIn or Facebook and match or recommend candidates to jobs or jobs to candidates. This might also take the form of neural network systems which write job descriptions, some of which try to make language as neutral as possible to promote a more diverse applicant pool.
Secondly, we have screening systems. These include systems that screen CVs using machine learning.
Thirdly, assessment systems, like CodeSignal’s, might take the form of games, exercises, or tests which utilize anything from behavioral science to psychometrics to neuroscientific tests. These can use machine learning or basic decision trees to compare candidates to desired profiles or clients’ top performers and recommend the most suitable candidates.
Lastly, selection systems might analyze candidates’ criminal history or their social media, for example, using image recognition to aid with that final decision. AI’s main transformation of recruiting has been automation: largely removing humans from the process and affecting the pipeline entirely.”
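To make the assessment stage Clementine describes concrete, here is a minimal sketch of ranking candidates against a desired profile. Everything in it is hypothetical: the skill dimensions, the scores, and the use of a simple distance metric stand in for the trained machine-learning models or decision trees that real systems would use.

```python
# Hypothetical sketch: rank candidates by closeness to a "desired profile"
# built from top performers. Real assessment systems use trained ML models
# or decision trees; a plain distance metric is used here for illustration.
from math import sqrt

# Hypothetical skill dimensions and target scores (0.0 to 1.0)
DESIRED_PROFILE = {"coding": 0.9, "communication": 0.7, "problem_solving": 0.8}

def similarity(candidate: dict) -> float:
    """Inverse Euclidean distance between a candidate's scores and the profile."""
    dist = sqrt(sum((candidate[k] - v) ** 2 for k, v in DESIRED_PROFILE.items()))
    return 1 / (1 + dist)

def rank_candidates(candidates: dict) -> list:
    """Return candidate names, ordered from most to least similar to the profile."""
    return sorted(candidates, key=lambda name: similarity(candidates[name]), reverse=True)

candidates = {
    "alice": {"coding": 0.85, "communication": 0.75, "problem_solving": 0.8},
    "bob":   {"coding": 0.4,  "communication": 0.9,  "problem_solving": 0.5},
}
print(rank_candidates(candidates))  # alice scores closer to the profile than bob
```

The design choice worth noting is the one both speakers raise: whatever replaces the toy distance metric, the decision rule is explicit and applied identically to every candidate, which is what makes auditing for bias possible.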
How AI tools can reduce bias in hiring
Adam: “In the early days of AI, it was kind of unbridled in its usage—the automation was turned loose. The decision criteria and the guard rails really weren’t there. I think early use of AI got a bit of a bad reputation. But a big part for me is, I want people to embrace it and not be afraid of it. We’ve increased how wide we can cast the net: we can identify more candidates from nontraditional learning backgrounds and get a better representation of underrepresented groups.
Thus far, all we’ve done is evaluate who actually has the skill, and we’re doing so better. We’re not just looking at universities or resumes. We’re giving candidates a chance through AI and through technology to show they have the skill and actually do a simulated version of the job. So that’s a huge benefit to overcome some of the diversity challenges.”
How to mitigate bias in AI systems
Clementine: “I think that there’s never going to be no bias in hiring practices. And I know that’s something that’s often said, but I think that it is true and it’s really important to recognize—but we can try and reduce it.
The other thing is that while these systems can help to reduce bias in some of the ways I was talking about before, it depends so much on the way that the system is designed and the way that it’s used, and those two things together. It’s not impossible that these systems will help us to reduce bias, but it is so dependent on context, how they’re being trained, and how they relate to social realities.
This requires looking at the social assumptions underlying the tests, and talking to design teams at vendor companies about how the tool interacts with society.”
Adam: “I’ve spent a number of years designing behavioral interview guides and programs training on best practices for human interviewers, and building role-play simulations that humans then had to act out in front of a candidate and properly evaluate with notes after the fact. It took six months to do any kind of program, and I was so proud of the content, the spirit, and the context, but always a little disappointed and concerned about the variance in interviewer skill and capability.
People are often fearful of AI bias, but human bias exists in thousands of interviews that will be conducted in the next few hours across the world—not through any maliciousness, just through inconsistent asking of different questions or inconsistent evaluation of answers, or the similar-to-me, different-from-me human biases that Clementine referenced.
To me, AI offers an opportunity to take those tried-and-true approaches—ask this question, evaluate it this way, don’t worry about this part, don’t ask this question—and to really control a lot of that, and use that traditional, proven methodology of validity and reliability. We can train AI with that, as opposed to allowing AI to conjure up decision rules and things that just aren’t fair or don’t make sense for job relatedness. So I see it as an opportunity to use what we’ve come up with in the past, but not count on interviewers to be good roleplay actors or to fight against their own human biases.”
What’s next for AI and hiring
Clementine: “The first thing that comes to my mind is how generative AI will change the way that skills are developed and learned. The use of these systems will change how people acquire skills and complete these tests, and also the job market in general—what skills are needed and what jobs require. It might change how these systems are designed.
In the short term as well, I think another development that will change the landscape of AI recruitment tech is agentic AI: the ability of systems to act proactively rather than reactively, using independent, autonomous capability. I think that would be an interesting development to see how it changes these systems and workplaces in general.
It will be interesting to see how that shapes workforces, or even the way that AI systems are designed and how they function—and I think it could take a slightly unanticipated form. I also hope we will see changes in the design and use of these systems to make them more inclusive, to really think more about these issues, because they are so important and impact people’s lives and livelihoods.”
Learn more
You can watch the full 30-minute conversation about the use of AI in hiring, ethical considerations and challenges, and what the future holds for AI recruitment technology here.