AI in Hiring: A Mirror of Our Own Biases

Bloomberg's testing of AI resume screening turns up some alarming results.

RECRUITING

People walking on big circuit board that has dead end paths.
People walking on big circuit board that has dead end paths.

For years, studies have exposed a troubling reality: applicants with Black-sounding names often face discrimination in the hiring process. From fewer interview callbacks to undervalued qualifications, the bias is clear. Unfortunately, as a recruiting leader focused on DEI, I've seen new evidence that technology, even cutting-edge AI, can perpetuate the same biases.

A recent Bloomberg report on generative AI, specifically OpenAI's GPT-3.5, found that resumes with names associated with white men were consistently ranked higher than those with names associated with Black Americans or Black women. This echoes a disturbing trend that's been present long before AI entered the picture. Studies such as "Are Emily and Greg More Employable than Lakisha and Jamal?"  have repeatedly shown that identical resumes receive significantly fewer callbacks when they have Black-sounding names.

Here's why this matters:

  • AI Reflects Societal Biases Generative AI learns from vast datasets, which can unfortunately reflect and amplify existing societal biases. In hiring, this can lead to unfair screening and the exclusion of qualified candidates, regardless of race or gender.

  • Potential for Discrimination If left unchecked, this bias can disadvantage racial minorities and other qualified candidates based on factors like gender. This can lead to a situation where resumes with white male names are also unfairly screened out, especially compared to qualified Asian women.

However, it's important to consider the limitations of the report. The study focused on a single AI model (GPT-3.5) and used simulated data. Further research across various AI models and real-world scenarios is necessary.

The Bloomberg report serves as a crucial wake-up call. It underscores the need for vigilance when implementing AI in the hiring process. Here are some steps we implement with our clients that might help you combat bias in your process:

Evaluating AI Tools:

  • Transparency: Request transparency reports from AI vendors on how their models are trained and what safeguards are in place to prevent bias.

  • Sample Data: Ask for examples of the data used to train the AI model and assess its diversity across various demographics.

  • Testing: Conduct your own tests with the AI tool using diverse sample resumes to identify any potential biases in its decision-making.

Mitigating Bias During Implementation

Human Oversight: AI tools should be seen as supplements, not replacements for human judgment. Recruiters must remain actively involved in the screening process to identify and override biased recommendations from AI.

Diverse Hiring Teams: Assemble hiring teams with a diverse range of backgrounds and perspectives to help identify and challenge potential biases throughout the process.

Regular Audits: Regularly audit the results of your AI-driven hiring process to monitor for emerging biases and adjust your approach accordingly.

By acknowledging the potential pitfalls of AI and taking proactive steps to mitigate them, we can harness its power to make the hiring process more efficient and inclusive.  Let's keep the conversation going and work together to ensure AI becomes a force for good in the recruiting world.

Further Exploration

While this blog post focused on racial bias highlighted by the Bloomberg report, it's important to acknowledge that AI hiring can also be susceptible to gender bias.  You are invited to work with us at TalentWyze for a more comprehensive understanding of bias in AI recruiting practices. You are also welcome to download our free guide, “Navigating the AI Landscape in Talent.”

References: