Well, AI still hasn’t solved bias in hiring
Understanding AI’s impact on bias remains important as adoption and use cases grow.
The promise of AI in talent acquisition (TA) has long been to reduce time to hire and assist recruiters in making better, fairer decisions at scale.
As HR and TA teams deploy more AI tools for sourcing, screening, and interviewing candidates, HR pros are learning that AI can reduce, reinforce, or even obscure bias, depending on how it’s trained and used.
Across the globe, human recruiters are increasingly making decisions based on insights from AI. And while fewer marquee headlines address how AI tools affect bias in hiring, it remains a critical issue for HR teams and the vendors working to deploy this technology.
“When AI first came out, you heard a lot of conversation around bias and hallucination and all that, and now you hear less about those things and more about all the new tech and all the possibilities,” Daniel Chait, Greenhouse cofounder and CEO, said. But he cautioned that just because the online discourse has moved on from concerns around bias or other AI-related risk, doesn’t mean that HR pros and vendors aren’t still thinking about it.
Some of the historical data used to train AI tools is flawed and biased. (Newsflash: humans can be biased.) Because the models are trained in the context of the human-led hiring process, they sometimes reproduce the same flawed outcomes, but at scale.
Chait suggested that the emerging use-cases, models, and frameworks can deliver amazing unlocks for HR and TA teams, but deploying them should include a pause to understand both the risks as well as the benefits.
But it’s not just data. It’s also you. “Bias emerges not only from the data itself, but also from the dynamic interplay between human behavior and machine learning systems,” according to behavior and motivation scholars Grace Chang and Heidi Grant in the Harvard Business Review.
A 2025 University of Washington study looked at how human-in-the-loop AI use can affect recruiters’ decisions. It found that recruiters who reviewed applicants using LLM-based AI tools with bias built into the models “mirrored” the inequitable choices of the AI up to 90% of the time. But when recruiters made decisions without AI, or with unbiased AI, they chose white and non-white candidates equally, the study found.
“There is a bright side here,” said Kyra Wilson, a UW Information School doctoral student and lead author of the study in a press release. “If we can tune these models appropriately, then it’s more likely that people are going to make unbiased decisions themselves. Our work highlights a few possible paths forward.”
As vendors improve the models and their training, the humans who rely on them (and whose bias is to trust the technology) could, indeed, produce fairer, less biased outcomes.
“We’re absolutely still thinking about this topic, putting a lot of effort and resources behind it,” Chait said. “We have paid a lot of attention to the recent industry lawsuits against Workday and Eightfold and our legal team is providing a lot more oversight and input and advice to our product development teams as we build and launch AI solutions to make sure that we’re not exposing ourselves or our customers to these kinds of risks.”
HR leaders and TA vendors are closely watching a lawsuit against Workday involving AI and bias in hiring. The federal lawsuit alleges Workday’s AI-powered candidate screening tools disproportionately overlooked older applicants and those from other protected groups.
Workday argued that its tools don’t make final hiring decisions and don’t disparately impact job applicants, but the litigation highlights a growing risk of legal exposure for HR teams using AI tools that may unintentionally introduce bias into the hiring process.
A separate lawsuit filed against Eightfold AI highlights different concerns amid allegations that the company’s AI-powered talent intelligence platform can produce biased outcomes in candidate recommendations during the screening processes through its use of secret “dossiers.” The plaintiff claims these are akin to credit reports and background checks, only without the protections to consent to or correct the reports.
These cases signal increasing compliance scrutiny of AI tools, and a growing expectation that HR teams and customers consider how they’re using the tools and what their impacts, negative or positive, could be.
What’s HR to do? Chait told HR Brew that TA pros looking to use recruiting and hiring software that deploys AI should talk to vendors and understand their AI use and priorities. AI is capable of doing some really amazing things to improve the hiring process, but each new feature comes with risks, and Chait said it’s good to understand a vendor’s approach to those risks.
“Don’t just take their word for it,” he said before recommending looking for regularly published audits by well-respected third parties.
He also suggested working with vendors tuned into the evolving compliance landscape. They don’t just need to deliver on current compliance requirements, but plan to incorporate emerging laws and regulations as they’re developed, he said.
A sunny future. The entire hiring process could be reimagined. While bias can occur in résumé screening and in winnowing candidate pools for interviews based on résumés and applications, Chait pointed out that human capacity is actually the limiting factor in deciding who gets an interview. But as AI-enabled hiring processes improve and evolve, everyone could get a fairer shot, because AI—not humans, who require time and salaries—could conduct interviews with all applicants, so no one is screened out.
“I do think that the promise of these technologies is so great that if you do it from a principled perspective, I think we can achieve all kinds of good stuff,” Chait said. “I think we can achieve better experiences, better decisions, faster, more efficient processes, and increase fairness and transparency at the same time.”
About the author
Adam DeRose
Adam DeRose is a senior reporter for HR Brew covering tech and compliance.