What a landmark AI hiring bias lawsuit means for employers and candidates
What companies can and cannot do with AI tools in their hiring processes is becoming clearer following a landmark case settled in New York earlier this month.
The Equal Employment Opportunity Commission filed suit against a China-based online tutoring company for allegedly using an AI tool that automatically filtered out and rejected about 200 female applicants over the age of 55 and male applicants over the age of 60.
One applicant discovered the practice when she was quickly rejected but then submitted the same resume with a different birthdate — which got her an interview — according to the suit, which was filed in the U.S. District Court for the Eastern District of New York.
The company agreed this month to pay $365,000, though it admitted no wrongdoing. It will also create new anti-discrimination policies, reconsider all of the wrongfully rejected applicants and divide the settlement amount among them.
“This is as obvious as a human interviewing candidates and rejecting older applicants over younger ones,” Natalie Pierce, partner at Gunderson Dettmer and chair of the firm’s employment and labor practice, said. “But AI, not a human being, did the screening,” she added.
Why is this significant?
This is the first time the EEOC — the U.S. government agency that investigates employment discrimination complaints — has settled a case involving biased AI tools used in hiring processes. It comes as rapid AI adoption continues and regulators around the world look to clamp down on unfettered use of the tools.
This summer the first U.S. law covering AI hiring tools and bias went into effect in New York City. It requires a bias audit on any automated employment decision tool that scores or ranks applicants before it’s used, and companies must notify applicants that they’re using the tools.
Ultimately, though, the issue at hand in this case is long-standing federal rules banning employers from making hiring decisions based on characteristics like an applicant’s age, sex or race.
“It really shouldn’t be considered a new responsibility of employers but just an extension of what we already know,” said Sara Gutierrez, chief science officer at SHL, a workforce data and research firm.
What does this mean for businesses?
“Employers should use this as an opportunity to review their own hiring practices and how they utilize the tools that they have,” said Nicolette Nowak, associate general counsel and data protection officer at Beamery, a talent management and recruiting software company.
Today’s online hiring process means employers, especially major ones, are often flooded with applications, making AI tools to sift through resumes increasingly necessary. That’s also true as the Great Resignation and the ensuing fierce competition for talent have led many employers to tweak their hiring and recruiting policies so they can extend offers to top candidates faster.
More than 60% of companies use AI tools to screen resumes and automatically filter out unqualified applicants, according to a February 2022 survey of more than 1,600 respondents by the Society for Human Resource Management (SHRM). And almost 40% said their tools give a ranking or percentage match for each applicant.
“The lesson for employers using screening tools, AI-powered or not, is to find ways to make sure that these tools aren’t running afoul of existing laws,” Pierce said.
Hiring discrimination cases are generally difficult to prove because applicants are mostly unaware of the behind-the-scenes process and criteria used to screen them. NYC’s law, however, requires employers to disclose in job postings when they’re using AI in the hiring process, she said.
Earlier this year Pierce and her team sent an alert to clients notifying them about NYC’s new law and guidelines around using AI tools in their hiring processes. That law offers a good framework for companies to follow whether they employ people there or not, she said.
What does this say about AI’s flaws?
This May, the EEOC warned employers of their ongoing responsibility to comply with federal anti-discrimination laws, even when using new AI tools to help speed up their hiring processes.
“I think the biggest outcome of this is employers can’t hide behind vendors for their discriminatory hiring practices,” Nowak said.
But the EEOC guidance and now this settlement are bringing growing awareness to the importance of using quality algorithms.
“Certainly there are going to be tools out there that aren’t designed with intention and care and they will be flawed, but it’s really the quality of the AI model,” Gutierrez said.
About 70% of companies source the AI tools they use in hiring and other HR processes from a single vendor, according to the SHRM survey. And 24% said some of their tools were purchased from a vendor and others were created in-house, while 8% said their tools were developed exclusively in-house.
Among respondents using vendor-purchased tools, only 40% said they totally trust those tools to prevent or protect against bias, the survey found.
“You have to really think through the ethical part of your AI application early on in the process,” said Milena Berry, CEO and co-founder of PowerToFly, a diversity recruiting and retention platform.