REGULATING AI IN HIRING

“I think it’s a very stressful time inside of modern HR departments. There’s a lot of questions about whether they should turn off the (AI) tool or even use the automated decisioning tool.”

Artificial Intelligence (AI) is making its way into just about every facet of our lives, from deciding whether our car turns right or proceeds straight ahead, to shaping what purchases we make, to determining whether our social media posts get published.

But now, AI may very well be deciding whether you get the job you’ve just applied for. An increasing number of HR departments and job recruiting services are employing AI for everything from resume screening to conducting the virtual job interview itself. Such a system should take any hiring bias out of the equation, right? In fact, it is precisely the possibility (some say the likelihood) of built-in bias that has critics urging employers to proceed with caution. In New York City, that caution has taken the form of a stop sign. Below, we examine the pros, the cons, and why AI hiring needs, at the very least, to be regulated.

Bigoted Bots?

The theory behind AI hiring, like the theory behind putting AI in control of many of life’s other decision-making processes, is that an algorithm will always be a more efficient, faster, and more accurate arbiter of our decisions than any human being could possibly be. Efficiency and speed may well be valid arguments, but accuracy in decision-making? And without the same sort of bias (or worse) that a human might possess? Bias built into the bot is just one of the concerns being voiced over the use of AI in hiring; another is defining just what constitutes an Automated Employment Decision Tool (AEDT).

Integration with HR and Recruitment

A 2022 survey conducted by the Society for Human Resource Management (SHRM) found that 79% of employers were using AI and/or automation for recruitment and hiring, while a March 2023 report by HR tech company Phenom found that 88% of companies globally already use AI in some way for HR. In China, the use of AI for HR is reportedly already at 100%.

A February 2023 ClearStar survey of 1,000 U.S. business executives found that 49% acknowledged using the AI chatbot ChatGPT as part of their hiring process; however, that use was largely confined to writing job descriptions, drafting interview requisitions, and responding to applicants. It was not clear whether ChatGPT was actually involved in hiring decisions.

Not Everyone Onboard for AI Onboarding

Yet, despite much hoopla over having an algorithm give the thumbs up or thumbs down on a job applicant, the technology also faces some very stiff criticism. First, there is the human/machine bias problem: AI algorithms are typically trained on past data, and the bias inherent in the people who select and feed that data has been cited as a serious flaw whenever AI is substituted for human judgment in moral and other judgment calls. Faulty assumptions by those who prepare the training data produce faulty results from the AI. With hiring in particular, if the historical data itself reflects problems with matters such as diversity, then training an algorithm on that hiring data can be a colossal mistake.
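To make this concrete, below is a minimal, purely illustrative Python sketch (the data, column names, and model choice are all hypothetical) of how a classifier trained on historically skewed hiring decisions simply learns to reproduce that skew:

# Illustrative only: a model trained on biased historical outcomes learns the bias.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: past recruiters favored group "A", so otherwise
# identical candidates received different hiring outcomes.
history = pd.DataFrame({
    "years_experience": [5, 5, 3, 3, 7, 7, 4, 4],
    "group":            ["A", "B", "A", "B", "A", "B", "A", "B"],
    "hired":            [1, 0, 1, 0, 1, 0, 1, 0],   # the skewed labels
})

X = pd.get_dummies(history[["years_experience", "group"]])
model = LogisticRegression().fit(X, history["hired"])

# Two new applicants with identical experience, differing only by group.
applicants = pd.DataFrame({"years_experience": [5, 5], "group": ["A", "B"]})
X_new = pd.get_dummies(applicants).reindex(columns=X.columns, fill_value=0)

print(model.predict(X_new))   # the group "B" applicant is likely screened out

Nothing about the algorithm is malicious; it has simply been taught that the historical pattern is the correct one.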

Measuring Intangibles

Aside from the discovery that AI can have a hard time with unconventional resume formats that use a different font, layout, or method of organization than what the algorithm was trained on, a problem with applicant screening itself has been detected. Candidates have been rejected based on the programmed criteria of the job description even where the candidate’s experience and skill set more than compensate for the detected shortfall.
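That rigidity is easy to reproduce even without machine learning. The following hypothetical Python sketch (the cutoff, field names, and candidate profile are invented for illustration) shows how a hard-coded screening criterion rejects a candidate whose broader profile more than makes up for the gap:

# Hypothetical rule-based screen: a single hard cutoff, taken literally from the
# job description, decides the outcome with no notion of compensating experience.
REQUIRED_YEARS = 5

def passes_screen(candidate: dict) -> bool:
    # Only the programmed criterion is checked.
    return candidate["years_in_title"] >= REQUIRED_YEARS

candidate = {
    "years_in_title": 4,          # one year short of the cutoff...
    "years_adjacent_role": 8,     # ...despite extensive related experience
    "relevant_certifications": 3, # ...and strong supporting credentials
}

print(passes_screen(candidate))   # False: rejected on the single criterion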

Bots can also get stuck on questions that a human interviewer would comprehend contextually, or would simply know to skip before moving on to the next one. Thus far, AI has not been very successful at reading human emotions or at detecting what is meant by slang expressions. It can do a fine job of processing measurable data such as years of experience and education level, but not intangible qualities such as personality, soft skills, and cultural fit within an organization. Perhaps that is why a solid 88% of candidates have responded that they much prefer to be interviewed by a human with whom they can interact.

NYC and LL144

In 2021, New York City passed Local Law 144 (LL144), which requires employers in the city to notify NYC-based job candidates in advance that the employer uses automated hiring tools, and to conduct an annual bias audit of each tool to measure any adverse impact on candidates based on their race, ethnicity, or sex. Although the law was supposed to take effect at the beginning of 2023, the city’s Department of Consumer and Worker Protection repeatedly delayed the roll-out in order to determine which AI tools should qualify and what the bias audit should entail. Enforcement is now set to begin on July 5, but the delays have not eliminated the uncertainty surrounding the new law, and the final rules continue to stir up criticism, and confusion, among employers and employee advocates alike.
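For context, the bias audits contemplated by LL144 revolve around selection rates and impact ratios across demographic categories. The simplified Python sketch below (the counts are made up, and real audits involve more categories and intersectional breakdowns) shows the basic arithmetic, where an impact ratio well below 1.0 for a category is a signal of possible adverse impact:

# Simplified, illustrative impact-ratio calculation with made-up numbers.
# An LL144-style bias audit compares selection rates across demographic categories.
applicants_selected = {
    # category: (applicants assessed by the tool, applicants it selected)
    "category_1": (200, 60),
    "category_2": (180, 27),
    "category_3": (150, 45),
}

selection_rates = {
    cat: selected / total for cat, (total, selected) in applicants_selected.items()
}
highest_rate = max(selection_rates.values())

for cat, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    # Ratios well below 1.0 (often flagged below the 0.8 "four-fifths" benchmark)
    # suggest the tool selects that category at a disproportionately low rate.
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}")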

The passage of the local law raises an obvious question: if the intention of employing AI was to remove human bias from hiring, why is a law needed to check whether the AI itself is biased? And how could a legislative body pass a law without knowing, with any degree of certainty, what technology would be regulated under it?

What the Futurists Say

Interestingly, although a World Economic Forum (WEF) report on AI hiring noted that companies are increasingly recruiting staff using AI-based algorithms, the same report underscored the ‘introduction of bias and the perpetuation of disparities’ in hiring and the lack of reliability in such systems. Rather than denounce AI hiring altogether, however, the WEF has called for greater worldwide regulation of the technology as applied to hiring. And until AI hiring tools are better regulated, job seekers need strategies for getting through AI-powered processes, such as using the technology themselves to screen their resumes before sending them.

Yet many Fortune 500 companies will continue to employ AI-based solutions, if for no other reason than that they need to weed through the millions of job applications they receive every year. The problem, however, is that many companies do not want to reveal what technology they are using, and vendors do not want to reveal what is inside the ‘black box’. If that is what the future holds for HR, then clearly, AI hiring algorithms, and the companies that use them, need to be held accountable for the consequences of employing a still-unproven hiring machine.

Executive Summary

The Issue

What are the pros and cons surrounding AI hiring?

The Gravamen

Although AI technology may be helpful in sorting through myriad resumes, the bias kinks it was designed to eliminate have yet to be worked out.

The Path Forward

AI ‘black boxes’ need to be better regulated, but first, the question of what technology constitutes AI hiring (AEDT) needs to be refined.

Action Items

Which Functions To Automate?:

‘Automation’ is a broad term, and an organization needs to first determine exactly where AI will fit into its overall HR approach.

Screening Resumes:

Technology exists for the initial screening of resumes by way of AI, and an organization that needs to review gigabytes of resumes will likely need some form of AI assistance.

Define Firm’s AEDT:

Several AEDT technologies exist, but not all of them are suitable for every organization; decide what AEDT means for your firm’s HR purposes.

Wait and See:

Rather than risk liability exposure from hiring bias that results from using AI for all hiring functions, it may be best not to be an early adopter and instead to wait and see how AI hiring plays out in NYC and elsewhere.

Further Readings

  1. https://www.forbes.com/sites/forbestechcouncil/2022/04/12/five-potential-pitfalls-when-using-ai-for-hiring-and-how-to-avoid-them/?sh=150323ed119c
  2. https://www.cityandstateny.com/policy/2023/01/nycs-law-prevent-artificial-intelligence-bias-hiring-limbo/382106/
  3. https://fortune.com/2023/03/13/artificial-intelligence-make-workplace-decisions-human-intelligence-remains-vital-careers-tech-gary-friedman/
  4. https://www.weforum.org/agenda/2022/12/ai-hiring-tackle-algorithms-employment-job/
  5. https://www.phenom.com/blog/recruiting-ai-guide
