The Rose: Artificial Intelligence in the Current Hiring Process

By Denise Han

With companies like HireVue, SkillSurvey, and Fama boasting great success in improving the recruitment process through artificial intelligence, you might be wondering how to capitalize on these innovative HR trends. After all, your industry peers are finding that implementing this software enables highly efficient processing, the capturing of more intangible human qualities, and more successful job matching. However, here is a word of caution: as artificial intelligence becomes the norm in today’s business world, HR departments and recruiters must recognize, and work to minimize, the challenges that these tools will inevitably bring.

This article discusses artificial intelligence (AI) in recruiting, including what might prove to be a tremendous asset to your firm, what challenges may arise, and what consequences could potentially fester when a firm does not adequately address these challenges.


AI firms market many benefits of their software that specializes in screening prospective employees. At the forefront is their products’ ability to eliminate human bias from the recruitment process, creating more “diverse, empathetic, and dynamic workplaces,” according to Fast Company.1 This new technology is revolutionizing the way firms recruit new talent, and for good reason. The algorithms capture characteristics of applicants that a human evaluator may not easily discern. Fortune attributes these algorithms’ success to their use of “natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture.”2 Since implementing these algorithms, companies have reported decreases in employee termination, employee resignation, and the time employers spend combing through vast amounts of employee data.3 Many firms now clamor to harness this new power to reduce the influence of human recruiters’ subjective opinions.


But if you’re thinking this new technology sounds too good to be true, you’d be correct. Companies must still understand the algorithms well enough to avoid accidental discrimination and bias. According to the ABA Journal of Labor and Employment Law, people generally assume that if recruiters don’t input demographic information into the algorithms, the computer won’t have a basis for discrimination; however, this is a misconception.4 Amazon is a high-profile example of an AI recruiting tool gone awry. The well-known online retailer decided to scrap its tool after it began to discriminate against women through no fault of any specific input. Because prior employee demographic data reflected a pre-existing predominance of men, the algorithm simply taught itself to penalize resumes containing any indication that an applicant was female. Even after editing the program to make it neutral, Amazon could not guarantee that the algorithm would not devise other discriminatory sorting methods.5

Cases such as Amazon’s have fueled a growing demand for transparency in AI. This cautionary tale teaches several lessons; most alarmingly, not even the developers of the software fully understand how the algorithm works. The result is a black-box solution whose ethics deserve serious scrutiny. Dr. Tomas Chamorro-Premuzic, a professor of psychology at UCL and Columbia University, emphasizes that, currently, no one understands exactly how “biological predispositions translate into different levels of performance, and the degree to which individuals are free to escape their genetic fate.”6 Basing recruitment decisions on such incomprehensible data would therefore be unethical and unjustifiable. An article in Forbes states that even if companies have no intention of harming or disadvantaging candidates in any way, algorithms may make these companies unwilling accomplices in the software’s biased decision-making process.7

In addition to the difficulty of pinpointing exactly how algorithms introduce bias, any existing data used to train these programs is contaminated because it originates from humans and their biases. As programmers train AI to mimic the human decision-making process, they only perpetuate the shortcomings of the human mind.8

If, however, the benefits of AI in hiring still outweigh the costs for your company, your human resources department can implement certain measures to mitigate the risks of unintended bias. First, Julie Fernandez writes in the Strategic HR Review that HR must conduct due diligence on the automated program before fully integrating it into the recruiting process.9 Jim Torresen, a professor of robotics and intelligent services at the University of Oslo, affirms, “It should be possible to inspect the AI system, so if it comes up with a strange or incorrect action, we can determine the cause and correct the system so that the same thing does not happen again.”10 Thorough analysis of the software and an understanding of its logic may protect against allegations of discrimination: should a question arise, companies can explain the rationale behind the decision to hire or not to hire. Through careful due diligence, companies can also more effectively prevent accidental bias before it affects applicants. Due diligence might even give potential employees the opportunity to alter existing habits that could impede their eligibility for a certain occupation.11

Second, Morgan McKinley recommends that technologists receive ethics training, or that HR organize an ethics committee to review the algorithm and confirm that bias has been minimized.12 This specialized team can then monitor AI in the hiring process and check for unethical practices. As AI is applied to other business processes, the ethics committee might review those as well to shield the company from liability. With technology quickly taking over mundane tasks, such a system of checks and balances could greatly benefit the company; if ignored, the complexities of integration could well outweigh the benefits of the technology.


Ultimately, you must decide whether AI is right for your specific company. If the prospects look favorable, you must then decide whether the benefits are worth the hassle, because as much as these software companies paint a rosy picture of their products, you must always remember the thorns that come with them. Failure to enact preventative measures against discrimination may compromise the success of your company.

The reason behind these preventative measures is uncertainty—specifically, legal uncertainty. Because artificial intelligence is such cutting-edge technology, the legal sphere has no clear precedents for determining accountability, intent, fault, or damages.13 In fact, the current assumption holds that “[l]iability for any discriminatory effects of the algorithm’s use falls on the employer[, who]…may be penalized for disparate impacts they could not avoid.”14 For example, employers might assume that employees who don’t need to commute long distances work more productively, perhaps because less commute time equates to more sleep and less stress on the road; this is fair, right? Acting on that assumption, the employers use the algorithm to screen for applicants who live in two specific neighborhoods. But if those two areas happen to house predominantly wealthy, Caucasian, college-educated applicants, the company becomes susceptible to Title VII lawsuits.15 AI currently cannot shield employers from legal liability because algorithms themselves cannot be held accountable; responsibility ultimately falls on the humans who deploy them. Additionally, no one can predict what legal infrastructure lawmakers may eventually build around AI, or what unfavorable ramifications it might bring.


As your company decides if using AI as a recruiting tool might increase effectiveness, don’t just smell the roses. Consider the thorns and how much they might prick. Fernandez again warns that even a hint of bias may “cast doubt on company practices, intent, and reputation, which in people matters can have grave consequences.”17 Your company might still choose to brave the thorns, but in that case, you must use careful deliberation and ethical precautions. Only then will you be able to nip those withering effects in the bud and truly enjoy the rose.

1Sean Captain, “Not-So-Human Resources,” Fast Company, no. 209 (October 2016): 44.

2Jennifer Alsever, “Where Does the Algorithm See You in 10 Years?” Fortune 175, no. 7 (2017): 74–79.

3Alsever, “Where Does the Algorithm See You in 10 Years?” 79.

4Darrell S. Gay and Abigail M. Kagan, “Big Data and Employment Law: What Employers and Their Legal Counsel Need to Know,” ABA Journal of Labor & Employment Law 33, no. 2 (2018): 197.

5Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Accessed May 24, 2019.

6Tomas Chamorro-Premuzic, “Four Unethical Uses of AI in Recruitment,” Accessed May 24, 2019.

7Chamorro-Premuzic, “Four Unethical Uses of AI in Recruitment.”

8Ibid.

9Julie Fernandez, “The ball of wax we call HR analytics,” Strategic HR Review 18, no. 1 (2018): 24.

10Jim Torresen, “A Review of Future and Ethical Perspectives of Robotics and AI,” Frontiers in Robotics and AI 4, no. 75 (2018).

11Chamorro-Premuzic, “Four Unethical Uses of AI in Recruitment.”

12David Leithead, “The risks of Artificial Intelligence recruitment tools,” Morgan McKinley, Accessed May 24, 2019.

13Fernandez, “The ball of wax,” 24.

14Gay and Kagan, “Big Data and Employment Law,” 198.


16Alsever, “Where Does the Algorithm See You in 10 Years?” 78.

17Fernandez, “The ball of wax,” 24.
