Artificial Intelligence in Hiring
Eliza Davis
Artificial intelligence has become increasingly common in the human resources industry. AI-driven tools like chatbots, application screeners, and video interview platforms have allowed companies to improve the efficiency of their hiring processes, which saves them money and frees up human resources workers’ valuable time for more creative endeavors (Mashelkar 2018). Artificial intelligence has also been applauded for its potential to reduce bias in hiring. This is particularly important because human bias in the hiring process is a long-documented problem that transcends industries. For example, a 2016 study conducted in Germany showed that applicants with foreign-sounding names were less likely to get interviews than those with traditional German names, and that applicants wearing headscarves needed to apply to four times as many jobs to get an interview (Weichselbaumer 2019). An economic study from 2019 documented the persistent employment gap between African Americans and their white counterparts with the same level of educational attainment. That study also cited experiments showing that applicants with names typically associated with African Americans are passed over in favor of applicants with white-sounding names, even when identical resumes are submitted (Rodgers 2019). These studies reveal a consistent and alarming pattern of implicit bias.
Artificial intelligence offers the promise of removing these biases and eliminating human error. AI algorithms use set criteria to evaluate applicants and make decisions based on data. AI evaluates every candidate the same way, whether it reviews them first thing in the morning, after the lunch it did not have to take, or in the middle of the night. AI does not differentiate between Jake and Jakking, nor does it care whether the applicant wears a headscarf (Weichselbaumer 2019). On its face, AI solves discrimination in hiring by introducing a more calculated, data-driven approach (Reynolds et al. 2020).
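To make that concrete, consider a minimal sketch of a rule-based screener. The field names and thresholds below are hypothetical and are not drawn from any real vendor’s system; the point is only that fixed criteria, applied mechanically, never consult the applicant’s name and never tire.

    # A hypothetical rule-based screener: fixed criteria, applied identically
    # to every application; the applicant's name is never consulted.
    def screen(application: dict) -> bool:
        """Return True if the application meets every fixed criterion."""
        return (
            application["years_experience"] >= 3
            and application["degree"] in {"BA", "BS", "MA", "MS"}
            and "python" in application["skills"]
        )

    applicants = [
        {"name": "Jake",    "years_experience": 5, "degree": "BS", "skills": {"python", "sql"}},
        {"name": "Jakking", "years_experience": 5, "degree": "BS", "skills": {"python", "sql"}},
    ]

    # Identical qualifications yield identical decisions, at any hour of the day.
    print([screen(a) for a in applicants])  # [True, True]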
However, the extent to which artificial intelligence eliminates discrimination depends on the data it is fed. Companies often train artificial intelligence to recruit people like their current employees. It is logical, after all, that if someone is a good fit for the company, someone like them might also be. This becomes problematic, however, if a company’s workforce is, say, heavily male or almost entirely white. Through seemingly neutral criteria, the hiring process can become skewed so that applicants who share backgrounds with current staff are favored (Kim 2019).
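A small sketch with made-up data illustrates how a seemingly neutral criterion can carry this skew. Suppose an algorithm scores applicants by how closely they resemble current employees on a single “neutral” attribute, the university they attended. If the existing staff came overwhelmingly from two schools, equally qualified applicants from other backgrounds are quietly pushed down the list.

    # Hypothetical data: the current workforce is drawn from just two schools.
    current_staff = [
        {"university": "State Tech"},
        {"university": "State Tech"},
        {"university": "Riverside U"},
    ]

    def similarity_score(applicant: dict) -> float:
        """Fraction of current employees who share the applicant's university."""
        matches = sum(1 for employee in current_staff
                      if employee["university"] == applicant["university"])
        return matches / len(current_staff)

    applicants = [
        {"name": "A", "university": "State Tech"},    # shares the dominant background
        {"name": "B", "university": "City College"},  # equally qualified, different background
    ]

    for applicant in applicants:
        print(applicant["name"], round(similarity_score(applicant), 2))
    # A 0.67  -> favored by the "neutral" criterion
    # B 0.0   -> ranked last despite identical qualifications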
In many industries, it is hard to quantify what makes a good employee. In the absence of other forms of empirical data, performance reviews are sometimes used to train algorithms. Again, this is good in theory: you are training the algorithm to select applicants who resemble your best employees or who differ from employees who are struggling. The caveat is that studies have revealed that men receive higher ratings than women on performance reviews and that white employees are rated higher than employees of color. Because they are subjective in nature, performance reviews reintroduce human bias (Rodgers 2019); the sketch below shows how quickly a biased rating scheme distorts the labels an algorithm learns from. The potential for AI to perpetuate bias gives me pause, but it does not lead me to believe we should refrain from using AI in the hiring process. Acknowledging and accounting for these potential biases is crucial so that we can ethically enjoy the improved efficiency AI offers.
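As a final illustration, the following sketch uses synthetic numbers invented for this example, not figures from any study. Two groups of employees have identical underlying productivity, but one group’s subjective review scores are shifted down by half a point. When “top performer” labels are derived from those reviews, the penalized group receives far fewer positive labels, and any model trained on them inherits that gap before it ever sees an applicant.

    import random

    random.seed(0)

    # Synthetic workforce: identical true productivity in both groups, but
    # group_b's subjective review scores are shifted down by 0.5 points.
    employees = []
    for group, review_penalty in [("group_a", 0.0), ("group_b", 0.5)]:
        for _ in range(1000):
            true_productivity = random.gauss(3.8, 0.6)          # same distribution for both groups
            review_score = true_productivity - review_penalty   # rater bias enters here
            employees.append({"group": group, "top_performer": review_score >= 4.0})

    for group in ("group_a", "group_b"):
        labels = [e["top_performer"] for e in employees if e["group"] == group]
        print(group, f"share labeled 'top performer': {sum(labels) / len(labels):.2f}")
    # Roughly 0.37 for group_a versus 0.12 for group_b, despite identical productivity.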
Kim, Pauline. 2019. “Big Data and Artificial Intelligence: New Challenges for Workplace Equality.” University of Louisville Law Review 57(2):313–28.
Mashelkar, R. A. 2018. “Exponential Technology, Industry 4.0 and Future of Jobs in India.” Review of Market Integration 10(2):138–57.
Reynolds, Tania, Luke Zhu, Karl Aquino, and Brendan Strejcek. 2020. “Dual Pathways to Bias: Evaluators’ Ideology and Ressentiment Independently Predict Racial Discrimination in Hiring Contexts.” Journal of Applied Psychology 1–19.
Rodgers, William. 2019. “Race in the Labor Market: The Role of Equal Employment Opportunity and Other Policies.” RSF: The Russell Sage Foundation Journal of the Social Sciences 5(5):198–222.
Weichselbaumer, Doris. 2019. “Multiple Discrimination against Female Immigrants Wearing Headscarves.” ILR Review 73(3):600–627.