Ethical evaluation of the use of AI in recruitment and hiring

Tapaswini Gnanasundaram

Transcription

The podcast discusses the use of AI in recruitment and hiring. While the idea of using AI to eliminate bias and improve efficiency seems promising, there are significant ethical risks and challenges. AI programs used in resume screening may unintentionally favor applicants who resemble current employees, leading to algorithmic bias. Amazon's AI hiring tool showed bias against women because it was trained using past resumes from a male-dominated industry. Online job platforms can reinforce gender and racial stereotypes. While algorithmic biases may be easier to detect and remove than human biases, there is a lack of transparency and accountability in the use of AI. Applicants may not be aware of AI usage and have limited information about how their data is stored and used. The GDPR requires an explanation when AI is used, but existing AI systems do not provide explainable results. It is important for companies to invest in training managers and employees to use AI correctly and to detect and mitigate its risks.

Hello and welcome everyone. My name is Tapaswini and this is the Ethics of AI podcast. Today's topic of discussion is the use of AI in recruitment and hiring. Almost everyone has applied or will apply for a job, so undoubtedly we all have a large stake in recruitment, and it is worth discussing how companies understand and manage the ethical risks of using AI as part of their hiring practice. Let's begin by considering the purpose of recruiting companies and why the idea of using AI came about. The aim of a recruiter is to hire the best possible candidate for a job, and in an ideal fairytale world, all applicants would be judged in a consistent manner, solely on their skills and qualifications, with zero interpersonal bias. But the harsh reality is that a potentially strong candidate could be failed simply because of the way they look, the way they talk, or even their name. I once sought advice from someone who works in HR, and they told me I should not use my full name in CVs, as long foreign surnames could potentially deter recruiters. This is blatantly unfair. But the recruiter might not actively be trying to discriminate: all humans have an incurable unconscious bias, and it is because of this that recruitment companies have turned to using AI in the hope of making better assessments.

The premise is that an AI program could be designed to replicate and improve the existing screening process, and because it is not, quote-unquote, human, it has no human bias. On the surface, this is a great idea, a win-win situation: we get better assessments, and recruitment companies become far more efficient and accurate. But this is all under the unlikely assumption that the design and application are faultless. Current applications of AI include, but are not limited to, resume screening, video interviews, and scouting potential candidates online. Let's look at the resume screening stage. Here, the AI is taught to parse through candidate CVs looking for certain buzzwords, whether that be their qualifications, extracurricular interests, or how they describe their role at a previous job. While in essence the AI is carrying out the same process that human recruiters follow, albeit more efficiently, it does not actually understand the task at hand. Many of these AI programs are trained using data from current or previous employees of the company. The AI is therefore not looking for the objectively best applicant, but simply for applications that contain the same buzzwords as current employees' resumes, which it correlates with being better.
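To make that failure mode concrete, here is a minimal, hypothetical sketch (in Python) of the kind of keyword-overlap screening described above. The resumes, vocabulary, and scoring rule are invented for illustration and are not taken from any real system; the point is simply that the score rewards resemblance to the incumbent workforce rather than objective merit.

```python
from collections import Counter

def build_keyword_profile(employee_resumes):
    """Collect the terms that appear across current employees' resumes."""
    counts = Counter()
    for text in employee_resumes:
        counts.update(text.lower().split())
    return counts

def score_applicant(applicant_resume, profile):
    """Score an applicant by overlap with the incumbent-employee vocabulary.

    Note that this rewards resembling existing staff, not being objectively
    better: terms current employees never used contribute nothing.
    """
    words = set(applicant_resume.lower().split())
    return sum(profile[w] for w in words if w in profile)

# Toy data: if current employees share a narrow background, applicants
# from other backgrounds score low purely by construction.
employees = [
    "rugby captain executed java projects",
    "chess society president executed java projects",
]
profile = build_keyword_profile(employees)
print(score_applicant("executed java projects and rugby at university", profile))
print(score_applicant("led python projects and a community coding group", profile))
```

Even with identical underlying ability, the second applicant scores lower here simply because their wording does not match the existing employees' resumes.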
Not only are these programs poorly designed, but they can also create another form of bias: algorithmic bias. If existing employees are not proportionally representative of the broader applicant pool, the algorithm will unintentionally be biased against those from underrepresented groups. We can see a real-world example of this kind of algorithmic bias with Amazon. In 2018, they came under scrutiny when they discovered that their new AI hiring tool was biased in favor of men. Their goal was to develop AI that could search the web and spot potential candidates worth recruiting. Their AI team created 500 computer models focused on specific job functions and locations, and they taught each to recognize terms that showed up on past candidates' resumes. The algorithm also learned to assign little significance to skills that are common across most IT applicants, such as the ability to write code in various programming languages. However, a year into its development, the company realized its new system was not rating candidates in a gender-neutral manner, specifically for technical posts. That is because Amazon's AI was trained using resumes submitted over the previous ten-year period. Most of these resumes came from men, a reflection of male dominance across the tech industry. In effect, Amazon's AI taught itself that male candidates were preferable: it penalized resumes that included the word "women's", for example, and demoted graduates of two all-women's colleges. The AI also favored candidates who described themselves using verbs more commonly found on male engineers' resumes, such as "executed" and "captured". Amazon did well to recognize this inaccuracy and did attempt to fix the issue by editing the program to make it neutral to these particular terms, but the project was ultimately shut down.

This was a case of unintentional bias caused by poor training data, but there are also examples of AI programs that are biased simply by design. A biased design can be seen, for example, on online job platforms that make superficial predictions, focusing not on who will actually be successful in the role but on who is most likely to click on the job ad. This is unethical by design, as it can lead to a reinforcement of gender and racial stereotypes. Studies found that targeted ads on Facebook for supermarket cashier positions were shown to an audience that was 85% women, reinforcing the stereotype that women are better suited to non-technical positions. In both these cases, it is clear that algorithms can introduce bias and even magnify discrimination, affecting entire classes of individuals.

Proponents of AI recruiting tools admit that there is a possibility of unintended consequences. However, they state that compared with human biases, algorithmic biases are much easier to detect and remove. While this may be true, from the perspective of applicants there is still a power asymmetry and a clear lack of transparency. First off, the use of AI is not always proactively communicated to applicants. Furthermore, they have limited information about how their data is stored and used. By law, the GDPR does state that users must give informed consent for the use of their personal data, but in certain cases applicants might worry that not giving access could negatively affect their chances. Secondly, there is an obfuscation of accountability. If an algorithm is found to have made inaccurate, potentially discriminatory decisions, who should be held accountable for it?
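To illustrate the mechanism behind the Amazon case in a simplified way, here is a hypothetical sketch of term weights learned from a deliberately skewed, synthetic hiring history. The data and the log-odds weighting below are assumptions made for illustration, not Amazon's actual method; they only show how terms correlated with an underrepresented group can pick up negative weight without anyone intending it.

```python
import math
from collections import Counter

def term_weights(resumes, hired_flags, smoothing=1.0):
    """Smoothed log-odds of each term appearing in hired vs. rejected resumes."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in zip(resumes, hired_flags):
        (hired if was_hired else rejected).update(set(text.lower().split()))
    n_hired = sum(hired_flags)
    n_rejected = len(hired_flags) - n_hired
    vocab = set(hired) | set(rejected)
    return {
        w: math.log((hired[w] + smoothing) / (n_hired + 2 * smoothing))
           - math.log((rejected[w] + smoothing) / (n_rejected + 2 * smoothing))
        for w in vocab
    }

# Synthetic history: past hires skew heavily toward one group, so words that
# merely correlate with the other group end up penalized.
history = [
    ("executed large java migration", True),
    ("captured requirements and executed rollout", True),
    ("executed data pipeline rebuild", True),
    ("women's chess club president and java developer", False),
    ("women's coding society lead and python developer", False),
]
weights = term_weights([text for text, _ in history],
                       [hired for _, hired in history])
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])  # most penalized terms
```

In this toy history the word "women's" ends up among the most heavily penalized terms, not because it says anything about ability, but because it never appears in the skewed set of past hires.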
This problem is even more complex when you think of companies who do not develop the AI themselves, but outsource it from third-party vendors, who may want to protect their intellectual property and not share full details about the algorithm used. For these reasons, many applicants might perceive the use of AI to be unfair. For a decision that plays such a huge role in people's lives, it is undeniably frustrating to be dealt a blanket rejection with no clear feedback. The GDPR states that people have the right to an explanation when AI has been used. But often, engineers themselves do not know the exact driving factors behind an AI decision, and no existing AI system provides explainable results.

Algorithms are not morally culpable in themselves: they have no autonomy, no consciousness, and no sense of morals; they do what their designers have asked them to do. This means that if engineers are not diligent in how they train the AI, i.e. in the data used, there is a risk of replicating and amplifying our own biases. If limited training data is a problem, the program could be re-engineered to account for this bias, i.e. by placing less significance on words that are used by a particular group of people. But I think it is not enough to simply engineer them better. Companies also need to invest in making managers and employees aware of how to use these AI programs correctly, and of how to detect and mitigate any risks. Human accountability and oversight is key. AI cannot make recruitment decisions on its own; there must be a human in the loop, and HR teams should consistently question and assess the outcomes produced by these programs for any bias or inaccuracies. Building on this, many, including myself, believe that current regulations should be extended to make companies that apply AI in any recruiting or hiring practices solely liable for any occurrence of discrimination or implicit bias in employment decisions.

To conclude, I would like to reiterate what I said at the start. AI has the potential to be an invaluable tool for recruitment, making the process fairer and more efficient. But we must not forget that it is only a tool: its performance is dependent on how we design and implement it. Thank you for listening, and I hope you enjoyed today's podcast.
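As a closing sketch of the mitigations the episode describes (down-weighting or removing group-correlated terms, and keeping a human in the loop), the hypothetical snippet below neutralizes such terms before scoring and routes every AI score to a human reviewer instead of letting the program decide on its own. The term list, placeholder scorer, and review queue are illustrative assumptions, not a description of any real system.

```python
# Terms assumed (for illustration only) to correlate with group membership.
NEUTRALIZED_TERMS = {"women's", "men's", "sorority", "fraternity"}

def neutralize(resume_text):
    """Drop group-correlated terms so they cannot influence the score."""
    return " ".join(w for w in resume_text.lower().split()
                    if w not in NEUTRALIZED_TERMS)

def screen_with_oversight(resume_text, score_fn, reviewer_queue):
    """The AI produces only a provisional score; a human makes the decision."""
    provisional = score_fn(neutralize(resume_text))
    reviewer_queue.append({"resume": resume_text, "ai_score": provisional})
    return provisional  # advisory only, never an automatic rejection

queue = []
score = screen_with_oversight(
    "women's coding society lead and python developer",
    score_fn=lambda text: len(text.split()),  # placeholder scorer
    reviewer_queue=queue,
)
print(score, queue)
```

Neutralizing terms alone is not a complete fix, which is why the episode stresses the human review step.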
