Discrimination? How artificial intelligence influences the hiring process

Sustainable Development Goals (SDGs) 5 (gender equality) and 9 (industry, innovation and infrastructure) are intrinsically related to algorithms and bias in hiring. What's more, bias can appear whether or not we use artificial intelligence (AI).

When a company needs a new worker, the first thing that crosses our mind is a thought: "We have to create a new job." This is usually accompanied by another thought about the person who could fill it.

In this early phase, in which companies usually already have an idea of their talent needs and of the profile of the person who fits them, the first prejudices can make their appearance.

If a hired person needs to be replaced, the company is likely to look for someone else with the same profile. If a new position has to be created, the hiring managers will probably already have an idea of the profile they are looking for.

The first step in the recruitment process is to search for candidates through advertisements, job postings and individual contacts. During this process, if companies are not careful, the first cases of discrimination can appear.

Although the tools are far from perfect, they help companies create more inclusive job offers

A clear example is an ad saying "waitress wanted". What should be advertised instead is an opening for "service personnel" or for a "waiter or waitress".

Fortunately, there are already some AI-based technologies that claim to avoid some of these biases and forms of discrimination. Apps that help companies write job descriptions, for example, are designed to reach more applicants and nurture a larger, more diverse talent pool, with a particular focus on gender diversity.

Although, as researcher Aaron Rieke says, these tools are far from perfect, they help companies create more inclusive job offers.

But according to Rieke and his team, problems can also arise in the advertising phase: many companies use paid digital advertising tools to spread job opportunities to a larger number of potential candidates.

However, as another expert, Pauline Kim, argues, "not informing people of a job opportunity is a very effective barrier to applying for that position." For this reason, Rieke concludes that "the complexity and opacity of digital advertising tools make it difficult, if not impossible, for aggrieved job seekers to detect discriminatory advertising patterns in the first place."

He comes to a similar conclusion regarding matchmaking tools, affirming that "those based on attenuated approximations of relevance and interest could end up reproducing the same cognitive biases that they try to eliminate".

An example of discrimination would be a "waitress wanted" ad. What should be advertised instead is an opening for "service personnel" or "waiter or waitress"

During the selection phase, companies evaluate candidates, both before and after they submit their applications, by analyzing their experience, skills and personality.

In this phase, companies often judge applications based on their own prejudices. For example, they may reject women between the ages of 25 and 40 because they are of child-bearing age.

Rieke and his team have also tested AI-based tools that support the selection phase. These tools evaluate, score and rank applications based on qualifications, social skills and other abilities, and thus help hiring managers decide who should move on to the next phase.

These tools help companies quickly narrow their pool of applications so that they can spend more time on those deemed stronger. A high number of job applicants are automatically or summarily rejected at this stage.

Regarding the selection phase, Rieke concludes that when selection systems are designed to replicate a company's previous hiring decisions, the resulting model will most likely reflect interpersonal, institutional and systemic biases.

This means that these types of selection instruments can also be highly biased. Amazon's AI recruiting tool was a well-documented case of exactly this: the tool replicated the company's institutional bias against women.
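
To make that mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and hypothetical features (it is not Amazon's actual system): a model trained only to reproduce a company's past hiring decisions inherits whatever bias those decisions contained.

```python
# Minimal, illustrative sketch: a classifier trained to replicate past hiring
# decisions inherits the bias in those decisions. All data is synthetic and the
# feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)          # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B (protected attribute)

# Historical decisions were biased: group B needed a higher skill score to be hired.
hired = (skill - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

# Train a model to "replicate previous hiring decisions", including the protected
# attribute (or any proxy for it, such as gendered keywords on a resume).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score a new candidate pool with identical skill distributions for both groups:
# the model reproduces the historical gap.
new_skill = rng.normal(size=2000)
for g in (0, 1):
    X_new = np.column_stack([new_skill, np.full(2000, g)])
    print(f"Predicted hire rate for group {g}: {model.predict(X_new).mean():.2f}")
```

Even though both groups in the new pool are equally skilled by construction, the model recommends the historically favored group more often, because that is the pattern it was asked to replicate.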

The interview process is an opportunity for companies to evaluate candidates in a more direct and individualized way. However, companies should avoid questions that make assumptions about applicants based on protected characteristics, such as their family plans.

Facial analysis systems could penalize people in interviews for visible disabilities

There are AI-based tools that claim to measure the performance of candidates in video interviews using the automatic analysis of verbal responses, tone and even facial expressions.

In their research, Rieke and his team focused on a tool from the company HireVue. This instrument lets companies ask candidates to record their interview responses and then rates those responses against those of current employees who have been successful.

More specifically, HireVue analyzes videos using machine learning. It extracts cues such as facial expressions, eye contact, vocal cues of enthusiasm, word choice, complexity of words, topics covered, and word groupings.
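
To give a sense of how the word-choice part of such scoring can work, here is a small, hypothetical sketch (it is not HireVue's actual pipeline, and it ignores audio and facial cues): candidates are ranked by how similar their transcripts are to those of employees already deemed successful.

```python
# Hypothetical sketch of transcript-similarity scoring. Candidates who simply
# talk like past hires score higher, regardless of actual ability, which is how
# existing preferences get baked in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

successful_transcripts = [
    "I led the team, drove aggressive targets and closed deals fast",
    "I competed hard, executed quickly and took ownership of results",
]
candidate_transcripts = {
    "candidate_1": "I drove targets aggressively and closed every deal fast",
    "candidate_2": "I listened to clients, built consensus and mentored junior colleagues",
}

vectorizer = TfidfVectorizer().fit(
    successful_transcripts + list(candidate_transcripts.values())
)
reference = vectorizer.transform(successful_transcripts)

for name, text in candidate_transcripts.items():
    score = cosine_similarity(vectorizer.transform([text]), reference).mean()
    print(f"{name}: similarity to 'successful' employees = {score:.2f}")
```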

Using tools like HireVue raises questions on multiple fronts, especially those related to ethics. Rieke finds that speech recognition software can perform poorly, especially for people with regional and non-native accents.

What’s more, facial analysis systems may have difficulty recognizing the faces of women with darker skin. Some interviewees could be rewarded for irrelevant or unfair factors, such as exaggerated facial expressions. Others could be penalized for visible disabilities or speech impairments.

On the other hand, using this type of biometric data to predict success at work, or to make or inform hiring decisions, may not have a legal basis.

There are hiring tools that claim to predict whether candidates might violate workplace policies

During the final phase of the hiring process, companies make the ultimate hiring and compensation decisions. At this stage, women consistently receive lower salary offers than men.

Today there are hiring tools that aim to predict whether candidates might violate workplace policies, or to estimate what combination of salary and other benefits to offer.

Rieke and his team are concerned that these instruments could widen the pay gap for women and people of color. As he states, "HR data often include broad approximations of a person's socioeconomic and racial status, which could be reflected in predictions of salary requirements."

In any case, giving companies a very precise view of an applicant's salary requirements increases the information asymmetry between companies and candidates at a critical moment in the negotiation.

Therefore, it can be concluded that AI hiring tools need to be audited: quantitatively, using labeled demographic data to check their results, and qualitatively, interrogating the variables they actually use and their relationship to the job.
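
A minimal sketch of what the quantitative side of such an audit could look like, assuming the auditor has the tool's pass/fail decisions together with labeled demographic data; the group names and the 0.8 threshold (the "four-fifths rule" used as a rule of thumb in US employment guidelines) are illustrative choices.

```python
# Quantitative audit sketch: compare selection rates across labeled demographic
# groups and flag any group whose rate falls below 80% of the best-performing group.
from collections import Counter

# Hypothetical audit records: (demographic group, advanced to next phase?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, passed = Counter(), Counter()
for group, advanced in decisions:
    totals[group] += 1
    passed[group] += advanced

rates = {g: passed[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

The qualitative half of the audit has no such shortcut: someone still has to ask why each variable is in the model and what it has to do with the job.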

What's more, it is very important to fight discrimination in the real world to prevent it from being replicated and entrenched in the digital world, because once algorithms are online and in use, it is very difficult to stop using them.

***Katharina Miller is President of the European Association of Women Lawyers (EWLA)

Reference: www.elespanol.com
