The advent of ChatGPT made artificial intelligence (AI) the keyword of 2022. AI is becoming more and more widely used, and now it may even be used to interview and screen job applicants!

In early June, a job seeker in North Carolina, USA, who applied for a clerk position at a fast-food restaurant had an unusual interview experience.

The interviewer's name was Olivia, but Olivia was not a company executive, or even a company employee. Olivia was a chatbot, and a controversial one at that: it not only gave job seekers the wrong instructions, it also asked plenty of unreasonable questions.

Since the launch of ChatGPT, AI has been a widely discussed topic. In the workplace, many people call AI a good helper for job hunting, and now even the "interview" itself can be handled by AI.

Blunt, impersonal interview bots

HR chatbots like Olivia are increasingly used in industries such as healthcare, retail, and restaurants to filter out unsuitable applicants and schedule interviews with suitable ones.

Most recruitment chatbots are not as advanced or complex as chatbots like ChatGPT. They are mainly used to screen high-volume positions such as cashiers, warehouse employees, and customer service assistants, so the questions they ask applicants are fairly simple, such as "Do you know how to use this tool?" or "Can you work weekend shifts?"

However, these bots still have flaws. The most common is that when an applicant's answer does not match the standard answers preset behind the model, the chatbot makes an incorrect judgment.
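To see why such rigid matching trips up otherwise qualified applicants, here is a minimal Python sketch. The question, keyword list, and screen function are entirely hypothetical and are not how any real recruiting bot is implemented; they only illustrate how a valid but differently worded answer can be wrongly screened out.

```python
# Hypothetical sketch of rigid answer screening (not any vendor's actual logic).
# The bot accepts an answer only if it contains a preset keyword, so a perfectly
# reasonable reply phrased differently gets marked as a failure.

PRESET_ANSWERS = {
    "Can you work weekend shifts?": {"yes", "sure", "i can"},
}

def screen(question: str, answer: str) -> bool:
    """Return True if the answer contains one of the preset keywords."""
    keywords = PRESET_ANSWERS.get(question, set())
    normalized = answer.lower()
    return any(kw in normalized for kw in keywords)

print(screen("Can you work weekend shifts?", "Yes, weekends are fine for me"))
# True
print(screen("Can you work weekend shifts?", "Saturdays work, Sundays only after 2pm"))
# False -> a workable answer is wrongly screened out
```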

Experts say this can make it harder for people with disabilities, people unfamiliar with the language of the interview, or older job seekers to get hired.

Aaron Konopasky, a lawyer with the U.S. Equal Employment Opportunity Commission (EEOC), is concerned that chatbots like Olivia lack human empathy and flexible thinking: when they see an answer that does not match the so-called standard answers in their database, they simply reject the applicant, pushing job seekers who are already disadvantaged in the labor market into an even more disadvantaged position.

(Related reading: Meta's algorithm accused of bias! How do we deal with discrimination hidden in algorithms?)


Photo by Sanket Mishra on Pexels

After chatbots, are HR layoffs the next step?

Pauline Kim, a professor of employment and labor law, said: "If chatbots focus on how long it takes you to respond, or whether you're using correct grammar and complex sentences, then you should start worrying about bias." Research from the University of Washington also shows that bias in the interview process can be difficult to detect when companies are not transparent about why they reject applicants.

Seeing the potential for discrimination in AI-assisted interviews, New York City enacted a new law in early July that requires employers using automated hiring tools, such as chatbot interviews, to audit those tools for gender and racial bias. In fact, back in 2020 Illinois already passed a law requiring employers who use AI to analyze interview performance to notify applicants and obtain their consent.


Photo by Artem Podrez on Pexels

Still, for companies looking to cut recruitment costs, introducing AI into the interview process, and trimming the HR department's budget along with it, may be just around the corner.

Matthew Scherer, a policy advisor on workers' rights and technology at the Center for Democracy and Technology, said that human resources has always been a cost center, never a revenue-generating department, so it seems to be the first place companies would expect to replace with chatbots.

RecruitBot, a startup that launched an AI recruiting tool, takes the opposite view.

They argue that using AI in the interview process and keeping the HR department involved are not mutually exclusive.

On the contrary, AI is more like a helpful assistant to the HR department: by using machine learning to screen data from large numbers of job seekers, it can help companies more quickly find candidates who resemble their existing employees.

(You might also like: Does OpenAI harbor implicit discrimination and bias? Diversity and inclusion will be key in the AI era | AI for DEI)

AI can be biased too. What should we do?

However, AI bias remains an unsolved problem. Among the many discussions of AI's hidden biases, two courses of action have emerged:

Human-machine cooperation creates a positive feedback loop

At this stage, we cannot rely entirely on AI to handle recruitment. When the AI shows bias, humans must step in to provide feedback and supply data to retrain the model, establishing a positive feedback loop from the results of that feedback.

Through continuous feedback, the AI can learn from data that better reflects the values of diversity, equity, and inclusion, and in turn make more appropriate decisions.

In short, any major decision involving hiring or promotion requires human involvement and cannot be left entirely to AI.
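As a rough illustration of what such a human-in-the-loop cycle might look like, here is a minimal Python sketch. The ScreeningModel class, its fields, and the feedback_loop function are invented for this example and do not reflect any real recruiting product; they only show how human corrections can flow back into the training data.

```python
# Minimal, hypothetical sketch of a human-in-the-loop feedback cycle.
from dataclasses import dataclass, field

@dataclass
class ScreeningModel:
    training_data: list = field(default_factory=list)

    def predict(self, applicant: dict) -> str:
        # Placeholder decision rule; a real model would be learned from data.
        return "advance" if applicant.get("years_experience", 0) >= 2 else "reject"

    def retrain(self, corrections: list) -> None:
        # Human-corrected examples are folded back into the training set.
        self.training_data.extend(corrections)

def feedback_loop(model: ScreeningModel, applicants: list, human_review) -> None:
    corrections = []
    for applicant in applicants:
        decision = model.predict(applicant)
        final = human_review(applicant, decision)  # a human can override a biased call
        if final != decision:
            corrections.append({**applicant, "label": final})
    model.retrain(corrections)  # the loop closes: feedback becomes training data
```

The design point is simply that the model never has the last word: every override is recorded and reused, so the system improves from human judgment rather than hardening its initial bias.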


Photo by Edmond Dantès on Pexels

Establish norms and regulations

Before the technology matures, hastily replacing human resources staff with AI may only worsen discrimination and prejudice. We therefore urgently need to establish norms so that the use of the technology, and the oversight of regulators, have rules to follow.

How can AI bias and DEI stay in dialogue?

Having recognized that AI cannot yet escape its hidden biases, we can go a step further and ask: how should an organization that wants to pursue diversity, equity, and inclusion (DEI) strike a balance between AI and DEI, so that the two can remain in dialogue?

The answer can likewise be summarized in two points:

Use AI to build diverse organizations and proactively identify bias

In 2022, the consultancy PwC conducted a study on AI and found that as many as 62% of companies promote the value of DEI in recruiting and managing employee performance; however, few of them use AI to actually advance DEI.

To build more inclusive teams, leaders must use AI to better integrate DEI strategies into employees' daily experience. Kay Formanek, author of Beyond D&I: Leading Diversity with Purpose and Inclusiveness, offers the example of a job posting that asks for an "ambitious" leader: this is a sign of slipping into a masculine framing of the job, and even women who are a good fit for the role will tend not to submit their resumes.


Photo by cottonbro studio on Pexels

According to Kay Formanek, women respond to more relational language, such as "we're looking for a leader who can grow the business together with the team" or "we're looking for someone who can bring the team along." Narratives like these are more likely to attract female job seekers.

With the help of AI, companies can eliminate biased language from job postings, so that the way a vacancy is presented does not exclude underrepresented job seekers. This attracts more diverse applicants and expands the company's talent pool.
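One plausible way to sketch this kind of language check is a simple scan for gender-coded terms. The word list below is a tiny illustrative sample, not a vetted lexicon, and a production tool would rely on a much richer model than a keyword search.

```python
# Illustrative sketch of flagging masculine-coded wording in a job posting.
import re

# Made-up sample of masculine-coded terms, for demonstration only.
MASCULINE_CODED = {"ambitious", "dominant", "competitive", "fearless", "rock star"}

def flag_biased_terms(posting: str) -> list:
    """Return the masculine-coded terms found in the posting text."""
    text = posting.lower()
    return sorted(
        term for term in MASCULINE_CODED
        if re.search(r"\b" + re.escape(term) + r"\b", text)
    )

posting = "We are looking for an ambitious, competitive leader to grow the market."
print(flag_biased_terms(posting))
# ['ambitious', 'competitive'] -> a writer (or a rewrite model) can then soften the phrasing
```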

Enhance employees' sense of belonging

In addition, AI can also play a major role in improving employee retention.

Since the factors that drive employees away often have much to do with feeling marginalized, isolated, and disengaged, companies can use AI to identify departments or roles at high risk of attrition, employees who are dissatisfied with their work situation, and even employees who feel alienated because they work remotely.

This is done by analyzing data such as work location, calendar, performance, salary, and workload to gauge each employee's load and engagement with the organization. Compared with the traditional approach of surveying job satisfaction through questionnaires, AI enables more real-time and more fine-grained analysis, giving companies a deeper understanding of how satisfied employees are with their work and environment, and thereby strengthening their sense of belonging in the workplace.
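As a hedged illustration of how such signals might be combined, the sketch below scores attrition risk from a handful of made-up features. The features and weights are assumptions for demonstration only; a real system would learn them from historical data and would itself need auditing for bias.

```python
# Hypothetical sketch of scoring attrition risk from HR signals.
from dataclasses import dataclass

@dataclass
class EmployeeSignals:
    weekly_hours: float        # from calendar / workload data
    meetings_attended: int     # rough proxy for engagement
    months_since_raise: int    # from salary history
    fully_remote: bool         # work location

def attrition_risk(e: EmployeeSignals) -> float:
    """Return a 0-1 risk score; higher means more likely to leave."""
    score = 0.0
    score += 0.4 if e.weekly_hours > 50 else 0.0        # sustained overwork
    score += 0.3 if e.meetings_attended < 3 else 0.0    # low engagement
    score += 0.2 if e.months_since_raise > 18 else 0.0  # stalled compensation
    score += 0.1 if e.fully_remote else 0.0             # possible isolation
    return round(score, 2)

print(attrition_risk(EmployeeSignals(55, 1, 24, True)))
# 1.0 -> flag for a human check-in, not an automatic decision
```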


Photo by Jopwell on Pexels

As users of AI, we must recognize that while AI bias can perpetuate social bias, AI's capabilities can also become a tool for advancing diversity, equity, and inclusion.

Technology itself is neutral; whether it does good or harm depends entirely on how we use it. If we take responsibility, consciously acknowledge bias, rigorously review the technology, and use AI wisely, we will one day find a balance with AI and enjoy the convenience of the AI era.