When AI becomes the interviewer and takes charge of monitoring campus network activity, what risks might it bring? A look at how artificial intelligence is being used to manage people, and how it will affect diversity and inclusion.

Did you know that HR may also be replaced by AI?

When the scheduled interview time arrives, you take a deep breath and prepare an impressive self-introduction. Unexpectedly, what appears on the screen in front of you is a virtual chatbot and a line of flashing text.

(You might also like: ChatGPT-4 is here! It can learn your writing style and passed the American Bar Exam with a high score | AI for DEI)

A dialog box pops up: "Do you have any food-and-beverage-related work experience?" You tap the "Yes, three to five years" button and hesitate, still expecting a real interviewer to step into the meeting room at any moment.

Only when the final screen reads, "Thank you, today's interview is over; please wait for the result," do you walk out of the interview room in shock, hardly believing you have just completed an interview with your fingertips.

AI Recruitment: A New Trend in the Talent Race Among Large Enterprises

With e-commerce giant Amazon and McDonald's, the world's largest restaurant chain, both announcing the use of AI to recruit entry-level employees, the industry seems eager to declare that talent recruitment has entered a new AI era.

Since 2014, Amazon had been actively developing a set of algorithms advertised as an effective way to screen talent. According to people familiar with the matter, the program automatically scored job seekers' resumes: "Of course, what we want is to feed it 100 resumes and have it spit out the 5 highest-scoring candidates, and those are the people we hire." [1]


Photo by Christina @ wocintechchat.com on Unsplash

In 2018, Amazon's decision to scrap its AI recruitment tool drew coverage from Reuters and other media. The reason: researchers found that the software had a serious gender bias.

At Apple, Facebook, Google and Microsoft, men hold nearly 80% of tech jobs[2].

So when Amazon trained its model on its own personnel data, the machine learning results naturally devalued experience carrying feminine markers (e.g., "women's" soccer team captain) and gave extra weight to more masculine verbs (e.g., "executed" or "captured").
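To make the mechanism concrete, here is a minimal sketch of how a bag-of-words resume scorer trained on male-dominated hiring data can encode exactly this bias. The weights below are invented for illustration; Amazon's actual model and its learned weights have never been made public.

```python
# Illustrative sketch with HYPOTHETICAL weights: a scorer trained on
# historical (mostly male) hires learns positive weights for tokens
# common in those resumes and negative weights for tokens rare in them.
LEARNED_WEIGHTS = {
    "executed": +1.2,   # "masculine" verb over-represented in past hires
    "captured": +1.0,
    "women's":  -0.8,   # penalized: rare in the historical training data
}

def score_resume(text: str) -> float:
    """Sum the learned weight of every known token in the resume text."""
    return sum(LEARNED_WEIGHTS.get(tok, 0.0) for tok in text.lower().split())

resume_a = "executed product launch captured new market"
resume_b = "women's soccer team captain executed fundraising campaign"

print(score_resume(resume_a))  # scores higher
print(score_resume(resume_b))  # penalized purely for the word "women's"
```

The point is that nothing in the pipeline is explicitly told about gender: the bias is absorbed silently from the training data's word statistics.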

(Read more: Are you interviewing a chatbot? How does AI bias affect interviews? | AI for DEI)

Is AI a new solution to labor shortages, or is it creating more problems?

Even so, multinational companies have not given up on using AI to cut human resources costs and chase efficiency.

McDonald's released a recruitment program called Apply Thru, which it claims lets job seekers apply for entry-level positions through Amazon Alexa or Google Assistant. With the pandemic spreading labor shortages worldwide, fast-food operators of all kinds have followed suit.

(Also read: An AI must-read list! 10 artificial intelligence books that teach you to auto-generate reports and write resumes!)

In 2023, the U.S. business news site Insider published an article titled "I applied to McDonald's and 4 other fast-food chains, but chatbots and automated application processes kept me from getting hired at all."

The author, Amanda, describes submitting her resume to five restaurant chains and encountering recruitment bots at each. Some of these virtual HR systems scheduled her interview on a national holiday, leaving her stood up; some withheld salary details until the last minute; some never followed up after she filled in her information.


Photo by Austin Distel on Unsplash

Such software mistakenly shuts out countless job seekers like Amanda, who is literate and skilled with electronic devices and information, to say nothing of those who are not.

Deploying chat programs amounts to rejecting outright the poor, the elderly, and some people with physical or mental disabilities who lack access to technology, deepening the digital divide. Nor can robots offer job seekers with disabilities or special needs the tailored alternatives a real person could.

The risks of AI student-tracking software: exposing personal information with gender and racial bias

If school staffing is no longer sufficient to individually counsel high-risk students and manage the enormous volume of students' online activity, is replacing traditional campus security personnel with AI that actively tracks electronic devices and analyzes students' backgrounds really a good solution?

(Read more: Does diversity and inclusion limit your freedom of speech? More than ten U.S. states are planning to clamp down on DEI, and diversity and inclusion on public campuses are gradually being silenced!)

According to a survey by the Center for Democracy and Technology (CDT) in Washington, 88 percent of teachers reported that their schools install "online activity monitoring software" on school-issued devices, while a striking 40 percent said their schools also use monitoring technology on students' personal devices.

Many of these detection and flagging mechanisms are also tied into campus-safety systems. One-third of teachers said their students had gotten "in trouble" because of digital monitoring of their devices, drawing the attention of school security officers and even outside law enforcement.


Photo by Brett Jordan on Unsplash

The center's assessment report further found that algorithms used in some schools scan the messages students send and receive, the documents they view, and the websites they visit for LGBTQ+-related keywords such as "gay" and "lesbian," and automatically flag that content for school staff.

The report asserts that such behavior amounts to forced outing without consent and can inflict psychological trauma on LGBTQ+ students.
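A rough sketch of the flagging behavior described above shows why it is so blunt: a context-free keyword match cannot tell a supportive conversation from anything else. The keyword list here is assumed for illustration; the report does not publish the vendors' actual term lists.

```python
# Minimal sketch (HYPOTHETICAL keyword list) of how monitoring software
# flags student content: a plain word match with no notion of context,
# which is precisely why it "outs" students regardless of intent.
FLAGGED_KEYWORDS = {"gay", "lesbian"}  # assumed terms, per the CDT report's examples

def flag_message(message: str) -> list[str]:
    """Return the flagged keywords found in a student's message."""
    words = message.lower().split()
    return sorted(k for k in FLAGGED_KEYWORDS if k in words)

# A supportive message to a friend is flagged just like anything else:
print(flag_message("I'm proud you came out as gay to your parents"))
```

Real products may use richer matching, but the core failure mode — flagging identity-related words without context — is the behavior the CDT report criticizes.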

(You might also like: International Transgender Day of Visibility: when discrimination appears on campus, will we still push LGBTQ+ education out of textbooks?)

Across studies, AI on campus has its most negative impact on LGBTQ+ students and students with disabilities. Ironically, these network-activity warning systems were originally built to keep students, and vulnerable groups in particular, safe.

In addition, almost all teachers (99%) said their schools use "content filtering software" to restrict students' access to online content.

However, the analysis pointed out that this content screening makes it hard for students with disabilities and LGBTQ+ students to access the guidance they need, producing a book-banning effect reminiscent of authoritarian eras, endangering democratic freedoms, and exceeding the scope of content filtering and blocking permitted by law.

Is AI a driver of diversity and inclusion, or does it carry gender and racial bias?

Clearly, the nature and practice of "managing people," long considered work only humans could do, has undergone a revolutionary shift. Facing an era dominated by new technology, we cannot help but ask: is artificial intelligence more impartial than real people, or even more biased?

(Also read: Will OpenAI carry implicit discrimination and prejudice? Diversity and inclusion will be key in the AI era | AI for DEI)

Some recruitment platforms argue that AI-assisted recruitment plays an irreplaceable role in eliminating bias and maintaining impartiality, for example by:

  1. Asking consistent, neutral questions and helping strip discriminatory language from recruitment copy and job descriptions
  2. Hiding applicants' private information from the outset, removing the unfair influence of a revealed identity
  3. Reducing qualifications, skills, and similar criteria to objective measures for unbiased screening
  4. Gathering additional information without preconceptions, so job seekers are neither stressed nor judged on tone of voice, body language, and the like
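Point 2 above — hiding applicants' private information — is the most concrete of these claims, and the underlying "blind screening" technique is easy to sketch. The patterns below are illustrative only; production systems use far richer named-entity models than a few regular expressions.

```python
# Sketch (ILLUSTRATIVE patterns only) of blind screening: redacting
# identifying details from a resume before anyone reviews it.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b(?:Mr|Ms|Mrs)\.\s+\w+"), "[NAME]"),   # honorific + surname
    (re.compile(r"\b(19|20)\d{2}\b"), "[YEAR]"),          # years that hint at age
]

def redact(text: str) -> str:
    """Apply each redaction pattern in turn (emails first, so the year
    pattern never fires inside an already-redacted address)."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Ms. Chen, chen1990@example.com, graduated 1990"))
```

Even this works only for information stated explicitly; as the Cambridge researchers discussed below the list argue, identity also leaks through the data indirectly, which is why redaction alone cannot guarantee fairness.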

However, a University of Cambridge study[3] scathingly criticized the claim that AI can analyze personality traits, language patterns, and facial expressions, arguing that leading companies to believe these new technologies can read minds or faces is misleading and potentially dangerous.


Photo by Hitesh Choudhary on Unsplash

Working with computer science graduate students, the researchers demonstrated that changes in clothing, lighting, and background alone produce very different personality "readings" — evidence that using AI to judge whether a candidate's personality fits a role is beyond today's technology.

The researchers worry that even when companies claim big data and machine learning can eliminate racial discrimination, they are merely using algorithms to balance numbers, not confronting the power structures behind the data that actually shape race and gender. They suggest AI should ultimately only assist in achieving hiring diversity, never serve as a tool for independent judgment.

(Read more: Find a job or write a job posting in seconds? Yourator's AI tool automatically produces resumes and job postings!)

Think about it: when algorithms screen people the way streaming platforms recommend movies and music, who gets to stand out on the "recommendation list"? From the perspective of promoting diversity, the use of AI in schools and workplaces deserves far more careful consideration.