You can’t be objective when interviewing a job candidate and the voice in your head is saying…
– He is a man and men are not secretaries.
– She is a woman and women are not engineers.
– Millennials don’t stick around.
– Baby boomers are tech illiterate.
– Older workers are hard to train and resist change.
The problem is you have a filter between you and the candidate. That filter is called a stereotype.
A stereotype is “a fixed, overgeneralized belief about a particular group or class of people.” Stereotypes are countless, and they originate from our cultures and life experiences. Many are so embedded in our cultures that they seem to be everywhere.
For instance, type the following into a web search engine of your choice:
– ‘Secretary’ or ‘administrative assistant’, then switch to image results; you will predominantly see pictures of women.
– ‘Developer’, then switch to image results; you will mostly see pictures of men.
In Western culture we tend to think that “people with glasses are smart.” The idea is so embedded in the culture that many movies portray smart or wise characters with glasses: Dumbledore, the wise Headmaster of Hogwarts in the Harry Potter series; Simon, the smartest of the three Chipmunks in Alvin and the Chipmunks; and Doc, the intelligent dwarf from Snow White and the Seven Dwarfs.
This stereotype is so ingrained that research has found Americans are more likely to vote for politicians who wear glasses because they are perceived as more intelligent.
Thus, stereotypes influence our perceptions of others. No matter their nature (positive or negative), these beliefs put us at risk of ignoring the differences between individuals, and they stop us from distinguishing generalizations from truth.
So, how can you reduce the risk of stereotyping during the recruiting process?
There are many ways to go about it. First, recognize that it took time for a stereotype to become ingrained in your world view, so be aware that it may take time and conscious effort to detach yourself from that belief. In short, be patient with yourself and act on the principle that “I know that I know nothing.”
Second, you can reduce the risk of stereotyping by adopting an objective hiring process, one in which all candidates are assessed against the same job-related criteria. This can be done by working with a diverse hiring team and/or by using Artificial Intelligence (AI) at the beginning of the screening process.
AI systems can help reduce the risk of stereotyping if they are well built and set up properly. But AI is made by humans and is relatively new, so room for error still exists.
That said, even though it is challenging to create an Applicant Tracking System (ATS) that is 100% free of bias, an ATS is one AI-driven tool that can help you reduce stereotyping in recruiting. An ATS can help you preselect great candidates if it is well programmed and personalized, and if the right pre-screening questions are entered into the application.
For instance, instead of entering “Do you have five years of experience in employment law?”, an alternative could be “Please confirm your years of experience with discrimination and/or harassment issues in the workplace.”
In other words, without being influenced by a candidate’s name, gender, appearance, or age, the ATS simply asks questions that are relevant to the job. If the right questions are asked, this step should allow qualified candidates to move forward in the recruiting process.
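The idea above can be sketched in a few lines of code: strip identity-related fields before screening, then decide on job-relevant answers alone. This is a minimal illustration, not a real ATS; the field names and the three-year threshold are hypothetical assumptions.

```python
# Minimal sketch of an anonymized pre-screening step, assuming applicants
# are simple dicts. Field names and scoring rules are hypothetical.

IDENTIFYING_FIELDS = {"name", "gender", "age", "photo"}

def anonymize(applicant: dict) -> dict:
    """Drop fields that could trigger stereotyping before screening."""
    return {k: v for k, v in applicant.items() if k not in IDENTIFYING_FIELDS}

def screen(applicant: dict, min_years: int = 3) -> bool:
    """Advance a candidate based only on job-relevant answers."""
    profile = anonymize(applicant)
    return profile.get("years_discrimination_harassment_experience", 0) >= min_years

candidate = {
    "name": "A. Candidate",
    "age": 54,
    "gender": "F",
    "years_discrimination_harassment_experience": 6,
}
print(screen(candidate))  # True: advanced on experience alone
```

The point of the sketch is that the screening function never sees name, gender, or age, so those attributes cannot influence the decision at this stage.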
Having said that, using AI even just in the screening phase can raise human rights concerns if the system is not appropriately tested and vetted. For instance, ATS algorithms could disfavour candidates from a diverse group, which is what Amazon discovered a few years ago when its experimental recruiting tool was found to penalize résumés associated with women. So, regulation in this area could make the difference.
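One concrete way to vet a screening tool is the “four-fifths rule” from U.S. EEOC guidance: a group’s selection rate below 80% of the highest group’s rate is a flag for potential adverse impact. The check itself is simple arithmetic; the applicant counts below are made-up numbers for illustration only.

```python
# Hedged sketch of an adverse-impact check on screening outcomes,
# based on the EEOC four-fifths rule. All counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who passed the screening stage."""
    return selected / applicants

def four_fifths_check(rate_group: float, rate_reference: float) -> bool:
    """True if the group's rate is at least 80% of the reference rate;
    False flags potential adverse impact worth investigating."""
    return rate_group / rate_reference >= 0.8

rate_a = selection_rate(30, 100)  # reference group: 0.30
rate_b = selection_rate(18, 100)  # comparison group: 0.18
print(four_fifths_check(rate_b, rate_a))  # False: 0.18/0.30 = 0.6 < 0.8
```

A failed check does not prove discrimination on its own, but it tells you the algorithm’s outputs need a closer look before the tool keeps screening candidates.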
However, right now Canada lacks regulations to address potential issues related to the emergence of AI in organizations. This is not the case in the United States.
In February 2019, the U.S. established the American AI Initiative via Executive Order 13859. Among other things, it addresses the technical and ethical implications that AI can create in organizations. In the U.S., AI systems that screen and interview candidates are also treated like any other recruiting practice under Title VII of the Civil Rights Act of 1964 and the Age Discrimination in Employment Act (ADEA), laws that protect employees and applicants against discrimination.
Finally, AI is useful, but using it also presents risks. For peace of mind, we should find the right balance and ensure the enforcement of the regulations and policies that govern it. After all, Elon Musk himself has called AI the “biggest risk we face as a civilization.”