What recruiters can learn from #blacklivesmatter

Video evidence of police brutality, the Black Lives Matter protests, and recent discussions on social media have raised the nation’s consciousness. These events have also challenged recruiters and hiring managers to rethink their hiring practices.

The bad news is that empirical research shows that resumes with Black-sounding names are skipped over more often than identical resumes with white-sounding names. In addition, one of the most biased, flawed, and lazy interviewing techniques is also one of the most popular, used every day by many recruiters and hiring managers: the unstructured interview, sometimes called the “gut feel” interview.

With unstructured interviews, unconscious biases can go unchecked. Research shows that most interviewers make snap decisions in as little as five minutes and then spend the rest of the interview justifying them.

The good news is that long-standing empirical evidence on how to hire outstanding employees without bias provides clear, reliable answers as to how companies should change their hiring procedures. There is an added bonus: the same methodologies that protect against racial and other biases also improve the hiring process itself, helping companies hire outstanding employees.

In addition, the advent of true artificial intelligence (AI) can significantly improve our chances of hiring great job candidates without bias. Just be wary of facial recognition technology in which a computer asks the questions and judges the interviewee’s spoken answers, tone of voice, facial expressions, and body movements. This type of technology is significantly biased against women and people of color.

Empirical Evidence

The table below shows the most reliable interview techniques according to the empirical evidence. It is based on a meta-analysis published in 1998 by Schmidt and Hunter on the most valid methods for determining which job candidates will succeed in a job.

The most reliable selection methods score the highest on the bar charts and appear on the left side of the table, in the orange circle. These include work sampling, general mental aptitude tests (commonly called intelligence tests), structured interviews, peer reviews, job knowledge tests (covering the knowledge required on the job), and training and education behavior tests (these days called job competency tests).

Please note the green diamond with the letters “EI,” which stands for emotional intelligence tests. EI is based on empirical research published after Schmidt and Hunter’s review. EI tests assess how well the test taker understands the impact of his or her words and behaviors on others. It is an exceptionally reliable test for the selection of team leaders, executives, salespeople, and others who interact with the public, such as police officers and customer service representatives.[i]

With these methods, companies make the hiring process more thorough, fair, and unbiased—and hire candidates who make better employees.

On the other end of the chart are the most unreliable methods. They are as unreliable as they come: age, interests, years of experience, and handwriting. When I report the unreliability of “years of experience,” people are often surprised, because recruiters so frequently describe a job candidate as having “X” years of experience, as if it were a meaningful measure. But research consistently shows that after about five years, experience is not a reliable predictor of job success.[ii] Instead, skills, knowledge, and behaviors (often called competencies), along with the candidate’s experience with those specific competencies, are more predictive.

Please note that the “gut feel,” or unstructured, interview is in the middle of the table, highlighted with an orange bar.

But this is not all. It gets better. When methods such as work sampling, structured interviews, knowledge tests, and competency assessments are combined with an intelligence test or an emotional intelligence test, the validity of the selection process—the likelihood of hiring outstanding job candidates without bias—rises from about 0.5 to close to 0.7. This is shown in the table below, which compares the original selection methods (blue bars) with their validity scores when used with GMATs (orange bars).
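The jump from roughly 0.5 to 0.7 comes from combining predictors that overlap only partially. A minimal Python sketch of the standard two-predictor multiple-correlation formula illustrates the effect; the 0.51 validities and 0.30 intercorrelation below are illustrative figures chosen for this example, not numbers taken from the chart.

```python
import math

def combined_validity(r_y1, r_y2, r_12):
    """Multiple correlation R of two predictors with a criterion.

    r_y1, r_y2: validity of each predictor used alone
    r_12: correlation between the two predictors
    """
    r_squared = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return math.sqrt(r_squared)

# Two predictors, each with validity 0.51, modestly correlated at 0.30
# (assumed values): together they reach a validity near 0.63.
print(round(combined_validity(0.51, 0.51, 0.30), 2))  # 0.63
```

The less the two methods correlate with each other, the more new information the second one adds, which is why pairing a GMAT with a structured interview beats either alone.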

Ideally, employers should use a combination of methodologies to select the best job candidates without bias, such as a GMAT with a structured interview, or a GMAT with a conscientiousness test.

Whenever you use an assessment—whether an intelligence test, a test of how well a software engineer writes code, or a measure of a salesperson’s resiliency and personality—always ask for evidence that the assessment has been validated to predict the hiring of great job candidates.

Additionally, ask for evidence that it has a track record of hiring without bias against women and people of color. This is a critically important step. Finally, companies that use an assessment should validate it against a sample of their own employees to confirm it predicts performance in their environment.
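Validating an assessment against your own employees boils down to checking whether assessment scores correlate with later job performance. A minimal sketch, with entirely invented scores and ratings for eight hypothetical employees:

```python
# Hypothetical local-validation data: assessment scores and later
# performance ratings for current employees (all numbers invented).
scores = [62, 75, 81, 58, 90, 70, 66, 85]
ratings = [3.1, 3.8, 4.2, 2.9, 4.6, 3.5, 3.2, 4.4]

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

print(f"local validity estimate: r = {pearson(scores, ratings):.2f}")
```

A real validation study needs a much larger sample and attention to range restriction, but the core question is the same: does the score line up with performance for your people?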

Artificial Intelligence

The advent of AI has given recruiters a tool that can significantly improve their ability to find candidates on the internet who have the competencies to perform the job, and to do so without bias against women and people of color. These tools combine unbiased AI analytics with the search capability of the internet, and they keep learning. They are made even more effective when they present candidates to the recruiter without names or pictures, preventing the recruiter from applying any unconscious bias. ThisWay Global is one example of such a tool, which I have used successfully.
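The blinding step itself is simple: strip identifying fields from a candidate record before it reaches the reviewer. A minimal sketch—the field names are illustrative, not taken from any specific tool:

```python
# Minimal sketch of blind screening: remove identifying fields from a
# candidate record before review. Field names are invented for this example.
HIDDEN_FIELDS = {"name", "photo_url", "email"}

def blind(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in HIDDEN_FIELDS}

candidate = {
    "name": "J. Doe",
    "photo_url": "https://example.com/photo.jpg",
    "email": "jd@example.com",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
print(blind(candidate))  # only skills and years_experience remain
```

The reviewer scores what is left—skills and evidence of competencies—and identifying details are revealed only after the shortlist is set.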

Facial Recognition Technology

Be aware that current facial recognition technology has a poor track record of providing reliable, unbiased information about job candidates. According to Ifeoma Ajunwa, a sociologist and law professor at Cornell University, a biased algorithm used in hiring by hundreds of employers is potentially more damaging than a single biased recruiter. In an interview with The Wall Street Journal, she cautioned that micro-expressions are still a developing science, and there are no established patterns for which facial expressions specific jobs require; applicants may be incorrectly rejected.[iii] Other researchers have reached similar conclusions.[iv]

If your organization wants to respond to the moment and improve its ability to hire great job candidates without bias, follow the lessons of the empirical evidence, use AI with a demonstrated track record of hiring without bias, and stay away from facial recognition technology until it can demonstrate success with published empirical evidence.

Victor Assad is the CEO of Victor Assad Strategic Human Resources Consulting, managing partner of InnovationOne, and Sales Advisor to MeBeBot. He works with companies to transform their HR operations, remote work, and recruiting, and to develop extraordinary leaders, teams, and cultures of innovation. His highly acclaimed book is Hack Recruiting: The Best of Empirical Research, Method and Process, and Digitization. Subscribe to his weekly blogs at www.VictorHRConsultant.com.


[i] Victor Assad, Hack Recruiting: The Best of Empirical Research, Method and Process, and Digitization, Archway Publishing, division of Simon and Schuster, 2019.

[ii] Ibid.

[iii] Hilke Schellmann and Jason Bellini, “Artificial Intelligence: The Robots Are Now Hiring—Moving Upstream: How New Data-Science Tools Are Determining Who Gets Hired,” Wall Street Journal (September 20, 2018, 2:29 PM).

[iv] Victor Assad, Hack Recruiting: The Best of Empirical Research, Method and Process, and Digitization, Archway Publishing, division of Simon and Schuster, 2019.

