By Josh Oakhurst on the Bayard Blog
There are a lot of thorny questions in our society at large that don’t have neat workplace answers, but increasingly, companies are where the experiments in D&I execution are happening. Diversity and Inclusion (D&I) initiatives mean different things to different companies: some executives treat them as a data-and-metrics problem, others as a sensitivity-and-training program. Since more businesses have gotten failing D&I grades than passing ones, let’s examine Facebook’s (most recent) slate of bad press, courtesy of the Washington Post.
Individual managers at Facebook may have had good intentions, but you won’t be surprised to learn that translating corporate goals into action is messy. For example, setting targets such as “30% more people of color in leadership by 2025” sounds…ok?…although arbitrary. Stranger is how, “…managers primarily instructed recruiters to infer the race and gender of candidates by scouring the Internet, particularly Instagram, Twitter and Facebook…they then formalized those guesses and inputted them into a system that any person who interviewed the candidate could see.” For companies everywhere, the question then becomes: how do we achieve D&I metric goals when we’re legally limited in what demographic data we can gather? Further, if your recruiters are incentivized based on shoddy data gathering, as they were at Facebook, of course the stats are going to be juked and gamed, and then the best-laid D&I plans lead to bad articles in major newspapers.
Other companies have been turning to AI platforms less to “solve” D&I than to separate any individual human or department from such bad press. If the computer said the candidate was a match, and the AI company’s sales materials said their algorithm was unbiased, does that really translate to success? As the VP of Bayard’s new software product division — where we are building AI and recruitment automation — may I please suggest alternative D&I actions that aren’t so reliant on computers, or the people who program them, or shoddy, speculative math problems.
1 Replace Recruiters’ “Cultural Fit Test” with the “Canoe Test”
The worst question recruiters and interviewers have ever tried to answer is “does this candidate fit our culture?” This is exactly where Facebook’s D&I was set up to fail. Diversity is not achieved by matching. Isn’t it, almost transparently, quite the opposite?
So instead of asking, “Would this new candidate share the same Slack memes?” or “Would this person also join happy hour?” ask, “Can I sit in a canoe and row with this person?” And if you’ve never spent a day in a two-person canoe, you shouldn’t be interviewing people.
2 Computers Are For Matching, Which Is Counter to D&I
I’ll type it a third time: diversity is not achieved by matching. At Bayard, we haven’t seen one new AI “innovation” that didn’t boil down to “cross-reference and match dataset A with dataset B.” Here, the “unbiased” AI will try to infer “objective” competency by pedigree, again, usually by matching job titles, role descriptions, career paths, and education routes between existing and prospective employees. Candidates who look like employees are deemed, quite literally, “matches” or “best fit,” usually via percentage points. As a generalization, the AI overlay inside the ATS would say something to the effect of “Jayne Doe has an 87% likelihood of best matching this job description, based on an analysis of your current workforce,” creating the expectation that inclusion and equality, free from bias, have occurred. But since D&I initiatives do not exist outside a society that funds schools through property taxes, and where said circumstances of education signal “matching competence” to the AI, what diversity, exactly, has been achieved?
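To make the pattern concrete — this is a toy sketch of pedigree-matching in general, not any vendor’s actual algorithm, and all names and data below are hypothetical — a “best fit %” of this kind can be reduced to measuring how much a candidate’s pedigree features overlap with the current workforce’s:

```python
# Toy sketch of "match score" AI: score = average overlap between a
# candidate's pedigree features and each current employee's features.
# Hypothetical data; not any real vendor's algorithm.

def match_score(candidate: set, workforce: list) -> float:
    """Average Jaccard overlap (as a percentage) between the
    candidate's feature set and each employee's feature set."""
    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0
    return 100 * sum(jaccard(candidate, emp) for emp in workforce) / len(workforce)

# A hypothetical workforce: similar schools, titles, career paths.
workforce = [
    {"state-flagship-cs", "faang-internship", "senior-engineer"},
    {"state-flagship-cs", "faang-internship", "staff-engineer"},
]

lookalike = {"state-flagship-cs", "faang-internship", "senior-engineer"}
outsider = {"community-college", "bootcamp", "career-changer"}

print(f"lookalike: {match_score(lookalike, workforce):.0f}%")
print(f"outsider: {match_score(outsider, workforce):.0f}%")
```

The candidate who resembles the existing workforce scores high; the candidate with a different background scores near zero. The metric is, by construction, a sameness detector — which is the whole problem.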
Instead of false D&I AI, encourage your recruiters to find people who aren’t a match: people from different backgrounds, different educations, and alternative career paths. Diversity, done meaningfully, is about inclusion of thoughts and experiences representative of society at large. If the recruiting-computer and the recruiting-human are both looking for matches, one is not less biased than the other.
3 Bring Back Training Programs to Achieve D&I Goals
Apprenticeships of yore have largely been replaced by internships, and landing an internship — especially in certain industries — is still predicated on the same power structures reinforcing the myopic worldviews D&I is supposed to conquer.
One alternative to always needing a correctly pedigreed person for every vacancy is a willingness to fill the role while explicitly ignoring pedigree, and making managers and departments responsible for talent development, even to an avant-garde degree. Here, you are looking for a candidate’s intention and potential — and unless the job is doing math problems all day, both of those are hard to quantify.
However, to achieve D&I harmony, we can ask candidates questions about their career aspirations. We can imagine ourselves working alongside them. And we can tell the computer to match and predict candidates’ employment goals against the jobs that are available. For example, how many job applications ask about candidates’ goals for weekly hours, or their preference for flexible vs. rigid scheduling? How many departments recognize that a mix of experts and novices is a strength? How many companies see similar pedigrees as a hindrance, not a benefit?
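This kind of goal-based pairing is computationally trivial — the point is what you choose to match on. A minimal sketch, assuming two stated preferences (desired weekly hours and scheduling style); all job titles and data are hypothetical:

```python
# Toy sketch: match candidates to openings by their stated goals
# (hours, scheduling) rather than by pedigree. Hypothetical data.

def goal_fit(candidate: dict, job: dict) -> int:
    """Count how many of the candidate's stated goals a job meets."""
    score = 0
    if abs(candidate["weekly_hours"] - job["weekly_hours"]) <= 5:
        score += 1  # hours are within 5 of the candidate's target
    if candidate["scheduling"] == job["scheduling"]:
        score += 1  # scheduling style (flexible/rigid) matches
    return score

jobs = [
    {"title": "analyst", "weekly_hours": 40, "scheduling": "rigid"},
    {"title": "designer", "weekly_hours": 30, "scheduling": "flexible"},
]

candidate = {"weekly_hours": 32, "scheduling": "flexible"}
best = max(jobs, key=lambda job: goal_fit(candidate, job))
print(best["title"])  # the opening that best fits the stated goals
```

Note what is absent: school, title history, career path. Matching on goals like these pairs a person with work they actually want, without rewarding resemblance to the existing workforce.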
Like many problems plaguing society at large, effective actions are often obliquely related to stated problems and goals — you can’t always attack ‘em head on. Carbon targets likely require changes to tax policy. Violence prevention is not a byproduct of just criminal law. And diversity and inclusion — in and out of the workplace — will not be achieved by a complicated math problem, nor a singular software system, nor by focusing on ethnic signals or inferences.
To achieve a diverse workforce, leaders must first want a diverse company. To achieve inclusion, the same leaders might want to exclude their current workforce altogether.