
Recruiter’s New Job: Parenting AI

By Peter Weddle, CEO, TAtech

The story is not an uncommon one. Mom and Dad are sitting in the living room with the grandparents and grandkids when one of the youngsters proudly – and to their parents’ horror – shouts out a curse word or some other colorful comment they’ve picked up around the house. It’s the predicament of more than a few parents, and now it’s the challenge facing recruiters. How do you keep your kids – and your technology – from repeating your own bad habits?

Now, let’s be clear. Artificial intelligence is already widely used by recruiting teams and will be even more so in the years ahead. However, whether we’re talking about ChatGPT or any other large language model, the technology is not natively intelligent. Not yet, and maybe not ever. Right now, the state-of-the-art is intelligence that’s schooled. These models learn what we humans teach them. And, there’s the rub.

It takes tens of millions or more data points to create a useful large language model. These inputs are the information, comments, opinions, accusations, assumptions, images and ideas – crazy and otherwise – that we humans have posted online. And, just as parents often have a hard time keeping up with their kids’ social media practices, so too do AI developers struggle to keep an eye on what data their particular model is consuming.

Indeed, as I discussed in an earlier post, the interpretation of raw data – in other words, the analysis that tells a machine what a word means – is often performed by individuals with little or no expertise in the subject matter addressed by the data. These individuals can tell the model that a yellow, oblong object is a banana, but in the vast majority of cases, they wouldn’t know (or care) that a particular word, phrase or image could be hurtful or even prejudicial to a certain class of people. That means the models aren’t ignorant but, like many kids, naïve – they don’t have an appreciation for context or, in many cases, for the impact they could have.

So, what are recruiters to do?

Well, as much as it will be unwelcome news, recruiters are going to have to parent their AI. I realize they’re already overloaded, not with soccer games and ballet lessons, but with requisitions that just keep on coming. (And maybe with soccer games and ballet lessons too.) But, just as children have to be nurtured into maturity, so too do AI models. And the good news is that these models are more attentive than kids. As most parents will attest, youngsters have to be reminded again and again (and again) to correct their bad habits, but once you teach an LLM correct behavior, it will never forget.

But what does parenting AI actually entail?

As with kids, LLMs pick up all kinds of things from their surroundings. So, recruiters are going to have to teach them right from wrong. That means they must:
• Review everything AI-based models produce for the recruiting team, whether it’s job postings, branding messages, career site content, interview invitations, offers or rejections; and
• Correct any words, phrases, or images that are inappropriate, poorly phrased or, worst of all, counter to their culture, their policies, or state and federal regulations.

It’s not an especially taxing job, but it is an essential one. While kids’ off-color remarks can be embarrassing, the embarrassment usually doesn’t last very long. Inappropriate machine language, on the other hand, can cause long-lasting damage to both a company’s employment brand and its recruiting efforts. Therefore, as companies add more and more AI to their tech stacks, it’s important that they give recruiters the time and priority to be good parents for those AI applications.

Food for Thought,
Peter

Peter Weddle has authored or edited over two dozen books and been a columnist for The Wall Street Journal. He is the founder and CEO of TAtech: The Association for Talent Acquisition Solutions.