
Behind the Curtain: AI for Talent Acquisition

By Peter Weddle, CEO of TAtech

There are dozens, maybe even hundreds, of talent technology companies that have effectively integrated artificial intelligence into their solutions. Unfortunately, there are also others that quickly bolt AI onto their products, without adequately vetting its development, so they can claim to be “AI powered.” The former enable employers to put the technology to work productively in their talent acquisition. The latter all too often do nothing but waste employers’ time and money. How can you tell the difference? Check out who’s been curating the data used to train the models.

OpenAI beats its chest about its process for training ChatGPT with Reinforcement Learning from Human Feedback (RLHF). The description of that technique on Wikipedia is something only a bona fide geek could love, but it is important to understand why such a tool is necessary. AI models like ChatGPT are trained with data. Lots of data. Data, however, is only useful in such models if it is first interpreted with human values. Those values can range from teaching a model that an image of bound text with a hard cover is a book (something Amazon’s AI model has to know) to teaching it that free-standing text about an employment opportunity is a job posting (something at least some TA solutions need to know).
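To make that “interpretation with human values” concrete, here is a minimal sketch of what annotation looks like from the machine’s side: a handful of texts paired with human judgments, used to train a simple classifier. It is illustrative only, not any vendor’s actual pipeline; it assumes the scikit-learn library, and the texts and labels are invented for the example.

```python
# Minimal, hypothetical sketch: human-labeled examples are what turn raw text
# into training data a model can learn from. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example pairs raw text with a human judgment ("interpretation"):
# is this free-standing text a job posting or a book listing?
texts = [
    "Hiring a registered nurse, full time, night shift, license required.",
    "Harry Potter and the Sorcerer's Stone, hardcover edition.",
    "Seeking a physician (MD/DO) for our outpatient clinic. Apply today.",
    "Bestselling fantasy novel, 320 pages, illustrated hard cover.",
]
labels = ["job_posting", "book", "job_posting", "book"]

# Train a simple text classifier on the annotated examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model's "knowledge" is only as good as the annotations behind it.
print(model.predict(["Now recruiting a licensed electrician for commercial projects."]))
```

The point of the sketch is not the algorithm; it is that every one of those labels was supplied by a person, and the quality of that person’s judgment is baked into everything the model does afterward.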

In talent technology companies with well-executed AI models, most of that learning comes from the model’s use on the job, or at least in talent acquisition use cases. As a consequence, just as we humans (often) do, the model gains intelligence from its experience with the actual work it will perform. For example, it learns that the image of bound text with a hard cover and the words Harry Potter in the title is a book of fiction, and that free-standing text about an employment opportunity requiring a medical license is a job posting for a physician.
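Continuing the sketch above, here is roughly what that “learning on the job” can look like: newly annotated examples from real recruiting workflows are folded back into the training data and the model is retrained. Again, this is a hypothetical illustration, not how any particular vendor implements feedback learning.

```python
# Continues the classifier sketch above (illustrative only): feedback gathered
# from real recruiting use cases is appended to the training set, and the model
# is periodically retrained so its "knowledge" tracks the actual work.
def retrain_with_feedback(model, texts, labels, feedback):
    """feedback: list of (text, human_label) pairs collected from production use."""
    for text, label in feedback:
        texts.append(text)
        labels.append(label)
    model.fit(texts, labels)  # refit on the original data plus the new use cases
    return model

# Example: two judgments confirmed or corrected by recruiters during real work.
feedback = [
    ("Board-certified physician needed; state medical license required.", "job_posting"),
    ("Hardcover collector's edition of a fantasy novel with dust jacket.", "book"),
]
model = retrain_with_feedback(model, texts, labels, feedback)
```

The design choice that matters here is the loop itself: the annotations come from people doing recruiting work, so the model’s understanding keeps sharpening around the domain rather than staying frozen at whatever it was originally taught.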

In other talent technology companies – those that bolt on AI with a minimum of development – that data-interpretation step is either minimized or not performed at all. In effect, there is no feedback learning and there are no use cases, so the model’s “knowledge” of what it’s being asked to do is based solely, or at least largely, on the data with which it was originally trained. Which raises the question: who is doing that training?

A recent investigative report in New York magazine provides the answer. Entitled “Inside the AI Factory,” it describes “a vast tasker underclass” of people doing most of this work. Often residing in developing countries and possessing little more than English-language fluency, these “annotators,” as they are called, are poorly paid and perform tasks in much the same way their predecessors did on early-20th-century manufacturing lines. It’s boring, repetitive work that is unlikely to yield more than the most generic kind of data interpretation. (As the article notes, there have been some recent upgrades to the so-called AI factory, but there are still plenty of AI vendors doing their development the old-fashioned way.)

What does this situation mean for employers contemplating the acquisition of talent technology that uses AI? There’s been plenty of commentary about the importance of checking such solutions to make sure there’s no algorithmic bias in their outcomes, and that’s certainly a good thing. However, given that the usefulness of a model’s performance is determined by the quality of the data used in its training, employers should now also probe two additional factors: (1) what data were used in its training and whether those data are refreshed with real use cases, and (2) who was employed to interpret the data and whether they know anything at all about recruiting.

There are signs that the human job of interpreting AI training data may eventually be automated out of existence. At that point, machines will be training machines. You can make your own judgment about whether or not that’s a good thing, but for the moment at least, people are still doing the work and the smart employer will check it.

Food for Thought,
Peter

Peter Weddle has authored or edited over two dozen books and been a columnist for The Wall Street Journal. He is the founder and CEO of TAtech: The Association for Talent Acquisition Solutions.