
What Is ChatGPT & What Isn’t It?

An excerpt from the forthcoming TAtech book, "ChatGPT Undressed & Unadorned: The Truth About ChatGPT in Talent Acquisition" by Alexander Chukovski, CEO of Crypto-Careers.com

In the following chapters, we will cover some straightforward basics that will enable you to understand the use cases and risks of ChatGPT. Don’t worry; nothing too technical - and you can skip the occasional code snippet without losing the thread.

How Does ChatGPT Work?

You have probably already heard that ChatGPT uses a large language model called GPT (currently, ChatGPT can use GPT-3.5 and GPT-4). Large language models are trained on enormous amounts of text data (in the case of OpenAI, we don’t know exactly which data or how much, which is a problem, but we will get to that) with a straightforward task: predict the next word in a given sequence. However, this is just a tiny part of the magic.
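If you are curious what "predicting the next word" means in practice (and feel free to skip this), here is a deliberately tiny sketch in Python. It counts which word follows which in a handful of made-up sentences; GPT does something vastly more sophisticated, but this simple idea is the heart of the training objective.

    from collections import Counter, defaultdict

    # A toy corpus; real models train on hundreds of billions of words.
    corpus = (
        "the recruiter reviewed the resume . "
        "the recruiter scheduled the interview . "
        "the candidate accepted the offer ."
    ).split()

    # Count which word follows which - the simplest possible version
    # of "predict the next word in a sequence".
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the word seen most often after `word` in the corpus."""
        return next_word_counts[word].most_common(1)[0][0]

    print(predict_next("the"))        # -> "recruiter"
    print(predict_next("recruiter"))  # -> "reviewed"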

To make great predictions - or write excellent text - the model needs two more components:

First, it needs to capture the meaning of words. GPT uses the transformer architecture to capture as much meaning about a word as possible by weighing it against the surrounding words of the input. This is the first component that made these models great at generating text. When you ask a GPT model to create something, it can take very long inputs into account contextually and thus, statistically speaking, generate persuasive text. This is nothing new - the transformer has been used in Google's BERT since 2018 and was one of the leading AI models Google used to understand search intent until last year. (BERT is a method of pre-training language representations. Pre-training refers to how BERT is first trained on a large source of text, such as Wikipedia.)5
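To make "using the surrounding context" concrete, here is an optional snippet that asks the publicly available BERT model to fill in a blank. It assumes the open-source Hugging Face transformers library (plus PyTorch) is installed, and it downloads the model on first run. Notice that BERT weighs the words on both sides of the blank - exactly the contextual trick described above.

    # Requires: pip install transformers torch
    # Downloads the public bert-base-uncased model on first run.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # BERT uses the words on BOTH sides of [MASK] to pick a likely filler.
    for prediction in fill_mask("The recruiter scheduled a [MASK] with the candidate."):
        print(prediction["token_str"], round(prediction["score"], 3))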
However, optimizing the model purely to predict the next token or chunk of text causes unintended behaviors. That is why GPT often makes up facts, generates biased text, or fails to follow the user's intentions.

This is one of the critical areas where ChatGPT somehow improved. I say "somehow" because, right now, the AI community agrees that no one knows (yet) how to eliminate the hallucinations of language models.

So here comes the second component of ChatGPT.

OpenAI researchers discovered that the model must learn from both good and bad predictions. So, they collected a lot of training data in which their team queried GPT and provided feedback on the output the model generated. This is part of the so-called RLHF, or Reinforcement Learning from Human Feedback. In RLHF, humans take multiple question-answer pairs and rank the answers by quality. These rankings are used to build a reward function, which is then used to optimize the GPT model with a reinforcement learning algorithm.
This is where a lot of the ChatGPT magic happens, and it is one of the areas where we need more visibility into what data OpenAI used and who evaluated it.
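For readers who want to peek under the hood, the following toy sketch shows the pairwise-ranking idea behind a reward function. It is my simplification, not OpenAI's pipeline: the "reward model" here is just a weighted bag of words, nudged so that answers humans preferred score higher than answers they rejected.

    import math
    import random

    # Toy "human feedback": for each pair, annotators preferred the first
    # answer over the second. Real RLHF collects many thousands of rankings.
    preferences = [
        ("the role requires python experience", "idk lol"),
        ("we offer remote work and a clear career path", "stuff happens"),
        ("the interview has two stages", "the moon is made of cheese"),
    ]

    # A crude "reward model": score an answer as a weighted bag of words.
    weights = {}

    def score(answer):
        return sum(weights.get(word, 0.0) for word in answer.split())

    # Pairwise training: nudge the weights so that preferred answers
    # score higher than rejected ones.
    random.seed(0)
    for _ in range(200):
        chosen, rejected = random.choice(preferences)
        # Probability the reward model agrees with the human ranking:
        p = 1.0 / (1.0 + math.exp(score(rejected) - score(chosen)))
        for word in chosen.split():
            weights[word] = weights.get(word, 0.0) + 0.1 * (1.0 - p)
        for word in rejected.split():
            weights[word] = weights.get(word, 0.0) - 0.1 * (1.0 - p)

    # A helpful answer now outscores a useless one:
    print(score("we offer remote work"), ">", score("idk lol"))

In the real system, the reward model is itself a large neural network, and its scores drive a reinforcement learning algorithm that fine-tunes GPT.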

Finally, it is crucial to note that although ChatGPT is a lot better at generating text, no one in the field has solved the hallucination problem - something that Sundar Pichai, the CEO of Google and Alphabet, told an interviewer recently.

What ChatGPT Is Not

As this book aims to provide you with a hype-free overview of the use cases of ChatGPT, we have to talk about what ChatGPT is not.

Let’s start by clarifying that ChatGPT is not the new Google. Variations of this claim were made many times after the technology's first release and, of course, got a lot of traction. Let’s look at why it doesn't hold up.

Fast forward to today, when language models (LMs) have grown into large language models (LLMs) such as GPT-3, and ChatGPT is tearing up LinkedIn and Twitter. ChatGPT is impressive, but we should be conscious of its limitations:

  1. LLMs can generate text that sounds very realistic but states wrong facts.
  2. Humans can express uncertainty; LLMs cannot. This deficiency becomes very important when they are used to transmit information.
  3. LLMs exist in a closed time frame. Whereas Google can index information in (almost) real time, LLMs are trained on large chunks of text created within a fixed time frame. Add the length of the training process on top of that, and it is improbable that we will have LLMs with access to recent information any time soon. If you know these limits, this is generally not an issue, but we are far from ChatGPT substituting for search.
  4. The knowledge of the model is limited to the data it has been trained on. This includes both the time constraint mentioned above and the quality constraints of the data. Reviewing all available training data is impossible, which will always lead to artifacts or bias in the model's results.
  5. LLMs lack logic. Their linguistic finesse is impressive, but the models do not reason the way humans do.

Some other limitations are worth mentioning: models sometimes require additional training or lack training data; training runs can be very long, making retraining with new data expensive and economically inefficient; and prompts and replies are subject to token limits.
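Token limits are easy to see for yourself with tiktoken, OpenAI's open-source tokenizer (the snippet assumes it is installed via pip install tiktoken). Words and tokens are not the same thing, and models cap the combined token count of the prompt and the reply.

    # Requires: pip install tiktoken (OpenAI's open-source tokenizer)
    import tiktoken

    encoding = tiktoken.get_encoding("cl100k_base")  # the GPT-3.5/GPT-4 encoding

    job_ad = "Senior recruiter needed. Remote-friendly role. Apply via our careers page."
    tokens = encoding.encode(job_ad)

    print(len(job_ad.split()), "words ->", len(tokens), "tokens")
    # Context windows are measured in tokens, prompt and reply combined,
    # so long job descriptions may need to be trimmed or summarized.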

Don’t get me wrong - I am not trying to downplay the power of this technology. I've used it in my own work, and its contributions have been both substantial and quantifiable. To achieve that outcome, however, it's important to know both what the technology is and can do, and what it isn't and cannot do.

Note: All sources are identified and documented in the book.