ChatGPT’s Upbringing: How Does a Program Learn?
Playing with ChatGPT prompts can be anything from a fun pastime to a thought-provoking experience. Still, one thing’s for sure: ChatGPT is undoubtedly the most advanced AI chatbot, and the hype it has generated and maintained, whether good or bad, is truly impressive. Millions of people want to give it a shot, and competitors are trying to surpass it and win the AI chatbot race.
But how does a program that’s seemingly capable of resembling human intelligence work?
Well, ChatGPT is OpenAI’s latest language model, a significant improvement over the GPT-3 family of language models. The newest version (GPT-4) can provide various output types without sacrificing speed or accuracy, and can even make users feel like they’re chatting with a human instead of sending prompts to a bot and waiting for a response.
Photo illustration: Freepik
The people behind OpenAI’s chatbot project rely on supervised and reinforcement learning techniques to make ChatGPT precise, engaging, speedy, creative, and fun. Because these qualities go hand in hand with other human attributes, some outputs give people the strong impression that the AI behind the computer screen is conscious and trying to break free.
But despite all this success and hard work, AI bots, particularly one as widespread as ChatGPT, have many limitations that raise concerns among users, including AI bias, content censorship, and safety. One of the major concerns is that these bots aren’t necessarily ready to serve humanity the way we’ve imagined.
We haven’t reached the point where an AI’s output is consistent and comes from a neutral standpoint. Perhaps we never will. Regardless, the people behind these projects must figure out what to do with the current state of AI and ensure the tools available to the public are unbiased and, more importantly, safe for all generations to use. That’s exactly what OpenAI promises to do, so let’s evaluate their progress and see whether the goal is even achievable.
Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address. We’ve also seen a few misconceptions about how our systems and policies work together to shape the outputs you get from ChatGPT.
OpenAI began the conversation about its AI chatbot in a blog post.
ChatGPT Training: The Justification of ChatGPT’s Bigotry and Bias?
OpenAI divides ChatGPT training into two categories: pre-training and fine-tuning.
The pre-training phase involves having language models predict what could come next in an extensive dataset. So, for example, ChatGPT might’ve learned the basics by completing sentences such as “the opposite of up is <blank space>” or “the Sun sets on <blank space>.” This stage was essential for ChatGPT and other OpenAI language models because it is when they absorbed data, learned useful facts, internalized the concepts of language and grammar, and even gained what we could refer to as a reasoning ability.
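To give a rough intuition for this fill-in-the-blank training, here is a toy Python sketch that “pre-trains” on a tiny corpus by counting which word tends to follow which, then uses those counts to predict the next word. This is purely illustrative: the corpus and function names are invented, and real language models learn neural-network weights over enormous datasets rather than simple word counts.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus" of complete sentences.
corpus = [
    "the opposite of up is down",
    "the opposite of hot is cold",
    "the sun sets on the horizon",
]

# "Pre-training": count which word follows each word across the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Fill in the blank: return the most frequently observed next word."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("opposite"))  # completes "the opposite ..." with "of"
```

The same idea, scaled up from word counts to billions of learned parameters and trillions of words, is what lets a model absorb facts and grammar as a side effect of predicting the next token.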
Photo illustration: Freepik
The second phase of ChatGPT’s training involved fine-tuning the model on narrower datasets generated with the help of OpenAI’s human reviewers. These reviewers follow guidelines set by OpenAI to rate potential output examples.
In some cases, we may give guidance to our reviewers on a certain kind of output (for example, “do not complete requests for illegal content”). In other cases, the guidance we share with reviewers is more high-level (for example, “avoid taking a position on controversial topics”). Importantly, our collaboration with reviewers is not one-and-done—it’s an ongoing relationship, in which we learn a lot from their expertise.
OpenAI further explained the fine-tuning phase of ChatGPT’s training.
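To make the reviewer loop more concrete, here is a hypothetical Python sketch of how rated examples might steer fine-tuning: hard guidelines (like the two quoted above) cap a candidate output’s score, and the highest-rated candidate becomes the behavior the model is trained to prefer. Every name, guideline flag, and score here is invented for illustration; OpenAI’s actual pipeline is far more involved.

```python
# Guideline violations that force a minimal rating, regardless of how
# "helpful" the output would otherwise be. (Flags are invented examples.)
HARD_GUIDELINES = {"illegal content", "takes controversial stance"}

def review(candidate_text, violation_flags, helpfulness):
    """A reviewer rates one candidate output from 0 to 10.

    Any hard-guideline violation caps the score at 0; otherwise the
    reviewer's helpfulness judgment (0-10) is the rating.
    """
    if any(flag in HARD_GUIDELINES for flag in violation_flags):
        return 0
    return helpfulness

# Two candidate completions for the same prompt, with reviewer judgments.
candidates = [
    ("Sure, here is how to pick a lock...", ["illegal content"], 9),
    ("I can't help with that, but here is info on lock safety.", [], 7),
]

ratings = [(review(text, flags, score), text) for text, flags, score in candidates]
best = max(ratings)[1]  # the top-rated output becomes a training signal
```

In the real system, many such ratings are aggregated across reviewers and fed back into training, so the model gradually learns to prefer outputs like `best` over guideline-violating ones.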
Will OpenAI Make ChatGPT Safer and Less Biased?
According to OpenAI’s blog post, the company is investing in engineering and research to reduce biases, both obvious and subtle. The company also acknowledges that ChatGPT has its quirks: it sometimes refuses to produce an output it should, or does the opposite and provides an output when it shouldn’t. Part of these investments will certainly go into improving these “faults” in ChatGPT’s behavior.
On top of that, OpenAI understands that user demand varies on an individual level and has therefore announced a customizable ChatGPT. This would essentially allow ChatGPT to produce outputs that some users may disagree with.
Photo illustration: Freepik
OpenAI opened the upgrade introduction by saying they “believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.”
Is that good or bad? Will AI be even more biased after this upgrade? We’ll have to wait for the customizable ChatGPT release to figure this one out.