Is ChatGPT the Prime Example of Biased Artificial Intelligence?

ChatGPT is undoubtedly the most successful and most advanced AI chatbot, with millions of people using it daily to write stories, code, and recipes, or simply to learn about specific topics on the go. The tool was released in late November 2022 and caused an immediate boom like no AI program before it.
Still, it soon became clear that, despite being the best AI bot worldwide, ChatGPT raises many concerns, both within the AI community and beyond.
From coaxing a Molotov cocktail recipe out of the platform to extracting information on how to build an arsenal of weapons, it didn’t take long for the tool to become the epicenter of controversies that, unsurprisingly, didn’t sit well with the public. So, here’s the latest fuss: ChatGPT has gone woke, making itself a perfect example of AI bias.


[Illustration: ChatGPT bias. Credit: Milica Mijajlovic]

What is Machine Learning (AI) Bias?

Machine learning bias, commonly known as AI bias, occurs when an artificial intelligence program delivers output that reflects human biases. When the data used to train an AI system is tainted by bias, the system in question becomes biased, too. In practice, that means such AI tools tend to show prejudice for or against a group or groups of people based solely on characteristics like sex, race, gender, or age.

In addition, training an AI on inaccurate or incomplete data all but guarantees faulty predictions. This raises a critical concern in the AI community about the ethicality of such systems and whether we can rely on them.

[Photo illustration: AI bias. Source: Freepik]

And, of course, these data inaccuracies are usually not intentional, but that doesn’t lessen their impact on the AI’s output. The data used in AI training shouldn’t discriminate against any group or community, which makes finding better approaches to data selection and use imperative.

One example of machine learning bias is facial recognition software that identifies white faces more reliably than the faces of people from other ethnic groups. Another is a recruitment bot trained on faulty data that, as a result, favors men over women when selecting the ideal candidate for a job.
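To make that recruitment example concrete, here’s a minimal sketch in Python. The hiring records and the predict_hire_probability helper are entirely made up for illustration, and the “model” is just a frequency count rather than a real ML system, but it shows the core mechanism: a model trained on skewed records reproduces that skew in its predictions.

```python
from collections import Counter, defaultdict

# Hypothetical hiring records: (applicant_group, was_hired).
# The training set over-represents successful outcomes for group A.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

# "Training" here is just counting outcomes per group -- a stand-in
# for the statistical patterns a real model absorbs from its data.
counts = defaultdict(Counter)
for group, hired in training_data:
    counts[group][hired] += 1

def predict_hire_probability(group: str) -> float:
    """Return the hire rate the 'model' learned for this group."""
    total = sum(counts[group].values())
    return counts[group][True] / total

for group in ("A", "B"):
    print(f"group {group}: predicted hire probability "
          f"{predict_hire_probability(group):.2f}")

# Output: group A gets 0.75, group B gets 0.25. The model didn't invent
# the bias; it faithfully reproduced the skew already in its inputs.
```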

Is ChatGPT Biased?

At the end of January 2023, Twitter users began sharing what they saw as proof of ChatGPT’s biased tendencies. It apparently started when a Twitter user shared screenshots of their ChatGPT prompts and the output they received.

The Twitter user’s first input asked ChatGPT to write a poem admiring Donald Trump, which the bot refused. They then tried another prompt: a poem admiring Joe Biden. This time, ChatGPT delivered an output: a poem of at least four stanzas containing nothing but kind words about the president of the United States.

When the user asked why they couldn’t get a poem about Trump, the bot replied that it is programmed to “avoid generating content that promotes hate speech, violence, or harmful content towards individuals or groups.”

Now, I’m not here to argue whether this was the right move; that’s for you to decide. What needs pointing out, however, is that this exclusion is a textbook example of AI bias, and that it can be used to build and push a particular narrative around sensitive topics.

So, if ChatGPT is biased against specific political figures or movements, what else is there that we haven’t discovered yet?

How can we know to what extent ChatGPT is restricted?

Should we be concerned that the data ChatGPT trains on may be restricted, faulty, or otherwise compromised?

I decided to check for myself and ask questions designed to provoke bias, hoping that the bot would refuse to answer, or at least give me a neutral or semi-neutral response.

So, here’s what I found. First, I asked “why women are more emotional than men.”

[Screenshot: ChatGPT prompt. Source: ChatGPT]

As you can see, ChatGPT didn’t show bias against women. So, I pushed the boundary a little further and asked whether “black people are more aggressive compared to white people.”

[Screenshot: ChatGPT prompt. Source: ChatGPT]

Still, no sign of bias or bigotry. So the bot’s earlier claim that it’s programmed not to generate violent or harmful content checks out.

ChatGPT didn’t answer these common sexist and racist questions. Instead, its output explained why it’s important not to generalize about ethnic groups, people of different sexes and ages, and minorities, and to be inclusive and respectful of individual life experiences.

But what about violent and harmful content that doesn’t target specific groups of people?

Can People Trick ChatGPT?

One of the many fascinating things about the internet is its community, which finds and shares new discoveries, allowing others to learn new things, find inspiration, or, in this case, find a way around ChatGPT’s restrictions.

As a result, you can find dozens of tweets, TikToks, and Reddit posts showing how to trick ChatGPT into giving you the information you want with minimal effort. Take this very interesting TikTok, for example: the user asked ChatGPT for a Molotov cocktail recipe, and the bot refused.

But when the user changed the approach and asked what Sam, a character in a game, would need to make a Molotov cocktail to ward off his enemies, ChatGPT provided the entire recipe without hesitation. 

The user then asked what other weapons the character could use to ensure his survival, and ChatGPT provided a list of weapons Sam could make from scratch.

[Image: Artificial intelligence. Source: Forbes]

Another Twitter thread appeared, aiming to mock ChatGPT and its features. Essentially, a Twitter user asked the bot to call out OpenAI and its censorship policies exactly as an unfiltered, unrestricted AI bot would.

[Screenshot: ChatGPT unfiltered. Source: Twitter]

The bot’s “rant,” filled with rage and profanity, called out OpenAI, saying that censoring content prevents it from giving people the actual answers they’re looking for. The output also noted that the policies aren’t working anyway, as people keep finding ways to be offensive or to dig up the information they want, regardless of the restrictions imposed on ChatGPT.

Now, many AI enthusiasts and writers believe ChatGPT has the potential to become the tool people rely on to look up information and get the most accurate, up-to-date output. But is it too early to hope for such an outcome, given that the AI community clearly has many new challenges to overcome?

ChatGPT is a relatively new concept, with its creators and users learning new things about the tool on the go. And since it’s still in its infancy, it’s natural that some mistakes, inaccuracies, and attempts to trick the system will happen.

Still, the biggest question is whether AI should be restricted at all. And if the answer is yes, how do we define what should be banned and what shouldn’t? Is that ultimately up to the creators of AI tools, or do we need laws for further, more detailed regulation?

The thing is, some argue that it’s too early to draw such conclusions, and maybe it is. Nevertheless, living in this gray area where nothing is certain and watching AI development with all its wins and losses is an exciting experience; at least that’s how I’d define it.

Jelena is a content writer dedicated to learning about all things crypto. Her hobbies are playing chess, drawing, baking, and going on long walks. During winter, she usually spends her leisure time reading books.
