Sharing Is Caring: How Users Are Unintentionally Jeopardizing Their Companies and Careers by Sharing Business Secrets with ChatGPT

Yes, GPT-4, OpenAI’s multimodal generative AI model, is here, and users are losing their minds over the new features. And who wouldn’t be? It recognizes image inputs, produces better outputs, and doesn’t “hallucinate” as much as its predecessor. It can write you an essay, debug your code, heck, it might even be able to sing you a lullaby in the near future.
Yet have you ever wondered about the consequences of giving people unsupervised access to such powerful technology?
As with any other technology we’ve built before, we don’t always fully understand how it works. As a result, at least in ChatGPT’s case, we end up sharing things like trade secrets with an AI bot.
Human-like or not, the bot knows all about the skeletons in your cupboard, too.

Reading Time: 3 minutes

Illustration: Milica M.

ChatGPT, a Brief History

GPT-4 is the newest model in OpenAI’s GPT family. It was released in mid-March 2023 but is available to ChatGPT Plus subscribers only. The free version of the chatbot still runs on the GPT-3.5 model family, and all you need to use it is an account.

The GPT-3.5 version introduced us to the benefits of AI and showed us how to use the technology to our advantage. However, it was also a prime example of AI bias, along with security and safety issues too serious to ignore.

Photo illustration: Freepik

OpenAI took user concerns seriously and began working on a more advanced version of the chatbot. Less than half a year after the initial release, OpenAI introduced GPT-4 to the market. GPT-4 brings several improvements, the most important being the ability to comprehend and analyze image inputs.

However, the new chatbot still has limitations. It occasionally hallucinates, and people will eventually “hack” it and find new ways to trick it into producing the outputs they desire. But this time, it’s not user intentions that raise concerns in the AI community.

It’s the input itself we might need to worry about now. Speaking of which, you do realize that all your prompts and attempts to catch a glimpse of humanity in the eyes of a chatbot are stored in a database, right?

Feeding the AI Beast: Should We Think Twice Before Presenting ChatGPT With Sensitive Input?

According to a Cyberhaven report from March 2023, 8.2% of employees have used ChatGPT at work, and 6.5% have pasted private company information into it. And while ChatGPT, especially the latest version, could be a helpful addition to your team and an excellent tool for boosting productivity, some companies, including JPMorgan, have been blocking ChatGPT access over concerns about data leaks.

Are those concerns justified, though?

For starters, if you visit OpenAI’s “ChatGPT General FAQ” page, you’ll notice that the company states it does review your prompts.

“As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements,” the ChatGPT FAQ states.

But that’s not the end of it. ChatGPT is a chatbot, right? And for it to evolve, it has to train on new, relevant data. So, you’ve guessed it: OpenAI has been relying on user prompts to improve ChatGPT.

Source: ChatGPT FAQ page

OpenAI has since changed its policy, which now states that user input won’t be used as training material unless a user explicitly chooses to contribute their prompts. However, ChatGPT has been on the market for over six months, during which time it has collected a considerable amount of sensitive data.

How Many People Share Sensitive Data on OpenAI’s Platform?

The Cyberhaven report on ChatGPT usage also states: “Sensitive data makes up 11% of what employees paste into ChatGPT, but since the usage of ChatGPT is so high and growing exponentially this turns out to be a lot of information. During the week of February 26 – March 4, workers at the average company with 100,000 employees put confidential documents into ChatGPT 199 times, client data 173 times, and source code 159 times.”

Photo illustration: Freepik

So, you might now be asking what counts as sensitive information in this case.

For starters, employees are sharing trade secrets with ChatGPT. Disclosing this type of information while you’re under contract can create problems for your employer, but it could also cost you your career.

Ladies and gents, quiet quitting is old-school now: just share your company’s trade secrets with ChatGPT, and you might get fired. Maybe even sued!

Employees also share client data with ChatGPT, which is often highly confidential. Patient medical records, for example, fall into that category.

Keep in mind that, as stated earlier, OpenAI’s new policy says the company won’t use your data to train its AI chatbot. However, it still collects data for misuse and abuse monitoring.

Granted, the data is deleted after 30 days, but it is still collected in the first place, which makes the new policy update far from revolutionary.

That said, be careful about the prompts you write for chatbots. Even if you didn’t allow OpenAI to collect your data for training purposes, it’s still best to keep sensitive data out of your prompts. That includes names, addresses, source code owned by your company, or anything else you wouldn’t want a stranger to get their hands on, really.
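Speaking of which, if you still plan to use chatbots at work, one practical habit is to scrub obvious identifiers from a prompt before it ever leaves your machine. Below is a minimal Python sketch of that idea; the regex patterns and placeholder labels are illustrative assumptions, not an OpenAI feature, and anything business-critical deserves proper data-loss-prevention tooling rather than a handful of regexes.

```python
import re

# Illustrative redaction patterns (assumptions, not an official list).
# Real PII detection calls for dedicated DLP tools or an NER model.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),         # email addresses
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US Social Security numbers
]

def scrub_prompt(prompt: str) -> str:
    """Replace obviously sensitive patterns before the text is sent anywhere."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@acme-corp.com or call 555-123-4567 about the Q3 client list."
    print(scrub_prompt(raw))
    # -> Email [EMAIL] or call [PHONE] about the Q3 client list.
```

The point of scrubbing on your own machine is simple: data that never gets sent can’t end up in anyone’s retention window, 30 days or otherwise.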

Jelena is a content writer dedicated to learning about all things crypto. Her hobbies are playing chess, drawing, baking, and going on long walks. During winter, she usually spends her leisure time reading books.
