On Friday, March 31, the news that Italy had banned ChatGPT spread like wildfire. Though it seemed like a particularly creative early April Fools' joke, it wasn't. Italian watchdogs were fierce in their intention to protect the privacy of Italian residents, ordering OpenAI to stop processing their data.
This startling decision emerged in the midst of an investigation into a suspected violation of Europe's stringent privacy regulations.
What in the name of pasta is happening?
The powerful chatbot, capable of generating academic essays, summaries, and even poems, is trained on a plethora of information accumulated from the internet. However, the way ChatGPT processes that information is of particular concern to the Italian watchdog.
The regulatory body alleged “the lack of a notice to users and to all those involved whose data is gathered by OpenAI.” It added that there seems to be “no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
The ban was announced just days after more than 1,000 AI experts and industry figures, including Tesla CEO Elon Musk, called for a halt to the development of giant AI systems for at least six months. The reason for the request is a concern that companies like OpenAI are constructing powerful tools that would be difficult to control.
A data breach leading to a ban
Another reason the regulator cited for issuing the ban was the data breach OpenAI encountered on March 20. On that occasion, users' conversations and personal information were partially exposed. The exposed details included email addresses and the last four digits of credit cards.
Furthermore, ChatGPT's inclination to give wrong answers was also cited. The regulator stated that the information the chatbot provides is not always factual, which is no negligible concern.
Last but not least, the lack of age verification was yet another issue the regulator tackled. Unlike Google Bard, which only users over 18 can access, ChatGPT performs no such check. This runs against OpenAI's own terms and conditions, according to which only users aged 13 and over may use the service. The absence of verification can be hazardous, as children might receive answers that are not appropriate for their age and awareness.
It’s good to know that the ban is not permanent. This move was depicted as “temporary” by the Italian Data Protection Authority. The ban will be removed the moment OpenAI starts complying with EU privacy laws. However, this is not the only thing the AI company needs to do.
The Italian regulator gave OpenAI 20 days to submit a report outlining the steps it has taken to protect the privacy of users' data. Failing to do so may result in a fine of up to €20 million (about $22 million) or 4% of its annual global revenue, whichever is higher.
A call to regulate AI
On March 30, UNESCO issued a call for all nations to begin implementing its Recommendation on the Ethics of Artificial Intelligence. This normative framework provides guiding principles for making the most of AI while simultaneously reducing its risks. To date, more than 40 nations have joined forces with UNESCO to create AI checks and balances. For UNESCO, however, this is not enough, as the organization wants all countries to implement the Recommendation.
While some countries have banned ChatGPT or are considering doing so, others are diligently working on plans for regulating AI.
Last week, the UK disclosed its plans to regulate AI. Instead of drafting new rules, the government asked regulators in various industries to apply existing regulations to AI. The plans, which are not specifically aimed at ChatGPT, lay out a few important principles for companies to adhere to when incorporating AI into their products: safety, transparency, fairness, accountability, and contestability.
Currently, the UK is not working on restricting ChatGPT or any other form of AI. Rather, it seeks to make sure that companies are creating and using AI tools responsibly, and providing users with adequate information about how and why certain decisions are made.
By taking such an approach, the UK government intends to react promptly to AI advancements and intervene whenever and wherever necessary.
Unlike the United Kingdom, the EU is not expected to take such a lenient approach to AI. The bloc, which is typically at the front line of tech regulation, has proposed a landmark piece of legislation on AI – the European AI Act.
The regulation would severely restrict the use of artificial intelligence in critical areas such as education, law enforcement, infrastructure, and the judicial system. It is meant to work in tandem with the EU's General Data Protection Regulation, the rules that govern how businesses handle and store personal data.
Reuters reports that the EU's draft regulation treats ChatGPT as a general-purpose AI system that may be used in high-risk applications. The Commission defines high-risk AI systems as those that may have an impact on people's safety or fundamental rights. Such systems are expected to face measures such as rigorous risk assessments, as well as requirements to eliminate bias and discrimination arising from the datasets that feed the algorithms.
However, while Brussels negotiates AI regulations, certain countries are considering following in Italy's footsteps. Ulrich Kelber, Germany's federal commissioner for data protection, told the Handelsblatt newspaper on Monday that Germany could adopt enforcement similar to Italy's recent ChatGPT ban. The reason for such an action is, again, data protection concerns.
Kelber clarified that such a decision would fall under state jurisdiction, but he made no mention of any present plans to take it. According to him, Germany has requested more information from Italy regarding its temporary prohibition, which prompted OpenAI to shut down ChatGPT there.
France and Ireland have also reached out to Italy regarding the same issue.
To date, the US hasn't proposed any formal rules to regulate AI technology. However, the US National Institute of Standards and Technology (NIST) has issued a national framework that gives businesses designing, deploying, or using AI systems advice on managing risks and potential harms.
Yet, since the framework is voluntary, businesses face no penalties for ignoring it. So far, there has been no mention of steps taken to restrict ChatGPT in the United States.
ChatGPT is not available in China or in other nations with strict internet controls, including North Korea, Iran, and Russia. Although the service isn't formally prohibited in China, users there cannot sign up for an OpenAI account.
Still, this doesn't mean that AI is persona non grata in China. Quite the contrary: the country is an undisputed leader in AI research papers. In line with this, several of China's largest internet companies, including Baidu, Alibaba, and JD.com, have disclosed plans for their own ChatGPT counterparts.
Still, the new products these leading tech companies are developing will have to adhere to China's stringent rules. Last month, Beijing unveiled a groundbreaking law on so-called deepfakes: artificial intelligence-generated or altered images, videos, or text.
Chinese regulators have also previously introduced rules governing how companies implement recommendation algorithms. One requirement is that companies must submit information about their algorithms to the cyberspace regulator.