ROME - Italy said on Friday it was temporarily blocking ChatGPT over data privacy concerns, becoming the first Western country to take such action against the popular artificial intelligence (AI) chatbot.
The country’s Data Protection Authority said US company OpenAI, which makes ChatGPT, had no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.
ChatGPT caused a global sensation when it was released in 2022 for its ability to generate essays, songs, exams and even news articles from brief prompts.
But critics have long fretted that it is unclear where ChatGPT and its competitors get their data or how they process it.
Universities and some education authorities have banned the chatbot over fears that students could use it to write essays or cheat in exams.
And hundreds of experts and industry figures signed an open letter this week calling for a pause in the development of powerful AI systems, arguing they posed “profound risks to society and humanity”.
The letter was prompted by OpenAI’s release in March of GPT-4, a more powerful version of its chatbot, with even less transparency about its data sources.
OpenAI said on Friday that it had “disabled ChatGPT for users in Italy”.
“We are committed to protecting people’s privacy and we believe we comply with… privacy laws. We actively work to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals,” an OpenAI spokesman said.
“We also believe that AI regulation is necessary – so we look forward to working closely with (the authorities in Italy) and educating them on how our systems are built and used,” the spokesman added.
“Our users in Italy have told us they find ChatGPT helpful for everyday tasks and we look forward to making it available again soon.”
The Italian authority imposed a “temporary limitation of the processing of Italian user data” by OpenAI and said it had launched an investigation.
Beyond the lack of a legal basis for data collection, the authority highlighted a lack of clarity over whose data was being collected.
It said wrong answers given by the chatbot suggested data was not being handled properly, and accused the company of exposing children to “absolutely unsuitable answers”.
The watchdog further referenced a data breach on March 20 where user conversations and payment information were compromised – a problem the company blamed on a bug.
Professor Nello Cristianini, an AI academic at the University of Bath in Britain, said securing user data and enforcing age limits would be easy to fix.
But the other two accusations – that the model was trained on personal data gathered without consent, and that the data was then not handled properly – were more problematic.
“It is not clear how these can be fixed any time soon,” he said.
The company has been given 20 days to respond and could face a fine of up to €20 million (S$28.9 million) or 4 per cent of annual revenue.
The runaway success of ChatGPT garnered OpenAI a multi-billion-dollar deal with Microsoft, which uses the technology in its Bing search engine and other programs.
It also sparked a gold rush among other tech companies and venture capitalists, with Google hurrying to unveil its own chatbot and investors pouring cash into all manner of AI projects. AFP