
OpenAI, the company behind ChatGPT, is now facing a major privacy complaint in Europe after its chatbot generated false and defamatory information about an individual. The complaint, filed with the support of the privacy advocacy group Noyb, highlights one of the most pressing concerns with artificial intelligence: its tendency to fabricate information.
The case began when a Norwegian man discovered that ChatGPT had falsely claimed he had been convicted of murdering two of his children and attempting to kill a third. The fabrication raised concerns about how AI systems handle personal data, especially under Europe’s General Data Protection Regulation (GDPR), which requires companies to ensure that the personal data they process is accurate.
Noyb’s legal expert Joakim Söderberg emphasized that OpenAI’s approach to handling AI hallucinations is not acceptable. OpenAI includes disclaimers stating that ChatGPT may generate incorrect information, but Söderberg argued that a disclaimer does not excuse spreading false and damaging claims about real individuals.
“You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” he said.
This situation has once again brought the issue of corporate responsibility to the forefront as AI technologies continue to develop.
Artificial Intelligence Mistakes Under Scrutiny
This case is not an isolated incident. In an earlier episode, an Australian mayor was effectively “digitally erased” from ChatGPT after threatening legal action. The AI had falsely claimed that he had been convicted of foreign bribery, leading him to demand that OpenAI either correct the false information or remove his data entirely. OpenAI’s solution was to filter out all references to his name, but in doing so it also suppressed accurate information about him, as well as information about others who share his name.
These incidents expose a significant flaw in AI-generated content: there is no guarantee of accuracy when real people’s personal data is involved. AI systems like ChatGPT are designed to predict plausible text from patterns in vast amounts of training data, and they sometimes generate entirely false statements, known as hallucinations. When these hallucinations involve real individuals, they can cause serious reputational damage and even legal consequences.
What This Means for OpenAI and AI Regulations
The GDPR is one of the strictest data privacy laws in the world, and violations can result in fines of up to €20 million or 4% of a company’s global annual revenue, whichever is higher. If OpenAI is found to have violated GDPR rules, it could face significant financial penalties, as well as pressure to change how ChatGPT processes and verifies personal data.
Regulators are now being forced to confront whether existing laws are strong enough to govern AI. While GDPR was created to regulate digital privacy and data protection, it was not originally designed with AI-generated content in mind. This case could set a precedent for how companies are required to ensure AI-generated information is accurate, especially when it involves real people.
Can AI Ever Be Trusted?
As artificial intelligence becomes more deeply integrated into daily life, its reliability is becoming a growing concern. ChatGPT is already used in customer service, legal assistance, healthcare advice, and journalism. If AI continues to generate false but convincing statements, the consequences could extend far beyond personal defamation cases.
The complaint against OpenAI is a warning sign for the AI industry. If companies want to maintain public trust, they will need to develop stricter safeguards to prevent AI from spreading false information. Otherwise, AI could quickly become a legal and ethical minefield, with devastating consequences for both businesses and individuals.
Prepared by Navruzakhon Burieva