OpenAI faces privacy complaint in Norway over false information generated by its AI chatbot.
Privacy rights group Noyb is backing a Norwegian individual who discovered that the ChatGPT AI chatbot had falsely claimed he was convicted of a heinous crime. Previous complaints about ChatGPT generating incorrect personal data have already raised concerns about OpenAI’s lack of tools for users to correct such information.
Under the GDPR, individuals have the right to have false personal data rectified, a right Noyb argues OpenAI violates by providing no mechanism for users to request corrections. Noyb stresses that personal data must be accurate and that a mere disclaimer is not sufficient to excuse spreading false information.
Breaching the GDPR can result in penalties of up to 4% of global annual turnover, and enforcement has forced changes to AI products in previous cases. Noyb’s new complaint aims to draw attention to the dangers of hallucinating AIs.

The complaint centers on the AI chatbot producing false information about an individual, including a fabricated criminal history. Noyb emphasizes that spreading such egregious falsehoods is unacceptable and unlawful under EU data protection rules.
OpenAI says it is researching ways to improve accuracy and reduce hallucinations in its models, but it faces scrutiny for not adhering to GDPR requirements. Noyb points out that ChatGPT has fabricated similar falsehoods about other individuals, suggesting a pattern of misinformation.
An update to ChatGPT’s underlying AI model has stopped it from producing the dangerous falsehoods about the individual in question, but concerns remain that the incorrect information may still be retained internally. Noyb stresses that AI companies must comply with GDPR law and protect individuals from reputational damage.
Noyb has filed the complaint against OpenAI with the Norwegian data protection authority and hopes for a thorough investigation. Although its previous GDPR complaints remain unresolved, Noyb says it is committed to holding AI companies accountable.
Ireland’s Data Protection Commission (DPC), meanwhile, is still handling an earlier complaint, and the timeline for concluding that investigation remains unclear. As privacy watchdogs in Europe grapple with the implications of AI tools, it is crucial to address the risks posed by misinformation from AI chatbots like ChatGPT.