OpenAI Privacy Case Shows Misinformation Is Hard to Cure

DATE POSTED: April 30, 2024

In the fast-paced world of artificial intelligence, where chatbots like OpenAI’s ChatGPT are all the rage, a new EU privacy complaint is shining a spotlight on a pesky problem: These programs just can’t seem to stop spinning tall tales.

The complaint, filed by a European privacy rights group on behalf of an undisclosed individual, alleges that ChatGPT generated false personal information about the complainant and that OpenAI could not fully rectify the inaccuracies.

This case underscores the technical and ethical hurdles that artificial intelligence (AI) companies must overcome to address “hallucinations” — confidently asserted falsehoods that chatbots can produce alongside factual information. Experts argue that resolving this issue will necessitate understanding the complex inner workings of AI models and the challenges of teaching them to express uncertainty.

“There are a lot of factors that affect AI hallucinations, from poor and outdated training data to misinterpreting prompts,” Chris Willis, the chief design officer of Domo, who helped create the company’s AI technology, told PYMNTS. “But the key reason hallucinations are hard to solve is that they are an intrinsic feature of large language models (LLMs) — not just a bug. They’re unavoidable.”

Misinformation Dilemma

According to the European Center for Digital Rights advocacy group, also known as NOYB, the complainant — a public figure — sought information from ChatGPT regarding his birthday, only to receive consistently incorrect responses rather than being informed that the chatbot lacked the requisite data. Despite requests, OpenAI purportedly declined to rectify or delete the data, citing its inability to correct such information and failing to disclose details about the processed data, its origins, or its recipients.

NOYB has thus filed a complaint with the Austrian data protection authority, urging an investigation into OpenAI’s data processing practices and the measures implemented to ensure the accuracy of personal data handled by its large language models.

“Making up false information is quite problematic in itself,” Maartje de Graaf, data protection lawyer at NOYB, said in a statement. “But when it comes to false information about individuals, there can be serious consequences. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

OpenAI did not immediately respond to a request for comment. 

Fighting AI Hallucinations

AI hallucinations can cause three significant privacy issues to emerge, Blake Brannon, chief product and strategy officer at OneTrust, told PYMNTS. First, LLMs may misrepresent individual information, presenting false data as factual. This can lead to severe consequences in sensitive domains such as employment and healthcare.

Additionally, LLMs might produce outputs that appear to be genuine, sensitive information, potentially causing reputational harm or legal issues if acted upon.

Moreover, LLMs may inadvertently disclose personal data without consent, even if the data was intended to be anonymized, resulting in unanticipated privacy breaches. 

“These potential issues highlight the sheer necessity of robust data and AI governance,” Brannon said. “Not only that, but having proper consent mechanisms in place for the use of data in AI applications, adhering to stringent data classification standards, and maintaining compliance with privacy regulations such as the GDPR, CPRA, etc. Effective governance also relies on organizations understanding and managing AI used across their business, including by vendors.”

Making hallucinations go away is more than a simple programming problem. Guru Sethupathy, CEO of FairNow, which makes AI governance software, told PYMNTS that tackling hallucinations in LLMs is particularly challenging because these systems are designed to detect patterns and correlations in vast amounts of digital text. While they excel at mimicking human language patterns, they have no inherent understanding of which statements are true and which are false.

“Users can enhance model reliability by instructing it not to respond when it lacks confidence in an answer,” he added. “Additionally, ‘feeding’ the model examples of well-constructed question-answer pairs can guide it on how to respond more accurately.

“Finally, refining the quality of training data and integrating systematic human feedback can ‘educate’ the AI, much like teaching a student, guiding it towards more accurate and reliable outputs.”
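The abstention and few-shot techniques Sethupathy describes can be illustrated with a short prompt-construction sketch. The example below is not OpenAI’s or FairNow’s method; it assumes the OpenAI Python SDK’s chat completions interface, and the model name, system instruction and question-answer pairs are hypothetical placeholders.

```python
# Minimal sketch of two hallucination-mitigation tactics from the article:
# (1) instruct the model to abstain when it is not confident, and
# (2) supply few-shot examples of well-constructed question-answer pairs.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; the model name and example pairs are
# illustrative placeholders, not recommendations from the article's sources.
from openai import OpenAI

client = OpenAI()

# Abstention instruction: answer only when confident, otherwise decline.
SYSTEM_PROMPT = (
    "Answer only when you are confident the answer is supported by "
    "well-established facts. If you are not confident, reply exactly: "
    '"I don\'t know." Never guess personal details such as birth dates.'
)

# Few-shot pairs showing both a confident answer and an abstention.
FEW_SHOT = [
    {"role": "user", "content": "When was the Eiffel Tower completed?"},
    {"role": "assistant", "content": "The Eiffel Tower was completed in 1889."},
    {"role": "user", "content": "What is Jane Example's date of birth?"},
    {"role": "assistant", "content": "I don't know."},
]

def ask(question: str) -> str:
    """Send a question with the abstention instruction and few-shot pairs."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *FEW_SHOT,
                {"role": "user", "content": question}]
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
        temperature=0,  # lower randomness reduces, but does not prevent, made-up answers
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What is the birthday of the CEO of Example Corp?"))
```

As Willis notes above, prompting of this kind steers the model toward declining rather than inventing an answer, but it does not remove the underlying tendency to hallucinate.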

