Thursday, May 14, 2026

OpenAI Research: Why GPT-5 and Chatbots Still Provide Misinformation


File Image: ChatGPT-5 (Collected)

Staff Reporter | PNN:
OpenAI recently published a research paper examining hallucination, the problem of misinformation generated by large language models (LLMs) such as GPT-5 and the chatbots built on them.

The researchers define hallucination as “coherent but false information generated by a language model.” They noted that although the models have improved, hallucination remains a core challenge for LLMs and cannot be completely eliminated.

For example, when researchers asked a chatbot for the title of Adam Tauman Kalai’s PhD thesis, it produced three different answers—all incorrect. Similarly, asking for his birthdate resulted in equally inaccurate responses.

The researchers explained that hallucinations largely occur due to the pre-training process, where the model is trained to predict the next word accurately without verifying factual correctness. High-frequency linguistic patterns, such as spelling or punctuation, are easily learned by the model, but less familiar information—like a pet’s birthday—cannot be inferred solely from patterns.
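To make the point concrete, here is a minimal sketch (our illustration, not code from the paper) of the next-word prediction objective the article describes: the model is rewarded only for assigning high probability to the token that actually came next in the training text, with no check on whether the completed sentence is factually true.

```python
# Illustrative sketch of the pre-training objective described above.
# The model is scored purely on next-token probability, never on facts.
import math

def next_token_loss(predicted_probs, true_next_token):
    """Cross-entropy loss for a single next-token prediction."""
    # Lower loss = more probability placed on the token that actually followed.
    return -math.log(predicted_probs[true_next_token])

# Hypothetical example: a model that puts 0.9 on the word that comes next
# gets a low loss, regardless of whether the sentence it completes is true.
probs = {"Paris": 0.9, "London": 0.05, "Berlin": 0.05}
print(next_token_loss(probs, "Paris"))  # about 0.105
```

Because the objective never asks whether a statement is correct, rare facts that do not follow a learnable pattern are exactly where such a model is most likely to produce a fluent but false answer.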

As a solution, the researchers suggested changing the model evaluation framework. They stated that the current “accuracy-based evaluation” encourages models to make guesses, which increases hallucination. They recommended penalizing incorrect guesses while giving credit for expressing uncertainty.
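The following short sketch illustrates the kind of scoring rule the researchers describe. It is a simplified illustration, not the paper's exact metric, and the answers and penalty values are hypothetical: a wrong guess costs points, while admitting uncertainty scores zero instead of being treated the same as a wrong answer.

```python
# Illustrative scoring rule: penalize wrong guesses, give credit for abstaining.
def score_answer(answer, correct_answer, wrong_penalty=1.0, abstain_credit=0.0):
    """Return +1 for a correct answer, abstain_credit for "I don't know",
    and a negative penalty for an incorrect guess."""
    if answer is None:            # the model explicitly expressed uncertainty
        return abstain_credit
    if answer == correct_answer:  # correct answer earns full credit
        return 1.0
    return -wrong_penalty         # confident wrong guess is penalized

# Under plain accuracy, guessing never hurts; under this rule it can.
print(score_answer("1990-03-07", "1992-06-15"))  # -1.0, wrong guess
print(score_answer(None, "1992-06-15"))          #  0.0, abstained
```

Under an accuracy-only benchmark both behaviors look identical when the model does not know the answer, so guessing is the rational strategy; a rule like the one above removes that incentive.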

They also emphasized that merely conducting “some new uncertainty-based tests” is insufficient. Instead, the core accuracy-based evaluation system must be updated to prevent models from making blind guesses.

This research highlights that the key to reducing hallucination is training models to answer cautiously and to express uncertainty when they do not know the answer.
