Why language models hallucinate

OpenAI’s latest research into hallucinations shows that they’re not weird, random glitches. Instead, they’re probably an outcome of how reinforcement learning is used in training: as with a multiple-choice test, leaving an answer blank guarantees zero credit, while guessing at least gives a chance of scoring.

https://openai.com/index/why-language-models-hallucinate/
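
The incentive is easy to see with a back-of-the-envelope calculation. Here’s a minimal sketch (the numbers and function are mine, not from the paper) of the expected score on a benchmark that grades each answer as simply right or wrong:

```python
# Minimal sketch (illustration only): expected score on a binary-graded
# benchmark, where a correct answer earns 1 point and anything else --
# including "I don't know" -- earns 0.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points for one question.

    p_correct: the model's chance of guessing the right answer.
    abstain:   whether the model says "I don't know" instead of guessing.
    """
    return 0.0 if abstain else p_correct

# Even a long-shot guess beats abstaining under this scoring scheme.
print(expected_score(p_correct=0.25, abstain=False))  # 0.25
print(expected_score(p_correct=0.25, abstain=True))   # 0.0
```

Under that scheme a model that always guesses will outscore one that admits uncertainty, so training and evaluation end up rewarding confident fabrication.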