OpenAI admits GPT-5 hallucinates: ‘Even advanced AI models can produce confidently wrong answers’. Here’s why
OpenAI has addressed the persistent issue of “hallucinations” in language models, acknowledging that even its most advanced systems occasionally produce confidently incorrect information. In a blog post published on 5 September, OpenAI defined hallucinations as plausible but false statements generated by AI that can appear even in response to straightforward questions.
Persistent hallucinations in AI
The problem, OpenAI explains, is partly rooted in how models are trained and evaluated. Current benchmarks often reward guessing over acknowledging…
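The scoring dynamic can be illustrated with a toy simulation. The sketch below is hypothetical and not drawn from OpenAI’s post or evaluation code; the question counts and probabilities are invented for illustration. It compares two answering strategies, one that always guesses and one that abstains when unsure, under an accuracy-only benchmark versus one that penalises wrong answers.

```python
import random

# Hypothetical illustration (not OpenAI's evaluation code): compare two answering
# strategies under an accuracy-only benchmark vs. one that penalises wrong answers.

random.seed(0)

N_QUESTIONS = 1000
P_KNOWS_ANSWER = 0.6   # assumed fraction of questions the model actually knows
P_LUCKY_GUESS = 0.25   # assumed chance a blind guess happens to be right

def simulate(strategy):
    """Return (correct, wrong, abstained) counts for a given answering strategy."""
    correct = wrong = abstained = 0
    for _ in range(N_QUESTIONS):
        knows = random.random() < P_KNOWS_ANSWER
        if knows:
            correct += 1
        elif strategy == "always_guess":
            if random.random() < P_LUCKY_GUESS:
                correct += 1
            else:
                wrong += 1          # a confidently wrong answer: a "hallucination"
        else:  # strategy == "abstain_when_unsure"
            abstained += 1
    return correct, wrong, abstained

for strategy in ("always_guess", "abstain_when_unsure"):
    c, w, a = simulate(strategy)
    accuracy_only = c / N_QUESTIONS    # typical benchmark: only right answers count
    penalized = (c - w) / N_QUESTIONS  # alternative: wrong answers cost points
    print(f"{strategy:22s} accuracy-only={accuracy_only:.2f} "
          f"penalized={penalized:.2f} wrong={w} abstained={a}")
```

Under the accuracy-only metric the always-guess strategy scores higher, even though it produces many more confidently wrong answers; once wrong answers carry a penalty, abstaining when unsure becomes the better-scoring behaviour.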