Friday, 20 March 2026

OpenAI's New Reasoning Models Show Increased Hallucination Rates

By Isha
OpenAI's latest reasoning AI models, o3 and o4-mini, exhibit higher hallucination rates than their predecessors. Internal tests found that o3 hallucinated in 33% of responses on the PersonQA benchmark, while o4-mini's rate was 48%, exceeding earlier models such as o1 and o3-mini. The company is investigating the causes, noting that the models tend to make more claims overall, which produces both more accurate and more inaccurate statements.
Read full story at TechCrunch
