
Sunday, 20 April 2025

OpenAI's New Reasoning Models Show Increased Hallucination Rates

OpenAI's latest reasoning models, o3 and o4-mini, hallucinate more often than their predecessors. In the company's internal tests, o3 hallucinated in 33% of responses on the PersonQA benchmark, and o4-mini in 48%, both exceeding the rates of earlier models such as o1 and o3-mini. OpenAI is investigating the causes and notes that because these models make more claims overall, they produce both more accurate and more inaccurate statements.

Read full story at TechCrunch
