Sunday, 14 December 2025

OpenAI Implements New Safeguards in o3 and o4-mini AI Models to Prevent Biorisks

OpenAI has introduced a "safety-focused reasoning monitor" in its latest AI models, o3 and o4-mini, to prevent misuse related to biological and chemical threats. The system, trained to detect and block hazardous prompts, achieved a 98.7% success rate in OpenAI's internal tests. Despite these measures, concerns persist that the models could assist in creating biological weapons, prompting OpenAI to maintain human oversight alongside the automated safeguards.
Read full story at TechCrunch
