Friday, 29 August, 2025
Anthropic Will Train Claude on Your Chats by Default

Anthropic has updated its consumer terms: starting September 28, Claude users on Free, Pro, and Max plans must actively opt out if they do not want their new or resumed chats and coding sessions used to train AI models. Opting in extends data retention to five years, while opting out keeps the standard 30-day retention period. The consent prompt pairs a prominent "Accept" button with a much subtler toggle for training permissions, raising privacy and consent concerns.
Read full story at TechCrunch