Anthropic has made a significant update to its consumer data policy, requiring Claude users to choose by September 28 whether their conversations can be used for AI training. Previously, user chat data was deleted within 30 days, or retained for up to two years if flagged for policy violations. Now, retention can extend to five years for users who do not opt out. Business and API customers remain unaffected.
The company frames the change around user benefits, noting that shared chat data helps improve Claude’s safety, coding, and analytical capabilities. Anthropic also gains access to massive volumes of real-world conversational data, which is essential to maintaining a competitive edge over OpenAI and Google.
This update also reflects broader trends in AI data practices. OpenAI, for instance, is contending with a court order to retain all ChatGPT conversations indefinitely, highlighting the growing tension between data demands and user privacy. Because awareness of such changes is limited, many users may consent without realizing it, raising questions about whether that consent is meaningful.
Anthropic’s interface presents the choice to new users at signup, while existing users see a pop-up with a prominent “Accept” button above a smaller training-permissions toggle that is enabled by default. Privacy experts caution that this design may lead users to agree without fully understanding the change, exemplifying the ongoing friction between AI development, ethical design, and user privacy.
