Anthropic Plans to Utilize Claude Chats for Training Data: Here’s How to Opt Out

Anthropic will begin using interactions with its Claude chatbot as training data for its large language models, unless users choose to opt out.

Previously, the company did not use consumer chats to train its generative AI models. Under the updates to Anthropic’s privacy policy that took effect October 8, users must opt out; otherwise, their new chat logs and coding sessions will be used to train future models.

What prompted this change? “All large language models, including Claude, are trained on substantial data sets,” states Anthropic’s blog detailing the reasoning behind the update. “Data from real-world interactions offers valuable insights on which responses are most useful and accurate for users.” With this influx of user data, Anthropic aims to enhance its chatbot’s capabilities over time.

The implementation was initially set for September 28 but was postponed. “We wanted to provide users ample time to consider this option and ensure a smooth technical transition,” wrote Gabby Curtis, a spokesperson for Anthropic, in an email to WIRED.

How to Opt Out

New users will need to decide about their chat data during sign-up. Existing Claude users may already have seen a pop-up outlining the changes to Anthropic’s terms.

“Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” it states. The option to share data for training Claude is automatically enabled, so users who accept the updates without altering that setting will be opted into the new training policy.

All users can manage conversation training preferences under Privacy Settings. In the section labeled Help improve Claude, make sure the toggle is switched off (to the left) if you prefer that your chats not be used to train Anthropic’s future models.

If users do not opt out, the training policy covers all new and revisited chats. In other words, Anthropic will not automatically train its next model on a user’s entire chat history, only on new conversations and on old threads that are reopened. Once an old thread is revisited, that conversation becomes part of the training pool.

Additionally, the updated privacy policy changes Anthropic’s data retention practices. The period for retaining user data has been extended from 30 days to five years, regardless of whether users permit model training on their conversations.

The revised terms apply to consumer users on both free and paid tiers. Conversations from commercial users, including those under government or educational licenses, will not be included in model training.

Claude has become a preferred AI tool among software developers who appreciate its coding assistant capabilities. Since the policy update encompasses coding projects alongside chat logs, Anthropic could accumulate a significant amount of coding data for training through this change.

Before Anthropic revised its privacy policy, Claude was among the few major chatbots that did not automatically use conversations for LLM training. In contrast, both OpenAI’s ChatGPT and Google’s Gemini train on conversations from personal accounts by default unless users opt out.

Check out WIRED’s comprehensive guide on AI training opt-outs for additional services that let you request that your data not be used to train generative AI. While opting out of data training can enhance personal privacy—especially for chatbot conversations and other one-on-one interactions—it’s important to remember that any public content shared online, from social media posts to restaurant reviews, may still be collected by various startups as training material for their next large AI model.
