How to Opt Out of LLM Training Data: ChatGPT, Claude, Grok, Gemini & More (2026)
Every major AI platform trains on user data by default. This is the complete guide to opting out of all of them - platform by platform, with exact steps for 2026.
Why Every Platform Is Different
There is no single setting or universal opt-out that covers all AI platforms. Each company has its own data usage policy, its own settings location, and its own definition of what counts as training data. Some make opting out a single toggle. Others bury it in multiple menus. One major platform makes it nearly impossible.
This guide covers every major LLM platform in 2026 - what data each collects by default, exactly where the opt-out setting is, and what the setting actually protects you from.
AI Training Opt-Out Hub
Direct links to the opt-out settings for 30+ AI platforms in one place. No hunting through menus - just click, disable, done.
Open the Opt-Out Hub →

Quick Reference: All Platforms at a Glance
| Platform | Can Opt Out? | Difficulty | Default |
|---|---|---|---|
| ChatGPT (OpenAI) | ✅ Yes | Easy | Opted in |
| Claude (Anthropic) | ✅ Yes | Easy | Opted in |
| Gemini (Google) | ✅ Yes | Easy | Opted in |
| Grok (xAI) | ✅ Yes | Medium | Opted in |
| Microsoft Copilot | ✅ Yes | Easy | Opted in |
| Meta AI | ⚠️ Partial | Very Hard | Opted in |
| Perplexity AI | ✅ Yes | Easy | Opted in |
| Notion AI | ✅ Yes | Easy | Opted in |
| LinkedIn AI | ✅ Yes | Easy | Opted in |
Step-by-Step Opt-Out Instructions
ChatGPT (OpenAI)
1. Log in to chatgpt.com - if you use ChatGPT without logging in, your data is collected for training regardless of this setting
2. Click your profile icon in the top-right corner
3. Select "Settings"
4. Go to "Data Controls"
5. Toggle off "Improve the model for everyone"
⚠️ Temporary chats in ChatGPT are not used for training even without this setting. The toggle affects standard conversations only. API usage (via OpenAI API key) is excluded from training by default.
🔌 API: OpenAI does not train on API inputs or outputs by default. No action needed for API users.
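Because API traffic is excluded by default, there is no opt-out field to set in the request itself. A minimal sketch of a Chat Completions request body (the model name and prompt are placeholders):

```python
import json

# Sketch of an OpenAI Chat Completions request body. API inputs and
# outputs are excluded from training by default, so no privacy flag
# appears anywhere in the payload.
payload = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarise this paragraph."},
    ],
}

# The body would be POSTed to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <key>" header; serialised here only.
body = json.dumps(payload)
print(body)
```

The contrast with the consumer app is the point: in ChatGPT you must flip a toggle, while on the API the default already excludes training.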
Claude (Anthropic)
1. Log in to claude.ai
2. Click your profile icon and select "Settings"
3. Navigate to the "Privacy" tab
4. Disable "Help improve Claude"
⚠️ Conversations in Incognito Mode on Claude are never used for training, even without changing this setting. Claude Pro and Team plan users should still check and disable this setting manually.
🔌 API: Anthropic does not train on API traffic by default. Enterprise accounts are also excluded from training.
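The same default applies here: an Anthropic Messages API request carries only the standard headers, with no training opt-out to configure. A sketch of the request shape (model name and prompt are placeholders):

```python
import json

# Sketch of an Anthropic Messages API request. API traffic is not used
# for training by default, so the headers below are just the standard
# ones - no privacy flag is required.
headers = {
    "x-api-key": "<your-key>",          # never hard-code a real key
    "anthropic-version": "2023-06-01",  # required API version header
    "content-type": "application/json",
}
payload = {
    "model": "claude-sonnet-4-5",  # placeholder model name
    "max_tokens": 256,             # required field on this endpoint
    "messages": [{"role": "user", "content": "Hello"}],
}

# Would be POSTed to https://api.anthropic.com/v1/messages; serialised only.
body = json.dumps(payload)
```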
Gemini (Google)
1. Go to myaccount.google.com
2. Click "Data & Privacy" in the left sidebar
3. Scroll to "History settings" and find "Gemini Apps Activity"
4. Click it and toggle off "Gemini Apps Activity"
5. Optionally delete existing activity using the manage activity option on the same page
⚠️ Google may retain conversation data for up to 72 hours after an interaction even with this setting disabled, for safety and abuse monitoring. Google Workspace (business and education) accounts may have separate settings controlled by the account administrator.
🔌 API: Google does not train on Gemini API inputs or outputs by default.
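As with the other providers' APIs, the request body is just the prompt - no privacy flag exists or is needed. A sketch of a Gemini generateContent request body (prompt text is a placeholder):

```python
import json

# Sketch of a Gemini API generateContent request body. API inputs and
# outputs are excluded from training by default, so the payload carries
# nothing but the prompt contents.
payload = {
    "contents": [
        {"parts": [{"text": "Explain this error message."}]},
    ],
}

# Would be POSTed to
# https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent
# with an API key; serialised here only.
body = json.dumps(payload)
```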
Grok (xAI)
1. Setting 1 - X platform data: Log in to x.com, go to Settings and Privacy, select Privacy and Safety, find the Grok or xAI data sharing section, and disable the option to use your X data for Grok training
2. Setting 2 - Grok conversations: Open Grok directly on X, find the settings or privacy option within the chat interface, and disable "Improve the model" or the equivalent conversation training toggle
3. Both settings must be disabled - disabling only one still allows the other type of data to be used
⚠️ Grok has access to your entire X history including posts, likes, and interactions by default. Even after opting out, data already used in past training cannot be removed. For a detailed walkthrough of both settings, see our dedicated Grok opt-out guide.
Microsoft Copilot
1. Go to account.microsoft.com
2. Navigate to "Privacy" then "Privacy dashboard"
3. Find "AI and Copilot" settings
4. Disable the option to use your interactions to improve Microsoft AI products
5. For Microsoft 365 Copilot in enterprise: your IT administrator controls these settings - contact them directly
⚠️ Consumer Copilot and Microsoft 365 Copilot have separate data controls. If you use both, you need to check the settings in both places.
Meta AI
1. Search for "Object to your information being used for AI at Meta" - this is a form on Meta's website
2. Fill in your name, email address, and country
3. In the reason field, state that you object to your personal data being used for AI model training
4. Submit the form and wait for Meta to review your request - this is not an automatic opt-out
⚠️ Meta does not offer a simple toggle. Your Facebook and Instagram posts, interactions, and public content are used to train Meta AI by default. Since December 2025, AI chat interactions are also used for ad targeting with no opt-out available for that specific use. Meta reviews objection requests individually and approval is not guaranteed. EU users can invoke Article 21 GDPR rights for a stronger legal basis.
Perplexity AI
1. Log in to perplexity.ai
2. Click your profile icon and go to "Settings"
3. Find "AI Data" or "Privacy" settings
4. Disable the option to use your queries and interactions for model improvement
⚠️ Perplexity's data controls have changed multiple times. If you cannot find the setting at the path above, check the full Settings page for any data usage or privacy toggle.
Notion AI
1. Go to notion.so and open Settings & Members
2. Click "My profile" or "My settings"
3. Find the "Privacy" section
4. Disable "Allow Notion to use my content to improve AI features"
⚠️ Notion AI processes your workspace content to generate responses. Workspace admins on paid plans can also control AI data settings at the workspace level - individual user settings may be overridden by admin policy.
LinkedIn AI
1. Go to LinkedIn Settings & Privacy
2. Click "Data Privacy"
3. Find "Data for Generative AI Improvement"
4. Toggle it off
⚠️ This setting covers LinkedIn's AI writing assistant, job application tools, and other AI-powered features. It is separate from the general LinkedIn data settings and needs to be disabled independently.
What Opting Out Actually Protects - and What It Does Not
Opting out of AI training stops your future data from being used to train new model versions. It does not remove your data from models that have already been trained. It does not stop platforms from storing your conversation history for other purposes - most platforms retain logs for safety monitoring, legal compliance, and abuse detection regardless of your training preference.
It also does not protect you from platforms using your data for purposes other than training - such as personalisation, ad targeting, or product analytics. Each of those uses has its own separate settings, which vary by platform.
The most important habit regardless of opt-out status: never share genuinely sensitive information - business secrets, personal health details, financial data, passwords - with any AI platform. Opt-out settings can change, companies get acquired, and privacy policies get updated without notice.
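One way to make that habit mechanical is a rough pre-send scrub. A minimal sketch - the patterns below are illustrative, not exhaustive, and a regex will miss most genuinely sensitive content (names, health details, business context):

```python
import re

# Rough pre-send scrubber: masks obvious email addresses and long digit
# runs (card/account-style numbers) before text is pasted into an AI chat.
# Illustrative only - it cannot catch secrets that don't match a pattern.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[NUMBER]"),
]

def scrub(text: str) -> str:
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com, card 4111 1111 1111 1111."))
```

Treat a tool like this as a seatbelt, not a guarantee: the only data that is definitely safe from a policy change is data you never submitted.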
EU Users: Your Additional Rights
If you are based in the EU or EEA, GDPR gives you the right to formally object to your data being processed for AI training purposes under Article 21. This is a stronger protection than a standard opt-out - the company must either comply or demonstrate compelling legitimate grounds for continuing. Several platforms have been forced by EU data protection authorities to pause AI training on EU user data entirely.
To exercise this right, send a written objection to the privacy contact for each platform stating that you object under Article 21 GDPR. For platforms with easy opt-out toggles, using the toggle is sufficient. The formal GDPR objection route is most useful for platforms like Meta where no simple opt-out exists.
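As a starting point, a short objection notice can be generated from a template like the following. The wording is a sketch, not legal advice, and every value passed in is a placeholder you must replace with the platform's actual privacy contact and your own details:

```python
# Sketch of an Article 21 GDPR objection notice - a starting point only,
# not legal advice. All values filled in below are placeholders.
TEMPLATE = """\
Subject: Objection to processing for AI training (Article 21 GDPR)

To the Data Protection Officer of {company},

Under Article 21(1) GDPR, I object to the processing of my personal data
for the purpose of training AI models. Please confirm in writing that my
data is no longer processed for this purpose.

Account email: {account_email}
Date: {date}
"""

notice = TEMPLATE.format(
    company="Meta Platforms Ireland Ltd",  # example recipient
    account_email="you@example.com",       # placeholder
    date="2026-01-15",                     # placeholder
)
print(notice)
```

Keep a copy of what you sent and when: the company's obligation to respond runs from the date it receives your objection.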
For a dedicated step-by-step guide to opting out of Grok and xAI data training specifically, see how to opt out of Grok xAI data training.
Cybersecurity professionals building free privacy tools for the 2026 compliance landscape.