How to prevent ChatGPT conversations from appearing in Google Search

Search engines may your ChatGPT conversations, you use these steps to check and delete previously shared links.

ChatGPT delete shared links
ChatGPT delete shared links / Image: Mauro Huculak
  • Google and other search engines have been indexing and making ChatGPT conversations available in search results, which may expose sensitive information online.
  • The problem was caused by an experimental option in ChatGPT that allowed search engines to discover the links, but now the feature has been removed. 
  • You can always control your shared conversations from the “Data Controls” settings in ChatGPT and delete those chats you may no longer want others to access.

Google seems to be indexing ChatGPT chats from shared conversations using a public link, but it’s not so bad. Let me explain what’s happening, and what you have to do is check if you have any public links that can potentially appear in Google Search and other search engines, and how to delete them.

Recently, many reports have been circulating stating that Google is actually crawling those chats (with possible sensitive data) you may have shared using the “Share” option from ChatGPT. 

One of the reasons why this is happening is because of an experimental feature that OpenAI added to ChatGPT, which allows users to make specific conversations discoverable by search engines, such as Google, Bing, and others. The feature also required users to check the “Make this chat discoverable” option when creating a public link.

The (somewhat) good news is that, thus far, Google Search seems to have indexed a few thousand links (as noted by Olaf Kopp from Aufgesang GmbH), which is not a lot, but enough to raise concerns. Also, since the incident happened, OpenAI has already stated that it has removed the feature from ChatGPT, and it’s actively working to remove indexed content from “relevant” search engines.

If you use ChatGPT on the web or with the ChatGPT for Windows 11 app, or if you set ChatGPT as your default browser on Chrome or Microsoft Edge, and you’re concerned, you can always check if any of your conversations are appearing on Google Search (or any search engine), and you can also review your account to delete any potential data leak.

In this guide, I’ll outline the steps to review and delete previously shared conversations from ChatGPT to prevent them from showing up in search engines or to other people.

Delete previously shared conversation links from ChatGPT

To prevent public shared links from conversations with the ChatGPT chatbot from being indexed by Google, follow these steps:

  1. Open your ChatGPT account.

  2. Click on your profile and select the Settings option.

    ChatGPT open settings

  3. Click on Data controls.

  4. Click the Manage button for the “Shared links” setting.

    ChatGPT Data Controls Shared Links

  5. Confirm the shareable links you may have created with ChatGPT.

  6. (Optional) Open Google Search on your browser.

  7. Search for site:chatgpt.com/share + keyword to confirm if any of your conversations have been indexed by the search engine.

  8. Click the Delete button for each ChatGPT conversation you may not want to appear in public search.

    ChatGPT delete shared links

    Quick tip: You can also click the three-dots button and choose the “Delete all shared links” option.

Once you complete the steps, the conversation with the chatbot with potentially sensitive information won’t be indexed by Google.

Whenever possible, avoid using the option to share a conversation using a link. Instead, select and copy the content and paste it into a document or email to share it with other people.

If the conversation has already been indexed, after deleting it, anyone clicking the link from the results page should not be able to access the contents.

Furthermore, Christopher Penn, co-founder and chief data scientist at TrustInsights.ai, recommends not interacting with online chats as there’s a risk of prompt injection.

Prompt injection is a vulnerability where malicious text inputs trick a Large Language Model (LLM) into ignoring its intended instructions or performing unintended actions, often by overriding developer-set guidelines with user-provided commands.

About the author

Mauro Huculak is a Windows How-To Expert and founder of Pureinfotech in 2010. With over 22 years as a technology writer and IT Specialist, Mauro specializes in Windows, software, and cross-platform systems such as Linux, Android, and macOS.

Certifications: Microsoft Certified Solutions Associate (MCSA), Cisco Certified Network Professional (CCNP), VMware Certified Professional (VCP), and CompTIA A+ and Network+.

Mauro is a recognized Microsoft MVP and has also been a long-time contributor to Windows Central.

You can follow him on YouTube, Threads, BlueSky, X (Twitter), LinkedIn and About.me. Email him at [email protected].