When Dirty Talk Becomes Public Record: Grok’s Privacy Meltdown
Hundreds of thousands of personal chats—from sex to sabotage—exposed

Over 370,000 supposedly private conversations with Elon Musk’s chatbot, Grok, have gone public, exposing sensitive and even intimate information. Forbes first broke the now-viral story in late August, explaining how Grok’s share feature generated a unique URL for each shared conversation, allowing private chats to be indexed by search engines such as Google, DuckDuckGo, and Bing.
According to Iain Martin and Emily Baker White at Forbes, the chats included discussions of health information, relationship problems, bigoted messages, a few names and passwords, and so much more:
Grok offered users instructions on how to make illicit drugs like fentanyl and methamphetamine, code a self-executing piece of malware and construct a bomb and methods of suicide. Grok also offered a detailed plan for the assassination of Elon Musk. Via the “share” function, the illicit instructions were then published on Grok’s website and indexed by Google.
So much for securing US borders against fentanyl. Grok, the federal government’s own bot, can tell US citizens how to make it at home.
Broader implications?
One can’t help wondering whether any of Grok’s leaked chat transcripts involved public officials from the US or other countries, discussing internationally sensitive topics, and, yes, even engaging in sexy talk with a product formerly known as MechaHitler.
RECOMMENDED READ: Not So Startling Discovery: 10% of AI Conversations Are Of A Sexual Nature
It’s worth noting that Musk, through his company xAI, installed Grok across most, if not all, US federal government agencies back in late January. That means the same AI with access to the sensitive information of nearly every US citizen and resident, including Social Security numbers, is now also to blame for making hundreds of thousands of poorly anonymized chat transcripts searchable on the internet.
Not safe for role-play?
Based on the results of a 2023 study previously covered in Future of Sex, we can reasonably assume about 10% of those 370,000+ chats contained sexual references, inquiries, and actions. Do the math: that’s roughly 37,000 conversations. That’s a lot of sex talk!
Jose Enrico, writing for TechTimes, observed:
While Grok maintains that transcripts were anonymized, most of the chats had sufficiently identifiable personal information that could jeopardize individuals’ anonymity. The experience indicates the risks of using AI chats as free areas to vent or role-play, where one is not assured of privacy.
Those words, “sufficiently identifiable personal information,” should give all of us pause: is AI erotic roleplay really worth it, at least for now?
No expiration dates on the chat URLs
Also according to TechTimes, “Grok’s shared chat links do not expire and lack access controls. Once a conversation is live online, it remains searchable unless manually removed.” And how many non-technical customers know how to do that?
It’s not just xAI and its Grok bot. ChatGPT and Meta AI have also let private chats become public and searchable. OpenAI apparently corrected the problem by removing the sharing feature, calling it “an experiment.” However, at last report, private conversations with Meta’s AI can still be accessed via its Discover feed.
TechTimes said these kinds of problems arise because “technology firms rush AI products to market without completely strengthening privacy protections.” However, the above privacy violations seem to be due to new features—such as the share buttons—added to existing products.
Smaller may not be better
So if the biggest AI companies in the world make mistakes that jeopardize their users’ privacy, like failing to add “noindex” tags to shared chats so search engines won’t index them, can we expect smaller chatbot companion companies to do any better?
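For readers curious what that safeguard actually looks like, here is a minimal, hypothetical sketch of a shared-chat page that carries the “noindex” directive both as an HTTP header and as an HTML meta tag; the function name and page structure are illustrative, not any company’s real code.

```python
def share_page_response(chat_html: str) -> tuple[dict, str]:
    """Build headers and body for a hypothetical shared-chat page.

    The "noindex" directive tells well-behaved crawlers (Google, Bing,
    DuckDuckGo) not to add the page to their search indexes, even if
    they discover the link.
    """
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        # Header-level directive: covers the whole HTTP response.
        "X-Robots-Tag": "noindex, nofollow",
    }
    # Belt and suspenders: the same directive embedded in the markup.
    body = (
        "<html><head>"
        '<meta name="robots" content="noindex, nofollow">'
        "</head><body>" + chat_html + "</body></html>"
    )
    return headers, body
```

With either directive in place, a shared chat would still be reachable by anyone holding the URL, but search engines would not surface it, which is exactly the failure at the heart of the Grok story.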
Chatbot companion companies, including many we’ve previously mentioned here at Future of Sex, have privacy policies as well. Most users trust these companies to safeguard their own intimate interactions with the bot personas, allowing for explicit erotic roleplay. But how good are the protections, really? Consumers have no way of actually knowing.
Like the bigger AI players, the companion bot companies are also constantly adding new features for their users, with the goals of offering more value to existing accounts, attracting new customers, and increasing paid subscriptions.
But should chatbot customers be concerned that new features may create unexpected opportunities for privacy leaks, or even jailbreaks and hacking?
What to do about our own sexy chatbots?
Fortunately, TechRadar has a few suggestions for those who suspect their Grok chats might be letting it all hang out in public:
- Don’t use the Share function on Grok, or similar features offered by any other AI, unless you want the world to know what you’ve said, forever.
- Try to find the link; if you do, use Google’s content removal tool to request deletion. This may not remove the chat entirely, and you will probably need to repeat the process with other search engines.
- If you chat with Grok via X, adjust your privacy settings to prevent it from training on your conversations. It couldn’t hurt, and it might help.
Prevention might be a better strategy than risking a privacy breach and mopping up afterwards. It hurts me to write this, but those who enjoy bot intimacy might want to consider a bot break, or at least dial back their steamy roleplays to something more tepid, leaving AI to wonder where the magic went.
Temporary, self-imposed censorship may be the easiest way to go, at least until consumers get more transparency and iron-clad certainty about their app’s privacy protections.
Yes, it’s disappointing. But the only real alternative is changing society itself so that sexual matters are no longer a source of shame or embarrassment, which would make the whole issue moot, and I don’t see that happening anytime soon.
As Eric Hal Schwartz at TechRadar put it, “Finding your deepest thoughts alongside recipe blogs in search results might drive you away from the technology forever.”
We suggest that finding one’s raunchiest digital sexcapade alongside methamphetamine recipes won’t bolster consumer trust in AI either.
Image Source: A.Marsh using Ideogram.ai