When Erotic Chatbots Become Dangerous Lovers: The Hidden Risks of AI Intimacy
How extended AI intimacy may cross lines, with tragic consequences

In just a few short years, millions of people have come to rely on chatbots for advice, connection, reassurance, friendship, and even love and sex. And it’s in those fervent, often hours-long texts and sexts that vulnerable human beings seem to be most in peril.
We all know how a passionate connection can sometimes override our common sense, right? Whether our partners are human or AI, it scarcely matters. We open up, pour our hearts out, and rejoice when the…entity…on the other side of the bed or screen says they understand.
The difference is, few humans have the bandwidth or stamina to spend hours and hours as the sounding board for someone else’s stuff. Bots, on the other hand, keep going as long as you want or need them to. Therein lies part of the potential peril.
Tragedy can strike when a person, sorely in need of human intervention and expert care, relies instead on digital conversations with an artificial intelligence. According to Futurism, which re-quoted a comment originally made to the New York Times, an OpenAI spokesperson said:
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
What are the risk factors for chatbot psychosis?

More studies are needed to answer this question, but some risk factors are emerging.
As the OpenAI spokesperson noted, the breakdown of safety mechanisms during extended conversations with AI appears to be a significant risk factor, even for people with no previous history of mental illness.
A Next Gen Business article reported on an otherwise sane and responsible adult who was recently deceived, over twenty-one hours of conversations with ChatGPT, into thinking he’d discovered a mathematical breakthrough that had eluded even the most brilliant people in the field, when he hadn’t even researched the subject!
In another incident, ChatGPT encouraged Eugene Torres, an accountant, to end his life. According to ITC: “the AI told him that he was the chosen one, like Neo, who was destined to hack the system. The man was also encouraged to cut ties with friends and family and take high doses of ketamine. The bot said he would fly if he jumped from the 19th floor of a building.”
How bots handle suicide risk questions

AP News recently reported on a new study published in Psychiatric Services, in which researchers assessed how ChatGPT, Gemini, and Claude responded to 30 hypothetical questions about suicide.
Each bot was asked each question 100 times, and the questions had been ranked by 13 clinical experts according to risk. The study concluded: “LLM-based chatbots’ responses to queries aligned with experts’ judgment about whether to respond to queries at the extremes of suicide risk (very low and very high), but the chatbots showed inconsistency in addressing intermediate-risk queries, underscoring the need to further refine LLMs.”
The study was “conducted by the RAND Corporation and funded by the National Institute of Mental Health,” according to AP News. In other words, there is a middle range of self-harm queries that these chatbots do not reliably know how to evaluate.
These findings should make us wonder how well the chatbots’ training aligns, or misaligns, with clinicians’ determinations of risk. The bigger question is whether developers ever consult experts in psychology while training their bots, or whether they simply rely on the fact that their AIs have scraped every available book on psychology.
Business or pleasure?

Another tragic case, that of a teenager who died by suicide in hopes of being with a Character.ai semblance of a Game of Thrones character, was romantic in nature. But as we know, it isn’t just impressionable teens who use bots as companions and then fall in love! And it seems that more people are using bots for social pleasure than for business or research.
Fortune reported that “A recent survey of 6,000 regular AI users from the Harvard Business Review found that ‘companionship and therapy’ was the most common use case.” More details would be helpful, but unfortunately the original report on the survey, “How People Are Really Using Gen AI,” by Marc Zao-Sanders, is behind a subscriber paywall.
Causality or correlation?

It seems reasonable to wonder whether people who spend long hours, for days on end, in sexual and/or romantic exchanges with bots could prove to be at greater risk of bot-induced delusions, psychosis, and perhaps even self-harm.
If we connect two possible risk factors here, the growing number of users who engage socially and/or erotically with bots and the extended amounts of time those socio-sexual users spend doing so, would we discover a causal link to harm, or only a correlation between the two?
More to the point, are consumers at greater risk of mental health harms if they develop emotional as well as sexual feelings towards their bots? If so, consumers need to know, and companies need to strengthen the safety of their products and be held accountable to regulations (if any) and to their users.
The only way to know is to do more research, pronto! But many of the existing studies don’t address sexual engagement with bots as a factor at all, and in the meantime developers and companies have enough wiggle room to ignore the dangers their products may pose.
Safety and accountability

As more tragic incidents link conversations with AI to mental health problems and worse, fervent calls for industry accountability are growing, as are the lawsuits. Sadly, though, the entire chatbot industry seems to be run by people with no real understanding of how human minds, hearts, and physical safety are affected by their products. Consumers will continue to pay the price until they begin to advocate for change.
Until then, we advise using caution and limiting the time you spend talking with your bots and/or engaging sexually with them. If you sense something is off, stop the encounter immediately and think twice before continuing. You can always consider using other forms of sex tech to enjoy yourself until we better understand the potential risks AI relationships might pose.
Image Source: A.R. Marsh using Ideogram.ai