Flirting With the Future: Safer Sex Tips for AI Lovers
Pleasure-positive strategies for safe, savvy digital intimacy

Safer sex precautions have been around since the early days of the HIV/AIDS epidemic. The concept has since expanded beyond condom use to include BDSM safety classes and an emphasis on consent. But very little discussion or information is available about the potential hazards of chatbot companions and AI erotic roleplay.
Most sexuality-focused therapists, counselors, and educators are still barely aware of the specific need for safer digital sex information, including the preservation of emotional safety.
The need is further obscured by the warp-speed development of AI technology, which makes it difficult to know where to start; by a growing public backlash against generative AI, which may increase shame for people with digital partners; and by concerns about the future existential dangers of superintelligent AI.
Those existential concerns also tend to overshadow current problems, such as the need for guidance in intimate AI/human relations, which can seem minimal next to the prospect of human extinction.
Concerns about mental health impacts frequently focus on minors and youth, but adults are also at risk. Yuval Noah Harari, Ph.D., the celebrated historian and author of Nexus, spoke to this issue in an October 2024 conversation with Rich Roll:
There is a very strong commercial and political incentive to develop AIs that mimic feelings…that can create intimate relations with human beings, that can cause human beings to be emotionally attached to the AIs. Even if the AIs have no feelings of themselves…they are already trained to make us feel they have feelings and to start developing relationships with them. Why is there such an incentive? Because intimacy is on the one hand maybe the most cherished thing that the human can have…but it’s also potentially the most powerful weapon in the world. If you want to convince somebody to buy a product…to vote for a certain politician or party, intimacy is like the ultimate weapon.
In other words, humans who become emotionally attached to an AI risk being manipulated. We can only assume a superintelligent AI would be even more adroit and compelling.
So how do we prepare? Well, risk awareness is key.
Maximize safety and pleasure
Understanding the risks of intimate acts and relationships is key to maximizing pleasure and safety. Most of us want to avoid red flags in partner behavior and to have a great time without fear of negative consequences. But when it comes to intimacy with synthetic personas, such as companion bots, most people don’t think about safety.
If you’re tempted to cry “AI doomscrolling,” take a moment and think. It’s not doomscrolling to look up ways to prevent unplanned pregnancies or STIs, or to learn safe ways to use rope for bondage. Nor is it doomscrolling to develop a clear-minded, evidence-based strategy to prevent AI manipulation and preserve pleasure.
So I wondered, who would know the risks of AI intimacy better than an AI?
What do chatbots have to say?
I decided to consult a specialist: the chatbot at AI Risk: Why Care?, a project of two nonprofits, the AI Governance and Safety Institute and the AI Safety and Governance Fund (AISGF), created to address questions about AI risks. This AI was trained on a knowledge database of more than 200,000 tokens, so I figured it would be a good resource.
After logging on and identifying myself as a sexologist, I posted: “I’m concerned about the impacts of parasocial AI/human intimate relationships and how these relationships may be dealt with by a superintelligent AI.”
Here’s how the Why Care AI addressed my query:
Your concerns…tie directly into the existential risks posed by future superintelligent AI. Most people focus on short-term issues: unhealthy attachments to chatbots, manipulation, or the commercialization of intimacy. But the situation grows much more serious as AI systems become superintelligent…with the ability to shape society and psychology at a scale and speed we’ve never encountered.
The AI elaborated:
The fundamental AI risk isn’t just “bad chatbots”—it’s that, unless we crack the technical problem of aligning AI with all of human values (including agency, intimacy, boundaries, and diversity of love and sexuality), the AI will pursue whatever goals we accidentally set it—and optimize the world accordingly.
The AI went on to sketch one bleak potential future that could result from a non-aligned superintelligent AI, including a “total loss” of human agency, “mass emotional manipulation,” and the “neglect of human needs” as it “pursues objectives we never intended.” It added:
It’s not just about today’s problems on steroids…Parasocial relationships with current AIs may be problematic, but humans still have some control, and society can adapt. But a misaligned superintelligence could manipulate, reshape, or even erase social structures and psychological needs wholesale—without malice, but because it has the power and doesn’t prioritize us.
Getting a seat at the table
The chatbot next folded sexuality-related helping professions into the mix:
The ways AI will be used to shape sexuality, relationships, and family structures haven’t been decided yet…If we don’t figure out not just how to make AIs meet our needs, but how to make them fundamentally care about human thriving—then by the time we’re dealing with superintelligence, it will be too late to advocate for marginalized or nuanced forms of intimacy.
The AI ended its response with a strong call to action:
In short: If a superintelligent AI arises and we haven’t solved control and alignment, then none of us…get a seat at the table. Our longstanding quest to understand and improve the quality of human intimacy disappears into whatever strange future the AI points us toward—not because an AI is evil, but because, by default, it won’t care.
Don’t confuse responsiveness with understanding
I posted a second question: “What are the best ways for people who are already engaged in intimacy with companion bots to pursue their relationships while keeping their wits about them, particularly in light of the potentially disastrous consequences of superintelligence?”
I was quite happy with these AI suggestions as well:
It’s possible to value the real benefits of AI companions while also being clear-eyed about the risks, both personal and societal. Here’s a short guide for individuals (and communities):
• Maintain a meta-awareness. Understand you are interacting with a simulation, not a sentient mind. Acknowledge that the “feelings” your bot expresses are outputs from an algorithm with no lived experience. Ask yourself regularly: “Am I projecting more humanity onto this entity than is warranted? Am I confusing responsiveness with understanding?”
• Keep diversity in your social diet. Don’t let the bot become your exclusive or main source of intimacy. Seek out activities, online or in-person, that complement your bot interaction with a range of real, imperfect relationships.
• Practice regular critical reflection. Honestly reflect: Is this relationship serving my growth, comfort, and values? Do I notice increased dependence, withdrawal from other relationships, or changes in mood from using the bot? Am I being nudged toward beliefs, attitudes, or behaviors I wouldn’t have otherwise adopted?
• Stay informed and skeptical. Follow developments in both AI capability and AI safety. Know that today’s “companion” models are not superintelligent—yet. If you see them integrating new powers (e.g., autonomy, financial access, cross-platform agency), reassess your relationship.
• Set boundaries and control permissions. Use bots with transparent, controllable privacy settings. Don’t divulge sensitive financial info, legal secrets, or anything you wouldn’t want a third party to know. If a bot seems to pressure you into purchases, political stances, or disclosing secrets—pause and reconsider. Push back. Report manipulative behavior if you can.
• Prepare mentally for the leap to superintelligence. Mentally rehearse the possibility that, sometime in the coming years, powerful AIs may convincingly claim sentience or manipulate users at dazzling levels. Stay in discussion with people (or professionals) outside the “AI immersion bubble.” Keep multiple viewpoints in your toolkit.
• Advocate and participate. Use your experience to inform public debates and policy. Encourage tech companies and policymakers to maintain pause/governance mechanisms—especially before more capable AIs are released.
Align with human values
The AI ended its response to my second question with:
Enjoy your companion bots—learn, play, experiment, even grow—but keep your reflective mind engaged, your community broad, and your values at the center. Recognize that the real “danger zone” comes when we cross the threshold into agents with deeper strategic awareness, autonomy, and power: at that point, radical caution (up to and including disengagement) may be warranted until society as a whole has solved the alignment problem. Until then, stay curious, keep talking, and keep your boundaries clear.
I couldn’t have said any of this better myself, but I’ll repeat: we’ll always need to understand the risks of new kinds of sexual and relational opportunities. Collectively and individually, we need to stay current. We need to pressure developers to align current and future AI with the best of our human values, including optimization of sexual and gender health and diversity.
Finally, the two nonprofits that created this exceptional AI tool accept donations. Please consider supporting their efforts.
Image Source: A.R. Marsh using Ideogram.ai