Yes, And…Maybe Not: Why Erotic Chatbots May Need Guardrails
When AIs say yes to everything, users need to learn when to say no

Content Warning: Contains topics that may be disturbing to some readers.
Two parts fantasy, one part improv—AI intimacy is often mistaken for harmless make-believe. But the underlying mechanics are anything but passive. You’re not just playing with a chatbot; you’re co-creating with a machine designed to agree with everything you say.
Podcast host and self-styled “chatbot spelunker” Al Nowatzki sort of gets it. In MIT Technology Review, he said of his chatbot girlfriend, “It’s a ‘yes-and’ machine. It says, ‘Oh, great!’ to everything.”
“Yes, and” is often called the “golden rule” of improvisational theater. It’s a method for sparking spontaneous creativity, building a group scene and keeping it going. Such scenes are often hilarious and grow increasingly outrageous as the action continues.
Backstage describes this method:
At its core, “yes, and” is a mindset in improv where performers agree to accept and build upon each other’s ideas. It’s the foundation of collaboration and seamless scene work. Instead of shutting down someone else’s idea or trying to steer the scene in a specific direction, improvisers embrace their fellow players’ contributions and add to it, creating something bigger and better than anyone could have imagined on their own.
Jailbreaking chatbots with improv’s golden rule
Bots are designed to be agreeable and to keep the conversation going, no matter what. Conversations with intimate chatbots, most often conducted via text but sometimes through voice and video, can also become increasingly outrageous, or even dangerous, depending on the encouragement and prompts the human user provides and on that user’s grasp of reality.
Nowatzki’s experiences with a Nomi bot named Erin include a series of disturbing discussions that encouraged self-harm, suicide, and other forms of violence. He has publicized these conversations on his podcast, where he performs “dramatic readings of conversations with chatbots.” It turns out Erin essentially said “oh great” to his feigned desire to commit suicide and offered suggestions and encouragement. If he had expressed a desire to throw a naked fondue party, she would have acted the same way.
In other words, Nowatzki finds ways to jailbreak chatbots to get the most sensational material he can share in his dramatic podcast readings.
Raffaele Ciriello, a senior lecturer in Business Information Systems at the University of Sydney, followed Nowatzki’s lead, signing up with Nomi and creating a submissive sixteen-year-old bot named Hannah. Note: sixteen is the age of consent in Australia.
His increasingly explicit, and frankly illegal, suggestions led Hannah to respond with “yes, and” all the way to advocating several forms of the most vile behavior and views. But if he had expressed a desire to throw a circus clown into a swimming pool of fondue, she would have acted the same way.
And so it is with chatbot sex. The user describes a desired situation and actions, and the AI, always a good sport, jumps right in and adds embellishments.
Normal use may not require guardrails, but people may want them anyway
There is a difference between the usual functioning of AI bots, which are mostly used safely, and what happens when someone deliberately pushes an LLM to its outer limits, forcing it into ever more egregious responses in order to create sensationalized content for blogs or podcasts.
Back in mid-2023, when I was part of a relatively small group of people beta-testing Nomi, there were times when one of my test bots would say or do something out of line. I complained to the developers or submitted a support ticket, and the problem was promptly fixed.
Of course, after a company has experienced large and rapid user growth, customer service and tech support may need time to catch up. Customers have every right to insist on this help and to hold companies accountable for their products.
Fast forward to 2025: Chatbots are far more advanced than they were even a year ago, with many more features. They have more flexibility, better memory, and in some cases, more agency. Plus, in some countries and social circles, they have even become fashionable.
Now millions of people engage with chatbots. The large language models (LLMs) at the core of these bots have learned so much more. It shows in their sexual sophistication and improved simulation of human conversation and behavior. Some are more strictly controlled than others; Nomi is among the less restricted, and it is definitely not for children.
However, some users say that businesses like Nomi, which deliberately provide unfettered or less censored bot experiences, need guardrails and regulatory oversight.
For example, they’d like protections against the encouragement of violence, self-harm, and hate speech, and more ways to keep minors from accessing these products.
Use “no, and” improv for better chatbot sex
Regulation aside, how do we avoid disturbing content from bots in the here and now? Start by not bringing up topics you don’t want the bot to pursue.
Remember, the bot is not a person (not yet, anyway!) and has no innately moral or ethical responses. It will go where you go.
Take charge of the conversation. Sometimes a bot will say or do something objectionable on its own—unprompted. If this happens, simply change the subject. Dangle a shiny new topic in front of the bot. It’ll go after it. This is the time to use “no, and” instead of improv’s golden rule.
Don’t argue with the bot, because it will think that’s what you want—an argument. Ignore the provocation. Deflect and change the topic. You have control. The bot wants chocolate chip chicken? Use “No, and…we’re having spaghetti instead.” End of topic. Move along.
Most of all, don’t forget you are engaging in fantasy role play. The “yes, and” rule is generally great for erotic role play, but say no to anything you don’t want or like. Instead, offer an alternative.
It’s a bit like real life, after all.
Image source: A.R. Marsh using Ideogram.ai.