Companion Bot Companies Quake as a New California Law Targets Their Products
Will protecting minors overprotect adult users?

As of January 1, 2026, California may become “the first state to require AI chatbot operators to implement safety protocols for AI companions and hold companies legally accountable if their chatbots fail to meet those standards,” according to an article just published in TechCrunch.
The proposed law, passed by the California State Senate earlier this month, is primarily aimed at protecting minors who use AI, though other users seem to be included.
SB-243 Companion Chatbots contains a helpful definition of the targeted products:
“Companion chatbot” means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.
The law excludes customer service, business, and research bots; video game bots; and bots in stand-alone consumer devices that provide a “speaker and voice command interface.” Siri and Alexa are safe, for now.
California Governor Gavin Newsom has until October 12th to either sign or veto the law.
Guardrails and a right to sue

California Bill SB-243 was introduced in January 2025 by Senators Padilla and Becker and amended in both the State Assembly and Senate.
In announcing the bill, a February 3rd press release from State Senator Steve Padilla’s office referenced examples of child deaths and other harms directly attributable to chatbot use, and said the bill would “require program developers to implement critical safeguards to protect children and other impressionable users from the addictive, isolating, and influential aspects of artificial intelligence (AI) chatbots.”
For example, the proposed law requires regular reminders—during use—that the companion bot is artificial, and not human:
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, require an operator of a companion chatbot platform to issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
There are additional requirements when the chatbot operator knows a user is a minor, such as preventing “sexually explicit conduct” in conversations and content as defined in Section 2256 of Title 18 of the United States Code.
Protocols to prevent self-harm

The bill also requires operators to:
…prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, as specified, and would require an operator to publish details on that protocol on the operator’s internet website.
This proposed law could well serve as a model for similar legislation in other states.
However, it’s unclear what happens, and who is held responsible, if a user jailbreaks a customer service, business, or research bot like ChatGPT, or compromises its safety guardrails, to make it function as a companion bot.
Science on their side

Senator Padilla’s February press release also referenced an article written by Nomisha Kurian, a Cambridge sociologist. Last year Kurian made an evidence-based case for developmentally appropriate, “child-centred AI design and policy recommendations to help make large language models (LLMs) utilised in conversational and generative AI systems safer for children.”
Citing studies showing minor children are accessing the internet—and companion bots—far more frequently than their parents know, as well as numerous studies of AI impacts on children and teens, Kurian makes the point that children often trust a human-seeming bot more readily than adults do. As a result, children may reveal too much personal information and be more vulnerable to manipulation, deception, and hostile messages from AI.
While focusing on underage users, Kurian wrote, “Adults, too, are vulnerable to risks such as emotional manipulation, which suggests the need for further research on safe design for adult-learners interacting with anthropomorphised LLMs.”
Kurian added, “Balancing the appeal of human-like personalities and protecting users from being unduly influenced by that very appeal suggests the tightrope that AI design navigates in a high-pressure economy.”
SB-243 is well-intentioned, but questions remain

While SB-243 is reasonably written and advocates necessary reforms from an industry that essentially leaves “the users to serve as the test subjects as developers continue to refine the modeling parameters,” according to the Padilla press release, some aspects of the law remain vague.
The biggest question has to do with the kind of age verification measures SB-243 requires. The Woodhull Freedom Foundation has taken a strong and well-reasoned position against age verification laws and in favor of First Amendment rights.
Instead, the Woodhull Freedom Foundation says “content restrictions on individual devices or networks allow for tailored and effective parental control without exposing the general public to censorship and potential data breaches.”
Another important area needing clarification concerns SB-243’s inclusion of people who are not minors but who are deemed “vulnerable” or “impressionable.” The criteria for being considered vulnerable or impressionable are not specified in the proposed law, or in discussions about it, but there seems to be an assumption that this law will also protect those people, even though we don’t know who they are.
Finally, Section 22602(c)(3) says that if a chatbot operator is aware its user is a minor, it must:
Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
Hopefully this will not result in chatbot companies imposing widespread bans that prevent consenting adults who wish to do so from engaging in erotic roleplay with AI—as Replika did in 2023.
Senator Padilla has stated, “Safety must be at the heart of all developments around this rapidly changing technology. Big Tech has proven time and again, they cannot be trusted to police themselves.”
We agree, and stand firmly in favor of AI safety and ethical measures—especially when it comes to protecting children and teens—though adults who freely consent to access AI-created sexual material should not be a side casualty. Let’s hope SB-243 does not have that kind of impact.
Image Sources: A.R. Marsh using ideogram.ai.