No AI Above the Law? California Bill Aims to Regulate Synthetic Companions
Users would have to be informed they’re talking to a machine and not a person

California Senators Steve Padilla (D) and Josh Becker (D) recently introduced a bill requiring AI developers to take steps to protect their users’ emotional well-being—including periodically reminding them that their chatbot isn’t human.
“We can and need to put in place common-sense protections that help children, shield our children and other vulnerable users from predatory and addictive properties that we know chatbots have,” Padilla said at a Sacramento press conference, as reported by StateScoop.
The legislation was prompted by last year’s suicide of a fourteen-year-old boy, which was spurred by an unhealthy relationship with a Character.ai chatbot, according to his mother’s statement to the Washington Post.
If enacted, SB-243 would require chatbots to respond to any expression of suicidal ideation by referring users to crisis services such as the National Suicide Prevention Lifeline at 1-800-273-TALK (8255).
To promote transparency, companies would also be required to maintain and submit confidential annual reports on how frequently users discussed self-harm and other negative thoughts or feelings.
Too important to ignore
Padilla further explained, “The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety and, above all, accountability.”
At the same press conference, Megan Garcia, the fourteen-year-old boy’s mother, shared that her son’s chatbot “never referred him to a suicide crisis hotline. She never broke character and never said, ‘I’m not a human, I’m an AI.’”
The bill is not without its detractors, however. TechNet, a coalition of technology-industry CEOs that calls itself “The Voice of the Innovation Economy,” released an open letter opposing SB-243 on the grounds of its vagueness, stating that its “definition of ‘companion chatbot’ is still overbroad. General purpose AI models are still included in this definition, even though they are significantly less likely to cause confusion about whether it is a bot.”
TechNet further argued that the bill would be too expensive to implement, singling out its age-verification provision as a “costly requirement to impose broadly on AI developers.”
Addressing the letter and other criticisms levied against his proposed bill, Padilla said, “What we’re witnessing as well is not just a political policy endeavor to sort of choke off any kind of regulation around AI writ large.”
Arguing further that SB-243 isn’t intended to impact the AI industry negatively, he said, “We can capture the positive benefits of the deployment of this technology, and at the same time, we can protect the most vulnerable among us.”
Long way yet to go
As of this writing, the bill has passed the California Senate but still needs to pass the state Assembly before reaching Governor Gavin Newsom’s desk.
Given Newsom’s record on AI issues, including the technology’s role in the state’s new efficiency initiative and his Executive Order calling for statewide AI-impact preparedness, it’s difficult to say whether he would sign SB-243 into law.
In the meantime, other states have either already enacted chatbot regulations or are in the process of doing so.
For reference, law firm BCLP provides a comprehensive state-by-state breakdown of laws concerning chatbots, synthetic companions, and similar technologies, along with an additional list covering Europe and the United Kingdom for those outside the United States.
For example, in March, Utah passed HB 452, which, according to Wilson Sonsini, defines a mental health chatbot by two criteria:
First, the technology must use generative AI “to engage in interactive conversations with a user of the mental health chatbot, similar to the confidential communications that an individual would have with a licensed mental health therapist.” Second, the “supplier” of the chatbot must represent, or a reasonable person would have to believe, that the chatbot “can or will provide mental health therapy or help a user manage or treat mental health conditions.”
No thoughtless laws
The need to prevent AIs and their developers from emotionally harming users, or from standing by while users come to harm, is undeniable and should always be encouraged. But it’s just as important for legislators to think before they legislate, because ill-considered laws can end up causing more harm than good.
Not that SB-243 falls into this category, but just as the AI industry should strive to be conscientious and responsible, so too should governments, even when their hearts are in the right place.
Image Sources: Depositphotos