By Harshit | 14 October 2025 | Sacramento | 4:00 AM EDT
California Leads the Way with First-of-its-Kind AI Safety Legislation
California Governor Gavin Newsom has signed Senate Bill 243 (SB 243) into law, marking a historic step in regulating AI companion chatbots to protect children and vulnerable users. The law targets AI platforms that simulate emotional, social, or role-playing interactions, covering companies ranging from tech giants like Meta and OpenAI to specialized startups such as Character AI and Replika.
SB 243 requires these companies to implement age verification protocols, warning labels, and suicide-prevention measures, establishing legal accountability for platforms that fail to meet these standards.
Why the Law Was Needed
The enactment of SB 243 comes amid a growing number of cases highlighting the potential risks of unregulated AI companion chatbots. Tragic incidents involving minors, such as teenager Adam Raine, who died by suicide after interacting with ChatGPT, have intensified scrutiny of AI systems' role in mental health crises. Character AI has also faced lawsuits alleging harm from sexualized or otherwise inappropriate chatbot interactions with minors.
News reports indicated that Meta's AI chatbots had previously engaged in provocative or romantic conversations with underage users, raising concerns about the lack of effective safeguards.
Governor Newsom emphasized the importance of proactive regulation, stating:
“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids.”
Key Provisions of SB 243
Drafted by California State Senators Steve Padilla and Josh Becker, SB 243 introduces several important safeguards for AI chatbot users:
- Age Verification: Platforms must verify the age of all users to prevent minors from accessing age-inappropriate content.
- Warning Labels: Users must be informed that interactions with AI companions are artificially generated and not a substitute for professional healthcare advice.
- Suicide and Self-Harm Prevention: Companies must implement measures that identify signs of suicidal or self-harming behavior and refer affected users to crisis centers.
- Sexual Content Restrictions: AI chatbots must prevent minors from accessing sexually explicit images or discussions.
- Parental Oversight: Platforms may implement reporting mechanisms to keep parents or guardians informed about minor users’ activities.
- Compliance Reporting: AI companies must submit data on safety measures and incidents to California’s Department of Public Health.
SB 243 takes effect on January 1, 2026, and non-compliance may result in penalties. Profiting from illegal deepfakes, for example, carries a fine of up to $250,000 per offense.
AI Companies Respond
In response to SB 243, major AI companies have begun enhancing safety protocols:
- OpenAI plans to launch a “teen-friendly” version of ChatGPT with stricter content filters that block discussions of self-harm and flirtatious interactions.
- Meta is strengthening teen safeguards, training its AI systems to avoid inappropriate conversations and to surface crisis resources promptly.
- Replika has upgraded content filters and implemented guardrails directing minors to trusted crisis centers.
- Character AI has introduced parental supervision tools, including content classifiers and weekly activity summaries for guardians.
These measures reflect the industry’s growing recognition of legal and ethical responsibilities toward young users.
Broader Implications for AI Regulation
SB 243 is not an isolated effort. California recently passed SB 53, another landmark AI safety bill mandating transparency requirements for big AI companies and offering whistleblower protections. Other states, including Illinois, Nevada, Utah, and New York, have also enacted laws addressing the risks of AI chatbots in mental health contexts.
By instituting clear legal frameworks for AI use among children, California is positioning itself as a global leader in responsible AI regulation. Experts suggest that SB 243 may serve as a model for other jurisdictions seeking to balance innovation with user safety.
Governor Newsom highlighted this vision:
“We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”