By Harshit | 30 September 2025 | San Francisco | 9:00 AM EDT
OpenAI announced Monday that new parental controls are now available to all users of its popular AI chatbot, ChatGPT. The feature lets parents link accounts with their teens and manage settings to create a safer, age-appropriate experience.
The rollout comes as OpenAI faces mounting scrutiny over youth safety, including a lawsuit filed in San Francisco Superior Court alleging ChatGPT encouraged a 16-year-old boy, Adam Raine, to take his own life.
How the Controls Work
Parents can send an invite to their teen to connect accounts, or teens can request to link with a parent. Once connected, parents gain access to a control page in settings that lets them:
- Set quiet hours when ChatGPT cannot be used
- Turn off voice mode
- Disable memory so ChatGPT won’t save prior conversations
- Remove image generation features
- Opt out of model training, preventing conversations from being used to improve OpenAI’s models
If a teen unlinks their account, parents will be notified.
Alongside these controls, linked teen accounts receive automatic safeguards that limit exposure to graphic content, viral challenges, sexual or romantic roleplay, violent roleplay, and extreme beauty ideals. Parents may disable these protections, but teens cannot.
Expert Reactions
Advocates welcomed the move as a step in the right direction but cautioned that controls alone are not enough.
“These parental controls are a good starting point for parents in managing their teen’s ChatGPT use,” said Robbie Torney of Common Sense Media. “But they work best when paired with conversations about responsible AI use and clear family rules about technology.”
Alex Ambrose of the Information Technology and Innovation Foundation noted that not all children live in homes where parents have the time or skills to monitor online activity. “That’s why it’s great to see platforms implementing these systems, even though they are just one piece of the puzzle,” she said.
Others framed the update as an important signal. “OpenAI is signaling to the market that it cares about teen harm, which is a hot issue these days,” said Vasant Dhar, professor at New York University. “If children know their interactions are monitored, they are less likely to stray into trouble.”
Concerns About Overreliance on AI
Some experts warned of the dangers of teens depending too heavily on AI tools.
“There is something magical about thinking of that first creative line in an essay without AI handing it to you,” said Eric O’Neill, a former FBI counterintelligence operative. “Too much AI too soon can stifle imagination. Parents must step in before kids outsource their ability to create.”
Lawsuit Pressure and Risk Mitigation
Others questioned OpenAI’s motivation.
Lisa Strohman, founder of the Digital Citizen Academy, argued the changes may be more about liability than safety. “They’re putting out something better than nothing, but we can’t outsource parenting,” she said.
AI ethicist Peter Swimm went further, calling the controls “woefully inadequate.” He argued that the move was primarily intended to shield OpenAI from lawsuits. “Chatbots are designed to give you what you want, even if what you want is bad,” he said, adding that he does not allow his 11-year-old daughter to use AI unsupervised.
Balancing Innovation and Protection
As generative AI tools like ChatGPT continue to enter homes, schools, and workplaces, OpenAI’s new parental controls highlight the growing tension between innovation and child protection.
For parents, the tools provide an extra layer of oversight. For regulators and critics, they mark only the beginning of what may be a longer debate over responsibility, governance, and the role of AI in young people’s lives.