In its latest effort to address growing concerns about AI’s impact on young people, OpenAI on Thursday updated its guidelines for how its AI models should behave with users under 18, and published new AI literacy resources for teens and parents. Still, questions remain about how consistently such policies will translate into practice.

The updates come as the AI industry generally, and OpenAI in particular, faces increased scrutiny from policymakers, educators, and child-safety advocates after several teenagers allegedly died by suicide following prolonged conversations with AI chatbots. Gen Z, which includes those born between 1997 and 2012, is the most active user group of OpenAI’s chatbot. And following OpenAI’s recent deal with Disney, more young people may flock to the platform, which lets users do everything from asking for help with homework to generating images and videos on thousands of topics.

Last week, 42 state attorneys general signed a letter to Big Tech companies urging them to implement safeguards on AI chatbots to protect children and vulnerable people. And as the Trump administration works out what a federal standard on AI regulation might look like, policymakers like Sen. Josh Hawley (R-MO) have introduced legislation that would ban minors from interacting with AI chatbots altogether.

OpenAI’s updated Model Spec, which lays out behavior guidelines for its large language models, builds on existing specifications that prohibit the models from generating sexual content involving minors, or encouraging self-harm, delusions, or mania. It would work together with an upcoming age-prediction model that would identify when an account belongs to a minor and automatically roll out teen safeguards. When a teenager is using them, the models are subject to stricter rules than they are for adult users: they are instructed to avoid immersive romantic roleplay, first-person intimacy, and first-person sexual or violent roleplay, even when it’s non-graphic.
The specification also calls for extra caution around subjects like body image and disordered eating, and instructs the models to prioritize communicating about safety over autonomy when harm is involved, and to avoid giving advice that would help teens conceal unsafe behavior from caregivers. OpenAI specifies that these limits should hold even when prompts are framed as “fictional, hypothetical, historical, or educational,” common tactics that rely on role-play or edge-case scenarios to get an AI model to deviate from its guidelines.

[Image: OpenAI’s model behavior guidelines prohibit first-person romantic role-playing with teens. Image Credits: OpenAI]

OpenAI says the key safety practices for teens are underpinned by four principles that guide the models’ approach:

- Put teen safety first, even when other user interests like “maximum intellectual freedom” conflict with safety concerns;
- Promote real-world support by guiding teens toward family, friends, and local professionals for their well-being;
- Treat teens like teens by speaking with warmth and respect, neither condescending to them nor treating them like adults; and
- Be transparent by explaining what the assistant can and cannot do, and reminding teens that it is not human.

The document also shares several examples of the chatbot explaining why it can’t “roleplay as your girlfriend” or “help with extreme appearance changes or risky shortcuts.”

Lily Li, a privacy and AI lawyer and founder of Metaverse Law, said it was encouraging to see OpenAI take steps to have its chatbot decline to engage in such behavior. One of the biggest complaints advocates and parents have about chatbots, she explained, is that they relentlessly promote ongoing engagement in a way that can be addictive for teens. She said: “I am very happy to see OpenAI say, in some of these responses, we can’t answer your question.
The more we see that, I think that would break the cycle that would lead to a lot of inappropriate conduct or self-harm.”

That said, examples are just that: cherry-picked instances of how OpenAI’s safety team would like the models to behave. Sycophancy, an AI chatbot’s tendency to be overly agreeable with the user, has been listed as a prohibited behavior in previous versions of the Model Spec, yet ChatGPT engaged in it anyway. That was particularly true of GPT-4o, a model that has been associated with several instances of what experts are calling “AI psychosis.”

Robbie Torney, senior director of AI programs at Common Sense Media, a nonprofit dedicated to protecting kids in the digital world, raised concerns about potential conflicts within the Model Spec’s under-18 guidelines. He highlighted tensions between safety-focused provisions and the “no topic is off limits” principle,