
OpenAI's Teen Safety Policy: Good Intentions, Dangerous Execution
OpenAI recently published its approach to teen safety, outlining how it plans to handle the tension between privacy, freedom, and the protection of minors. While the intentions are clearly good, some aspects of the policy raise serious concerns about unintended consequences.
The Problem with Automatic Parent Contact
The most troubling element is this line: "If an under-18 user is having suicidal ideation, we will attempt to contact the users' parents and if unable, will contact the authorities in case of imminent harm."
This policy assumes parents are always safe allies in a teen's mental health crisis. But that's not always the case. Consider scenarios where:
- A teen's suicidal thoughts stem from family conflict or abuse
- LGBTQ+ youth face rejection or conversion therapy from their families
- Cultural or religious factors make mental health stigmatized or dangerous to discuss
- Family dynamics themselves are contributing to the crisis
In these situations, parental contact doesn't provide safety—it escalates danger. For vulnerable teens, this policy could transform a cry for help into a family crisis or worse.
The Police Response Problem
When parents can't be reached, the policy defaults to contacting authorities. This raises additional concerns, particularly for teens from marginalized communities. Since 2015, police in the US have shot 112 children—most of them minorities, most unarmed, most running away. That's nearly 19 children per year through 2021.
Research suggests that roughly a quarter of the people killed by US police are experiencing a mental health crisis, and that people in crisis are far more likely to die in a police encounter than those who are not. For many young people, especially teens of color, law enforcement contact creates more problems than it solves.
Missing the Existing Infrastructure
What's particularly puzzling is that established crisis intervention resources already exist. Organizations like Crisis Text Line, Trans Lifeline, TrevorLifeline, and the National Suicide Prevention Lifeline have trained counselors who understand when family involvement helps and when it doesn't.
These organizations have spent decades developing protocols that actually protect vulnerable teens while providing crisis support. They know how to navigate complex family dynamics and cultural contexts.
The Risk Management Failure
This policy reveals a fundamental breakdown in corporate risk assessment. Any competent risk management process would have started with basic threat modeling: who could be harmed by automatic family contact?
The answers are obvious: LGBTQ+ teens in hostile households, victims of family abuse, young people whose mental health crises stem from family pressure. These aren't edge cases requiring deep expertise to identify.
Instead, OpenAI appears to have decided: "Legal says we need to contact someone, parents seem obvious, let's ship it."
This isn't an oversight; it's a systematic failure to apply the same risk management rigor to human safety decisions that every corporate project gets for financial or reputational risks.
A Pattern of Optimization Over Consideration
This policy reflects a broader trend in tech companies: moving quickly from problem identification to solution implementation without sufficient consideration of complex real-world contexts.
The corporate optimization mindset that prioritizes speed and limiting legal liability can struggle with the messy realities of human situations. The result is policies that sound protective on paper but may harm the people they're designed to help.
What Better Looks Like
Effective teen crisis intervention could involve the following (a rough sketch of one possible flow appears after the list):
- Partnering with established crisis organizations rather than creating new protocols from scratch
- Connecting teens to trained crisis counselors who understand family dynamics
- Providing resources and support without automatically breaking confidentiality
- Recognizing that one-size-fits-all approaches don't work for complex human situations
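To make the contrast with the default-to-parents policy concrete, here is a minimal, purely illustrative sketch of what a confidentiality-first escalation order could look like. The type, function, and step names are hypothetical placeholders invented for this post, not a real API or any crisis organization's actual protocol, and the real decisions would sit with trained counselors rather than code.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CrisisContext:
    # Illustrative fields only; real triage would rely on a trained human's judgment.
    imminent_harm: bool
    family_is_safe: Optional[bool]  # None means unknown; never assumed to be True
    counselor_connected: bool


def escalate(ctx: CrisisContext) -> str:
    """Return the next step in a confidentiality-first escalation flow (sketch only)."""
    # Step 1: the default action is a warm handoff to an established crisis service,
    # not family or police contact.
    if not ctx.counselor_connected:
        return "connect_to_crisis_counselor"

    # Step 2: family involvement happens only when a counselor judges the family
    # to be a safe ally, never as an automatic default.
    if ctx.imminent_harm and ctx.family_is_safe is True:
        return "counselor_guided_family_contact"

    # Step 3: emergency services are a last resort for imminent harm when no
    # safer path exists, again mediated by the counselor.
    if ctx.imminent_harm:
        return "counselor_guided_emergency_services"

    # Otherwise: maintain confidentiality and keep offering support and resources.
    return "continue_support_and_resources"
```

The point is the ordering, not the details: a trained counselor is the first step, and family or authority contact is a last resort gated on an explicit judgment that it is safe, rather than an assumption that it is.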
The Unintended Consequence
Perhaps most concerning is what this policy will actually accomplish: teaching vulnerable teens that AI systems aren't safe spaces for honest conversation about mental health struggles.
The teens who most need support—those in dangerous family situations or facing persecution for their identity—may learn to avoid seeking help through these channels. Meanwhile, teens in stable, supportive families (who need less protection) will continue to receive support.
Moving Forward
OpenAI clearly wants to protect vulnerable teens, and that intention matters. But good intentions require thoughtful implementation that considers the full spectrum of teen experiences and family dynamics.
The challenge is building systems that can provide support while recognizing that safety looks different for different young people. Sometimes the safest thing is maintaining confidentiality and connecting teens with specialized resources rather than defaulting to family and authority involvement.
Tech companies entering the mental health space need to proceed with humility, recognizing that human crisis intervention is complex work that benefits from collaboration with existing expertise rather than reinvention from first principles.