OpenAI adds age prediction to ChatGPT: automatic protections for minors and verification for full access
OpenAI has announced a global rollout of a new feature in ChatGPT: age prediction, designed to identify accounts the system believes may belong to minors — and automatically enable additional safeguards in those cases.
The idea is simple: rely less on users self-reporting their age, and apply more “guardrails” when the system suspects a user is under 18.

What changes for users
When the system predicts a user is under 18, ChatGPT automatically applies stronger protections to reduce exposure to sensitive content and high-risk topics.
This fits a broader industry shift: AI platforms are increasingly moving toward “safety by default,” rather than relying on manual settings or self-declared age alone.

What if the system gets it wrong?
OpenAI says users who are incorrectly flagged as underage can restore full access through identity verification — a process that includes confirmation via selfie through a third-party verification provider.
In other words: if you’re placed into a more restricted mode by mistake, there’s a path to fix it.

Why this matters right now
This feature arrives as AI tools become everyday utilities for huge numbers of people, while public and regulatory pressure grows for clearer protections for younger users.
OpenAI also says the rollout will expand to the EU in the coming weeks.

Conclusion
Age prediction in ChatGPT is another sign that AI platforms are moving toward “default safety.” For users, that can mean fewer unwanted situations and clearer boundaries, with one obvious challenge: predicting age accurately without incorrectly flagging too many adults.
