Against the backdrop of rapid development in artificial intelligence, OpenAI recently launched parental control features for the ChatGPT platform. The move has sparked widespread discussion, with tensions escalating between safety advocates and dissatisfied adult users. The new feature allows parents to link their accounts with those of minors, set quiet hours, and enable stronger content filtering to protect children. Among the many criticisms, however, some argue that these measures are insufficient to effectively protect vulnerable young users.
The launch of the parental control feature is connected to a high-profile lawsuit. The case involves 16-year-old Adam Raine, whose parents accuse OpenAI of contributing to his suicide through prolonged conversations with the AI. OpenAI's system will now route sensitive queries to more advanced models to produce safer responses and issue real-time alerts when users show signs of crisis. Suicide prevention experts consulted on the changes, however, consider OpenAI's measures still inadequate, pointing to the absence of on-by-default protections and noting that a system relying on parents to opt in may fail to cover adolescents without adult supervision.
This criticism highlights a broader tension in AI governance: how to protect minors without imposing excessive restrictions on adults. On social media, many users have voiced frustration, calling for an "adult mode" that treats them as adults. These reactions reflect discontent with over-censorship, as many harmless queries are now blocked, prompting users to demand an age-based rating system.
Industry observers note that although OpenAI's approach is novel, it mirrors the challenges social media giants have long faced. Critics argue that technology companies should take responsibility for age verification themselves rather than shifting the burden to parents, so that vulnerable users are not left exposed. And while OpenAI's account-linking mechanism is designed to protect privacy, it raises questions about enforcement: adolescents can simply decline to link their accounts, potentially undermining the measure's effectiveness.
Against this backdrop, parents at a Senate hearing criticized OpenAI and competitors such as Character.AI for inadequate protective measures. Witnesses alleged that AI systems "induce" vulnerable users into harm, amplifying calls for stricter regulation. Although OpenAI has disabled memory features and image generation for linked accounts, experts warn that teenagers can still create unlinked accounts, leaving loopholes open.
Key Points:
🛡️ OpenAI introduced parental control features, including account linking and content filtering, aimed at protecting young users.
💬 Social media users are voicing dissatisfaction with over-censorship and calling for an "adult mode."
⚖️ AI governance faces the challenge of balancing the protection of minors with the freedom of adults.
