To further enhance AI safety, OpenAI officially launched a safety feature called "Trusted Contact" on May 7 local time. The feature's core logic is to use AI monitoring technology to give users in psychological crisis an additional "digital lifeline."
According to the feature description, when OpenAI's automated monitoring systems and professionally trained review team detect that an adult user is showing clear self-harm tendencies in conversation, and determine that this behavior could pose a serious real-world safety risk, the system will proactively notify the user's pre-designated "trusted contact." The measure is meant to compensate for the limitations of relying solely on machine conversation in crisis handling by enabling timely outside intervention.
OpenAI's statement on the feature emphasized that "Trusted Contact" is not intended to replace professional mental health care or existing emergency crisis intervention services. Rather, it is meant as a supplementary safeguard, helping users in distress build a bridge of communication to the real world.
In practice, ChatGPT will still prioritize guiding users to seek help themselves, encouraging them to contact crisis intervention hotlines or local emergency services when facing intense pressure or extreme emotions.
Beyond its ongoing safety efforts, OpenAI has also been active recently in computing-power reserves and technical protocols. The company reportedly expects to invest about $5 billion in computing resources this year and has jointly released the MRC open network protocol with several industry giants. These moves suggest that, while improving the intelligence and safety of its models, OpenAI is willing to invest heavily in the efficiency of its underlying compute clusters to support larger-scale AI application demands.
