Recently, The New York Times requested that OpenAI hand over 20 million private ChatGPT user conversation records. In response, OpenAI promptly asked the court to reject the request. In its statement, OpenAI argued that The New York Times' demand disregards basic safety and privacy principles and could force the company to hand over a large volume of highly private conversations that have nothing to do with The New York Times.

OpenAI said The New York Times' motive is to investigate how users bypass its paywall to read paid articles for free. According to OpenAI, such a request is both unreasonable and falls short of the ethical standards expected of the media industry. OpenAI emphasized that it will always do its utmost to protect user privacy and data security.
Notably, this is not the first time The New York Times has made such a demand of OpenAI. The newspaper previously asked OpenAI to revoke users' right to delete their conversation records and to hand over 1.4 billion user conversation records, all of which OpenAI refused.
To strengthen user privacy protection, OpenAI also announced that it is developing client-side encryption for ChatGPT. The feature would make users' conversation records invisible to anyone else, including OpenAI itself. In addition, OpenAI plans to build automated systems to detect potential security risks and to assemble a strictly vetted team to handle cases in which users face threats to life or cybersecurity.
Key points:
🌐 OpenAI has requested the court to reject The New York Times' request for user conversation records, emphasizing the importance of privacy protection.
🔒 OpenAI is developing a client-side encryption feature that would keep conversation records invisible to anyone, including OpenAI itself.
📰 The New York Times has made multiple such requests, all of which OpenAI has rejected.
