After Anthropic hit a deadlock with the U.S. Department of Defense (DoD, also known as the "War Department" under the Trump administration) over non-negotiable security issues and was blocked, OpenAI moved quickly. CEO Sam Altman announced on Friday evening that OpenAI had reached an agreement with the military allowing its AI models to be used on the DoD's classified network.

This collaboration comes at a sensitive time. Trump had previously branded Anthropic "left-wing lunatics" for refusing to lift its restrictions on "mass domestic surveillance" and "fully autonomous weapon systems," after which the company was banned from the national supply chain.

Principles of cooperation emphasized by Altman:

  • Hold to core principles: Altman stated on the social platform X that OpenAI's agreement explicitly includes two key security principles: a prohibition on use for mass domestic surveillance, and a requirement that any use of force (including autonomous weapon systems) remain accountable to humans.

  • Embedded technical safeguards: OpenAI will build specialized "technical safeguards" to ensure the models operate as intended, and plans to send engineers to the Pentagon to assist with deployment and security oversight.

  • The right to refuse: At an internal meeting, Altman revealed that the government has allowed OpenAI to build its own "security stack." If a model refuses to perform a task, the government will not force the company to modify the model's logic.

Altman called on the Department of Defense to extend the same terms to all AI companies and expressed hope that rational agreements could ease the tense relationship between the government and tech companies. The signing has nonetheless sparked internal controversy: more than 60 OpenAI employees signed a joint letter this week supporting Anthropic's strict stance on military use.