According to court documents, more than 30 employees from OpenAI and Google DeepMind jointly submitted a "friend of the court" brief on Monday, publicly supporting Anthropic's legal action against the United States Department of Defense. The Pentagon had earlier labeled the AI company a "supply chain risk," a designation typically reserved for hostile foreign entities, sparking widespread debate within the AI industry.

The documents show that the signing employees believe the government's move constitutes an abuse of power and harms the entire AI industry. The signatories include Jeff Dean, chief scientist at Google DeepMind. The brief argues that if the Pentagon is dissatisfied with the terms of its contract with Anthropic, it could simply terminate the partnership and choose another AI supplier, rather than punishing the company through a risk designation.


The incident originated when Anthropic refused to allow its technology to be used for mass surveillance or for autonomous weapon systems targeting U.S. citizens. The U.S. Department of Defense then classified the company as a supply chain risk, asserting that the government should have the right to use artificial intelligence for any "legal purpose" without restriction by private technology suppliers. In response, Anthropic filed two lawsuits against the Department of Defense and several federal agencies on the same day. Supporting documents from industry professionals were later entered into the court record; the dispute was first reported by Wired.

Notably, shortly after adding Anthropic to the risk list, the U.S. Department of Defense signed a new cooperation agreement with OpenAI, a move that sparked controversy inside OpenAI itself, with many employees openly voicing their opposition.

In the documents submitted to the court, supporters argued that allowing such measures against a leading U.S. AI company could weaken the country's industrial and research competitiveness in artificial intelligence, while also suppressing open discussion within the industry about AI risks and benefits. The documents further emphasized that, in the absence of a clear public legal framework regulating AI usage, developers setting "red lines" through contract terms or technical restrictions is an important safeguard against catastrophic misuse of the technology.

In recent days, several signatories of the brief have also joined an open letter calling on the Pentagon to revoke the designation and urging their own companies' leadership to support Anthropic's position on AI safety and usage boundaries. The incident is widely seen as a signal of escalating tensions among AI companies, technology ethics advocates, and government regulators.