The U.S. Department of Justice pushed back forcefully against AI startup Anthropic's lawsuit in a recent court filing. The government insists that labeling the company a "supply chain risk" does not violate its First Amendment rights, and predicts that Anthropic's suit will ultimately fail.
Core Dispute: AI Restrictions and Military Needs
The dispute stems from Anthropic's attempt to limit the use of its Claude model in military applications. The government's response took a firm stance:
Legitimate Risk Designation: The government argues that classifying Anthropic as a risk is justified precisely because the company has tried to restrict how the military uses its AI model.
Loss of Security Trust: The Department of Justice stated bluntly that, given Anthropic's restrictive posture in military contract negotiations, the company cannot be trusted for use in combat systems.
The Trump administration had previously ordered Anthropic's removal from the list of approved government suppliers, a move that caused significant upheaval in the industry:
Risk of Billions in Losses: Anthropic executives have warned that the "supply chain risk" label has already led several partners to pause collaboration, with potential losses reaching into the billions of dollars.
Support from Peers: Several employees of OpenAI and Google, including Jeff Dean, chief scientist at Google DeepMind, have submitted legal filings backing Anthropic's challenge to the government's ban.
Anthropic has consistently positioned itself around "AI safety," refusing to allow its technology to be used for autonomous weapons or government surveillance. That stance, however, now risks excluding it from the lucrative military contract market. Meanwhile, technology from competitors such as Microsoft-backed OpenAI has been confirmed to be undergoing Pentagon testing, despite those companies having imposed similar restrictions in the past.
