The conflict between the U.S. government and AI giant Anthropic has reached a boiling point. President Trump recently issued a directive via social media ordering all federal departments to stop using Anthropic's AI products entirely within a six-month transition period, and declared that the company is no longer welcome in the government contractor system.
Trump stated bluntly in his statement: "We don't need it, we don't want it, and we will not do business with it anymore."
Following this, U.S. Secretary of Defense Pete Hegseth imposed an even harsher penalty: officially designating Anthropic a "national security supply chain risk company." The decision took effect immediately, meaning that any contractors, suppliers, or partners doing business with the military are barred from any form of commercial activity with Anthropic.
Core of the Conflict:
The trigger for this "blockade" is a fundamental disagreement between the two sides over the military use of AI:
Department of Defense's Demand: The military wants to apply Anthropic's models to large-scale domestic surveillance and "fully autonomous weapon systems."
Anthropic's Stance: The company's CEO, Dario Amodei, reiterated that Anthropic refuses to provide technical support for large-scale surveillance or fully autonomous lethal weapons, calling this a safety red line for the company.
Although Amodei said he is willing to keep serving the Department of Defense provided safety guarantees remain in place, and promised to cooperate with a smooth transition if the company is "taken offline," the U.S. government's current hard line suggests that this struggle over AI ethics and state power has reached an impasse with no room for compromise.
