Microsoft has flown the flag for responsible AI, filing a landmark brief in San Francisco federal court to support Anthropic’s battle against Pentagon overreach. The brief called for a temporary restraining order against the supply-chain risk designation, arguing that it threatens the technology networks supporting national defense. Amazon, Google, Apple, and OpenAI have also backed Anthropic through a joint filing of their own, amplifying the industry’s collective commitment to responsible AI.
The overreach in question began when the Pentagon labeled Anthropic a supply-chain risk after the company, during a $200 million contract negotiation, refused to allow its Claude AI to be used for mass surveillance of US citizens or to power autonomous lethal weapons. Defense Secretary Pete Hegseth formalized the designation after talks broke down, and the government began cancelling Anthropic’s contracts. Anthropic responded with two simultaneous lawsuits, in California and Washington DC, challenging the designation.
Microsoft’s stand for responsible AI is backed by substantial stakes: the company integrates Anthropic’s technology directly into federal military systems, partners in the Pentagon’s $9 billion cloud computing contract, and holds additional agreements with government agencies spanning defense, intelligence, and civilian services. Microsoft has publicly argued that responsible AI governance and robust national defense are complementary goals requiring government-industry collaboration.
Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of ideological punishment for the company’s publicly stated AI safety positions. The company disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations, which it said was the genuine basis for its contract demands. The Pentagon’s technology chief, for his part, has publicly foreclosed any possibility of renegotiation.
Congressional Democrats have separately written to the Pentagon demanding information about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school, adding legislative pressure to an already extraordinary confrontation. Together, Microsoft’s brief, the industry coalition, and congressional scrutiny are mounting a powerful challenge to Pentagon overreach and marking a landmark moment for responsible AI governance.