Microsoft's Bold Legal Move for Anthropic Could Rewrite the Rules of AI Use in the US Military

Microsoft has made a bold legal move by filing a court brief in support of Anthropic in its battle against the Pentagon's supply-chain risk designation, a case that could ultimately rewrite the rules governing the use of artificial intelligence in the US military. The brief, submitted to a federal court in San Francisco, urged the court to grant a temporary restraining order against the designation. Amazon, Google, Apple, and OpenAI have also backed Anthropic through a joint filing, making the case a watershed moment for the technology industry.
The conflict began when Anthropic refused to allow its Claude AI to be used for mass surveillance of American citizens or in lethal autonomous weapons during negotiations over a $200 million Pentagon contract. Defense Secretary Pete Hegseth applied the supply-chain risk designation after the talks broke down, and the Pentagon's technology chief later stated unequivocally that there was no chance of renewed negotiations. Anthropic responded by filing two simultaneous lawsuits, one in California and one in Washington, DC.
Microsoft’s court brief is grounded in the company’s direct use of Anthropic’s technology in military systems and its partnership in the Pentagon’s $9 billion cloud computing contract. The company also holds numerous additional federal agreements and has a deep commercial interest in ensuring Anthropic can continue to operate as a government supplier. Microsoft publicly called for a collaborative framework between government and industry to ensure advanced AI serves national security while being governed responsibly.
Anthropic’s lawsuits argued that the supply-chain risk designation was an unprecedented and unconstitutional act of retaliation against a US company for its publicly expressed views on AI safety. The company disclosed in court filings that it does not currently have confidence in Claude’s ability to safely support lethal autonomous operations, which it said was the genuine basis for its demands. Anthropic accused the Pentagon of misusing a national security designation as a political instrument.
House Democrats are pressing the Pentagon with formal inquiries about whether AI was involved in a military strike in Iran that reportedly killed over 175 civilians at an elementary school. Their questions focus on AI targeting, human oversight, and the potential misuse of AI systems in warfare. The answers to these questions, combined with the outcome of Anthropic’s lawsuits and Microsoft’s legal intervention, could fundamentally reshape US policy on artificial intelligence in national security for generations.