
U.S. appeals court allows Pentagon’s restrictions on Anthropic to remain during legal fight

Written by Kelvin Chan · Reviewed by Laura Siemer · Last Updated Apr 9, 2026

A U.S. federal appeals court has declined to temporarily block the Pentagon’s decision to restrict the artificial intelligence company Anthropic, allowing the designation to remain in effect while the broader legal dispute continues.

The ruling, issued by a Washington, D.C., appeals panel, denies Anthropic's request for emergency relief and hands the U.S. Department of Defense an early procedural win. The decision does not settle the underlying case, but it ensures that the Pentagon's classification of the company as a supply chain risk will stay in place for now.

Anthropic had argued that the restrictions would cause immediate and lasting harm to its business by cutting it off from defense contracts and damaging its credibility with government partners. The court acknowledged those concerns but said the company had not met the legal threshold required to justify halting the government’s action before a full review.

The dispute stems from a breakdown in negotiations between Anthropic and defense officials over the use of its AI systems in military contexts. According to government filings, the Pentagon moved to restrict the company after it declined to agree to certain contractual terms tied to operational use and oversight of its technology.

Anthropic has pushed back on that characterization, arguing in court filings that the move was retaliatory and linked to its stance on limiting how its models are deployed, particularly in areas such as surveillance and autonomous defense systems. The company has positioned itself publicly as cautious about high-risk military applications of artificial intelligence.

The Justice Department, representing the Pentagon, has maintained that the decision was based on procurement and national security considerations rather than the company’s public positions on AI safety. Officials have argued that ensuring reliability and compliance in defense supply chains is critical as the military increases its reliance on advanced AI systems.

The case has been further complicated by conflicting legal developments in other jurisdictions. A separate federal judge in California had previously questioned aspects of the Pentagon’s action and signaled that parts of the designation could be unlawful, creating a fragmented legal environment for the company.

That split has left Anthropic in a difficult position: some of its government interactions may continue, while its access to defense-related work remains restricted. The uncertainty has also raised concerns among industry observers about how far the U.S. government can go in limiting domestic AI firms under authorities typically reserved for foreign supply chain risks.

The outcome of the case could carry broader implications for the rapidly evolving relationship between AI companies and national security agencies. As demand for advanced AI capabilities grows within the defense sector, the dispute highlights tensions between commercial AI development, ethical constraints, and government requirements.

Further hearings in the case are expected in the coming weeks, with a final resolution likely to take months. Until then, Anthropic will remain excluded from Pentagon-related work as it continues to challenge the designation in court.
