
Former Judges Back Anthropic in Legal Fight Over US Government Ban
Nearly 150 former judges are backing Anthropic in its legal fight with the US government, raising concerns about AI regulation and corporate independence.
Umar Mayowa | 21 Mar. 2026

Nearly 150 retired judges have stepped in to support Anthropic, raising concerns about how far government power can extend over private AI companies.
The group of former federal and state judges has filed a legal brief supporting the company in its ongoing dispute with the US government. The filing adds momentum to a case that could shape how artificial intelligence companies interact with federal agencies.
The judges, who served under both Republican and Democratic administrations, argue that the government’s decision to label Anthropic a “supply chain risk” could set a troubling precedent for private businesses.
What Triggered the Dispute
The conflict began after the US Department of Defense classified Anthropic as a supply chain risk following failed negotiations over the use of its AI systems in sensitive government operations.
At the center of the disagreement were limits set by Anthropic on how its technology could be used. The company declined to allow its models to support autonomous weapons or large-scale surveillance of US citizens.
In response, the government applied a designation typically reserved for companies linked to foreign threats, one that has rarely, if ever, been applied to a domestic firm.
Why the Label Matters
Being classified as a supply chain risk carries serious consequences. It requires companies working with the military to separate any use of Anthropic’s tools from their government-related systems, which can disrupt existing contracts and partnerships.
The impact goes beyond direct government work. Many private firms that collaborate with defense agencies may reconsider their relationships with Anthropic due to compliance concerns.
According to court filings, the company could lose hundreds of millions of dollars in revenue this year if the restrictions remain in place.
Judges Raise Concerns About Government Overreach
In their filing, the former judges argue that the government did not follow proper procedures when applying the designation. They also question the interpretation of the law used to justify the decision.
The brief emphasizes that Anthropic is not attempting to force the government into a contract. Instead, the company is seeking to prevent what it views as punitive action tied to its refusal to meet certain demands.
This argument reflects broader concerns within legal and policy circles about whether companies can maintain ethical boundaries when working with government agencies.
Support Extends Beyond the Legal Community
The judges join a wider group of supporters that includes technology companies, industry organizations, and former national security officials. Backing has also come from firms such as Microsoft and individuals connected to competing AI companies.
This level of support highlights how closely the industry is watching the case, particularly as governments increase their involvement in AI development and deployment.
Government Response and Ongoing Legal Battle
The White House has defended the decision, stating that it was based on concerns about how Anthropic might act if given continued access to government systems. Officials argue that the designation is necessary to protect national interests.
The administration has also ordered federal agencies to stop using the company’s AI tools, including its Claude model.
Anthropic has challenged the decision in court, with CEO Dario Amodei stating that legal action was unavoidable given the potential impact on the company’s future.
A hearing on the company’s request to block the designation is expected soon, marking the next stage in a case that could influence how similar disputes are handled going forward.
A Broader Question for the AI Industry
Beyond the immediate legal fight, the situation raises a larger issue for technology companies. As AI becomes more integrated into government systems, businesses may face pressure to align with official policies even when those policies conflict with internal guidelines.
Experts, including researchers from institutions like Santa Clara University’s Markkula Center for Applied Ethics, have questioned whether companies can maintain independence while still working with government clients.
The outcome of this case could help define the balance between corporate responsibility and government authority in the AI sector.