D.C. Appeals Court Lets Pentagon Blacklist of Anthropic Stand For Now
The U.S. Court of Appeals for the D.C. Circuit refused on Wednesday to block the Pentagon’s decision to blacklist San Francisco–based AI lab Anthropic as a national‑security supply‑chain risk while litigation proceeds, even as a separate federal court in San Francisco has already ordered the Trump administration to remove similar labels.

Anthropic sued in both courts last month, alleging the administration is unlawfully retaliating because the company has tried to restrict how its Claude chatbot can be used in fully autonomous weapons and domestic surveillance. The White House, for its part, has attacked Anthropic as a liberal company trying to dictate military policy. U.S. District Judge Rita Lin in San Francisco previously held that the administration overstepped by branding Anthropic a supply‑chain risk unqualified to work with defense contractors and directed the government to lift those stigmatizing designations, which, according to new filings, it has begun to do.

The D.C. Circuit acknowledged Anthropic would likely suffer some irreparable harm but said the financial impact was not yet clear enough to justify its own emergency order, and set a fuller hearing for May 19. Tech‑industry groups warn that the dueling rulings and shifting labels are creating serious uncertainty for U.S. AI firms vying with OpenAI and Google for military and government work, and are raising concern that "national security" designations can be turned into a political cudgel against companies that resist certain Pentagon uses of their systems.
📌 Key Facts
- The U.S. Court of Appeals for the D.C. Circuit denied Anthropic’s request to block Pentagon blacklisting while its case is heard.
- A separate ruling by U.S. District Judge Rita Lin in San Francisco forced the Trump administration to remove national‑security and supply‑chain‑risk labels from Anthropic.
- Anthropic alleges the administration is retaliating because it tried to limit use of its Claude chatbot in fully autonomous weapons and U.S. surveillance, while the White House portrays the firm as a liberal company trying to shape military policy.
- The D.C. Circuit conceded Anthropic may suffer irreparable harm but said the extent of the financial damage is unclear, and set a May 19 hearing to take more evidence.
- Tech trade‑group head Matt Schruers says the Pentagon’s actions and the D.C. Circuit’s ruling are creating major business uncertainty as U.S. AI firms compete globally.
📊 Relevant Data
Anthropic has refused to allow unrestricted military use of its AI systems, citing fears that such use may harm democracy.
In 2026, the U.S. leads China in AI chip production and market control, but China’s share of the global large‑language‑model market has surged from 3% to 13% in recent months.
U.S.-China Competition for Artificial Intelligence Markets — RAND Corporation
The Department of Defense requested $13.4 billion for AI and autonomy in FY2026, representing the largest single-year AI investment in its history.
How Federal Contractors Can Position for $13.4B Pentagon AI Strategy — CCS Global Tech
AI‑enabled autonomous weapons systems pose risks such as undermining moral accountability in war, exacerbating dangers to civilians, and eroding human agency in lethal decision‑making.
The Ethical Legitimacy of Autonomous Weapons Systems — Taylor & Francis Online