Elevated view of E. Barrett Prettyman United States Courthouse, as seen from the East Building of the National Gallery of Art.
Photo: Toohool | CC BY-SA 4.0 | Wikimedia Commons

D.C. Appeals Court Refuses to Block Pentagon Blacklist of Anthropic Despite Conflicting San Francisco Ruling

The D.C. Circuit declined Anthropic’s emergency request to block the Pentagon from blacklisting the company, refusing to shield it even though a San Francisco federal judge, Rita Lin, had already ordered the administration to remove the supply‑chain‑risk label and allow federal employees and contractors to use Anthropic’s Claude. The D.C. court acknowledged Anthropic would “likely suffer some degree of irreparable harm” but found the extent of financial harm “not fully clear,” and set an evidentiary hearing for May 19. The conflicting rulings drew criticism from industry voices such as CCIA CEO Matt Schruers, who warned the split is creating substantial business uncertainty for U.S. AI firms.

AI Regulation and National Security · Donald Trump · Federal Courts and Pentagon Procurement · Anthropic vs. Trump Administration · AI and National Security Policy

📌 Key Facts

  • The D.C. Circuit formally rejected Anthropic’s emergency request for an order blocking the Pentagon’s blacklist while the company’s appeal proceeds.
  • The D.C. Circuit acknowledged Anthropic will "likely suffer some degree of irreparable harm" but said the extent of financial harm is "not fully clear," and thus declined to grant the emergency relief.
  • The court scheduled a fuller evidentiary hearing on the case for May 19.
  • A separate ruling by U.S. District Judge Rita Lin in San Francisco ordered the Trump administration to remove the Pentagon’s supply‑chain‑risk label and related directives and said federal employees and contractors may continue using Anthropic tools such as Claude, creating a conflicting outcome between the two courts.
  • Industry groups have raised concerns about the broader impact: CCIA CEO Matt Schruers warned the Pentagon’s actions and the D.C. ruling are producing "substantial business uncertainty" for U.S. AI firms competing globally.

📊 Relevant Data

Autonomous weapons systems powered by AI introduce risks of discrimination against people based on protected characteristics and can lead to large-scale civilian deaths due to unreliable accuracy and minimal human supervision.

A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making — Human Rights Watch

The United States controls an estimated 74 percent of global high-end AI compute capacity, while China holds about 20 percent, highlighting U.S. dominance in AI infrastructure amid growing competition.

The State of AI Competition in Advanced Economies — Federal Reserve

In studies conducted from 2023 to 2025, 37% of women in the U.S. reported using generative AI in the past year, versus 50% of men, indicating a gender gap in AI adoption and engagement.

Women in AI: Numbers Behind the Gender Gap in Tech — SheAI

Only 17 percent of Americans think AI will have a positive impact on the U.S. over the next 20 years, reflecting widespread public skepticism toward AI advancements.

Americans Hate AI. Which Party Will Benefit? — Politico

Four in ten U.S. adults say AI designers take the experiences and views of White adults into account at least somewhat well, a higher share than say the same for other racial and ethnic groups, indicating perceived racial bias in AI development.

Key findings about how Americans view artificial intelligence — Pew Research Center

📊 Analysis & Commentary (1)

We could win the AI war and still lose all of our freedoms if we aren’t careful
Fox News April 09, 2026

"The piece is an opinion‑style warning that the recent U.S. national‑security‑driven AI mobilization, the same dynamic underpinning disputes like the Anthropic Pentagon blacklist, risks producing domestic surveillance and loss of freedoms unless strong guardrails, oversight, and legal limits accompany the technological buildup."

📰 Source Timeline (2)

Follow how coverage of this story developed over time

April 09, 2026
7:30 PM
Appeals court decides against Anthropic in latest round of its AI battle with the Trump administration
PBS News by Associated Press
New information:
  • Confirms the D.C. Circuit has now formally rejected Anthropic’s request for an emergency order shielding it from Pentagon blacklisting while the appeal proceeds.
  • Details that Judge Rita Lin’s San Francisco order has already forced the Trump administration to remove the supply‑chain‑risk label and related directives, and that government filings say federal employees and contractors may continue using Claude and other Anthropic tools.
  • Quotes the D.C. Circuit acknowledging Anthropic will "likely suffer some degree of irreparable harm" but finding the extent of financial harm "not fully clear" and therefore insufficient to justify its own emergency order.
  • Sets a specific date — May 19 — for the D.C. Circuit’s fuller evidentiary hearing on the case.
  • Includes a new on‑the‑record concern from Computer & Communications Industry Association CEO Matt Schruers that the Pentagon’s actions and the D.C. ruling are creating "substantial business uncertainty" for U.S. AI firms competing globally.