Exterior of the Birch Bayh Federal Building & U.S. Courthouse
Photo: Paul Sableman | CC BY 2.0 | Wikimedia Commons

Judge Rita Lin Issues Preliminary Injunction Blocking Pentagon’s ‘Supply‑Chain Risk’ Designation of Anthropic and Trump Order Barring Federal Use of Claude

U.S. District Judge Rita Lin granted a preliminary injunction pausing the Pentagon’s novel “supply‑chain risk” designation of Anthropic and temporarily blocking enforcement of President Trump’s directive that federal agencies cease using Anthropic’s Claude model. The order is stayed for seven days to allow an appeal; once in effect, it would restore the pre‑Feb. 26 status quo while the broader lawsuits proceed. Lin called the government’s actions “troubling” and likely arbitrary or punitive, questioning whether the Pentagon could simply stop using Claude rather than broadly blacklisting Anthropic, and raising First Amendment and procurement‑law concerns. The government defends the designation as rooted in national‑security fears of future sabotage and argues that public statements by its leaders have no legal effect.

AI and National Security · Pentagon and Defense Policy · Donald Trump · Anthropic vs. Pentagon AI Blacklist · AI Policy and National Security

📌 Key Facts

  • U.S. District Judge Rita F. Lin granted a preliminary injunction pausing the Pentagon’s “supply‑chain risk” designation of Anthropic and halting enforcement of President Trump’s directive that federal agencies cease using Anthropic’s Claude; the order is stayed for seven days to allow the government to appeal.
  • The injunction provides immediate relief from reputational and contractual harms — federal agencies had begun removing Claude and partners were reconsidering contracts — but it does not require the Department of Defense to use Anthropic or prevent lawful transitions to other AI providers.
  • In hearings and her written order Lin called the Pentagon’s actions “troubling,” questioned whether they were narrowly tailored to legitimate national‑security concerns, and said branding an American company as a potential adversary for expressing disagreement with the government looked like unlawful First Amendment retaliation.
  • Lin observed the Department could protect chain‑of‑command integrity by stopping its own use of Claude rather than broadly blacklisting Anthropic; she noted the “supply‑chain risk” label is normally used for foreign adversaries and terrorists and appears to be the first time it has been applied to a U.S. company.
  • The government defended the designation by saying Anthropic’s contractual limits and negotiating stance create a risk of future sabotage — including fears of a hidden “kill switch” — and argued public posts by President Trump and Defense Secretary Pete Hegseth are not legally binding; DOJ attorneys also conceded the label does not legally bar defense contractors from non‑military commercial use.
  • Anthropic argues the designation violates the First Amendment and procurement law, says the company is likely to succeed on the merits, denies it can remotely alter or shut off Claude once deployed, and has attracted broad institutional support (including amicus briefs from Microsoft, the ACLU and retired military leaders).
  • Reporting revealed related industry dynamics: OpenAI CEO Sam Altman told staff he tried to “save” Anthropic during the dispute and exchanged contract drafts with the Pentagon after an Under Secretary called him; OpenAI’s Pentagon contract includes carve‑outs for intelligence agencies that the Pentagon said it could not offer Anthropic because Claude was already embedded. The Pentagon has since criticized Judge Lin’s ruling as factually flawed and says it considers Anthropic designated pending appeal.

📊 Analysis & Commentary (3)

Is AI Conscious? It Depends What Consciousness Is
The Wall Street Journal by Stephen Hawley Martin March 25, 2026

"The WSJ commentary uses recent remarks by AI leaders (e.g., Anthropic’s Dario Amodei) to argue that whether AI is conscious hinges on unresolved definitions of consciousness, urging philosophical and scientific humility rather than premature conclusions."

How Natural Tradeoff And Failure Components?
Astralcodexten by Scott Alexander March 26, 2026

"A critical deep dive arguing that the Pentagon’s broad blacklist of Anthropic is an overbroad, market‑crippling response to AI risks and that policymakers should favor narrowly tailored, technical and contractual mitigations to manage failure modes without stifling innovation."

Anthropic and Hegseth Need a Truce
The Wall Street Journal by The Editorial Board March 27, 2026

"An opinion urging a negotiated truce between Anthropic and Pentagon officials—criticizing the government’s broad, potentially punitive actions (and welcoming the judge’s injunction) while calling for measured, transparent security fixes rather than sweeping bans."

📰 Source Timeline (11)

Follow how coverage of this story developed over time

March 27, 2026
10:13 PM
Judge temporarily blocks Pentagon from labeling Anthropic a "supply chain risk"
CBS News (The Takeout)
New information:
  • CBS segment confirms that a judge has temporarily blocked the Pentagon’s effort to designate Anthropic as a supply-chain risk, consistent with the previously reported preliminary injunction.
  • The piece is framed as a brief TV hit with legal analysis by CBS News legal contributor Jessica Levinson, but it does not add concrete new facts beyond the existence of the temporary block/injunction already captured in the existing story.
6:19 PM
Judge freezes Trump admin move against AI firm, fueling battle over security authority
Fox News
New information:
  • Fox article emphasizes Under Secretary of War Emil Michael’s public response, saying the ruling contains 'dozens of factual errors,' was issued 'during a time of conflict,' and 'seeks to upend the [president’s] role as Commander in Chief.'
  • Michael states the administration still considers Anthropic designated as a 'supply chain risk' pending appeal, signaling the Pentagon disputes how far the injunction actually binds it.
  • The piece details that the March 3 supply‑chain‑risk notice ordered that no contractor, supplier or partner doing business with the U.S. military may conduct commercial activity with Anthropic, underscoring the breadth of the designation Lin paused.
  • New color on the broader dispute: War Secretary Pete Hegseth told Anthropic it would face termination of a $200 million contract or supply‑chain‑risk designation if it did not allow Claude 'for all lawful uses,' while Anthropic refused to allow use for fully autonomous weapons or mass surveillance of Americans.
  • The article highlights partisan and public reaction, with some critics calling the decision 'pure judicial activism' and a bipartisan group of nearly 150 retired federal and state judges backing Anthropic’s challenge as a check on overbroad national‑security powers.
1:05 AM
Judge temporarily blocks Pentagon’s ban on Anthropic
MS NOW by Ebony Davis
New information:
  • The article explicitly states that the preliminary injunction bars 'federal agencies' from carrying out Trump’s directive, not just the Pentagon, and notes the order is paused for seven days.
  • It quotes Judge Rita Lin’s line that 'nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,' sharpening her statutory critique.
  • It reports that Trump and Defense Secretary Pete Hegseth blacklisted Anthropic in February specifically after the company refused to let the Pentagon use its Claude AI model for autonomous lethal warfare and mass surveillance of Americans.
  • Lin’s ruling characterizes the government’s actions as 'classic illegal First Amendment retaliation' for Anthropic bringing public scrutiny to the Pentagon’s contracting demands.
  • Anthropic’s spokesperson statement frames the order as confirming the company is 'likely to succeed on the merits' and reiterates its willingness to work with the government on 'safe, reliable AI.'
12:17 AM
Judge temporarily blocks Trump administration's Anthropic ban
NPR by John Ruwitch
New information:
  • NPR piece directly quotes Judge Rita Lin’s order stating that the 'supply chain risk' designation is usually reserved for foreign intelligence agencies and terrorists and appears aimed at punishing Anthropic rather than serving stated national security interests.
  • Lin wrote that if the concern were the integrity of the chain of command, the Department of War could simply stop using Claude itself instead of broadly blacklisting Anthropic and triggering a government‑wide ban.
  • The Pentagon argued in court that Anthropic’s attempt to limit military uses of Claude rendered the company untrustworthy and that the risk designation stemmed from those contractual limits, not from Anthropic’s public disagreement with Pentagon policy.
  • Lin characterized the designation as 'likely both contrary to law and arbitrary and capricious' and rejected what she called 'the Orwellian notion that an American company may be branded a potential adversary and saboteur' for exposing disagreements over use of its technology.
  • The article details the underlying dispute: CEO Dario Amodei refused to allow Claude to be used for autonomous weapons or surveillance of American citizens, while the Pentagon insisted only the military can decide how it uses tools it buys.
  • NPR notes that a wide range of organizations — including Microsoft, the ACLU, and retired military leaders — have filed amicus briefs backing Anthropic, underscoring breadth of institutional concern about the government’s actions.
12:14 AM
Judge blocks Pentagon from labeling Anthropic AI a "supply chain risk"
CBS News
New information:
  • The article confirms that Judge Rita Lin’s order not only pauses the Pentagon’s 'supply‑chain risk' designation but also halts enforcement of President Trump’s directive that every federal agency 'IMMEDIATELY CEASE all use of Anthropic's technology.'
  • It specifies that the government’s designation was being used to try to stop private federal contractors from using Anthropic’s Claude AI model, not just to limit the Pentagon’s own direct purchases.
  • The piece details the underlying policy dispute: Anthropic has pushed to bar the military from using Claude for domestic surveillance or to power fully autonomous weapons, while the Trump administration insists it needs AI for 'all lawful purposes.'
  • The order is explicitly stayed for seven days to give the government an opportunity to appeal, a procedural nuance not in the earlier summary.
  • The article quotes Anthropic saying the court agrees it is likely to succeed on the merits and reiterating that its focus remains on working with government to ensure 'safe, reliable AI.'
  • The judge’s language clarifies that the injunction does not require the Department of War to use Anthropic or stop transitioning to other AI providers, so long as those actions comply with existing law and regulations.
March 26, 2026
11:36 PM
Judge temporarily blocks Pentagon's ban on Anthropic
Axios by Maria Curi
New information:
  • Judge Rita Lin has now formally granted a preliminary injunction pausing the Pentagon’s 'supply chain risk' designation of Anthropic.
  • The injunction provides immediate relief from the designation’s reputational and contractual effects, as federal agencies had begun removing Claude and partners were reconsidering contracts.
  • Anthropic states the court found it 'likely to succeed on the merits' and reiterates its argument that DoD is violating the First Amendment and procurement law.
  • The Pentagon is arguing that public social-media posts by Defense Secretary Pete Hegseth and President Trump do not have legal standing and therefore do not constitute irreparable harm.
  • The Axios piece notes a parallel case in D.C. and recaps that the DoD designation extended beyond internal use, pressuring any Pentagon contractor to cut ties with Anthropic.
11:15 PM
Scoop: Altman told staff he tried to "save" Anthropic in Pentagon clash
Axios by Zachary Basu
New information:
  • Reveals internal OpenAI Slack messages from Feb. 24–March 2, 2026 in which CEO Sam Altman tells staff he is trying to 'save' Anthropic in its Pentagon dispute even as OpenAI negotiates its own contract.
  • Reports that Defense Under Secretary Emil Michael called Altman on Feb. 24 and that OpenAI and the Pentagon began exchanging draft contract language the next day.
  • Details that OpenAI’s Pentagon contract includes a carve‑out requiring a separate agreement before ChatGPT can be deployed in intelligence agencies like the NSA, and that Pentagon officials told Altman they could not offer Anthropic a similar carve‑out because Claude is already deeply embedded in those agencies.
  • States that Altman told staff he believed the Pentagon thought Anthropic CEO Dario Amodei was 'playing to the press,' and that he found it 'strange' to work to 'save' a rival he felt had tried to undermine OpenAI for years.
March 25, 2026
12:12 AM
Judge says government's Anthropic ban looks like punishment
NPR by John Ruwitch
New information:
  • At the March 24 preliminary‑injunction hearing, Judge Rita F. Lin said the government’s Anthropic ban 'looks like an attempt to cripple Anthropic' and that she was concerned the administration might be punishing the company for openly criticizing its position.
  • Lin stated the Pentagon has a right to choose what AI products it uses, but questioned whether it broke the law by banning all agencies from using Anthropic and by conditioning Pentagon business on cutting ties with the company.
  • Government lawyers argued in court that the action was not retaliatory and that Anthropic is a 'risk' because it could, in the future, update Claude in ways that endanger national security.
  • Anthropic’s counsel told the court this is apparently the first time a 'supply chain risk' designation has been used against a U.S. company, a label normally reserved for foreign adversaries.
  • Lin said she expects to rule within a few days on whether to temporarily pause the ban while the broader lawsuits proceed.
March 24, 2026
11:46 PM
Judge calls Pentagon's moves against AI firm Anthropic "troubling"
CBS News
New information:
  • Judge Rita Lin explicitly called the Pentagon’s actions against Anthropic 'troubling' and said they 'don't really seem to be tailored to the stated national security concern.'
  • Lin suggested the Defense Department could simply stop using Claude itself instead of broadly designating Anthropic a supply‑chain risk and moving to cut it out of military contracting.
  • Under questioning, DOJ attorney Eric Hamilton conceded that the supply‑chain‑risk label does not legally bar defense contractors from using Anthropic on non‑military work and said he knew of no law allowing DoD to cut off all commercial activity with Anthropic, undercutting Defense Secretary Pete Hegseth’s public threat.
  • Anthropic’s lawyer Michael Mongan argued that Hegseth’s widely viewed social‑media post has created 'profound uncertainty' for the company, even if the government now says it is not enforceable, and denied that Anthropic can alter or shut off Claude once it is deployed on government systems.
  • The government justified the 'supply chain risk' designation in court by claiming Anthropic’s negotiating stance created a 'risk of future sabotage,' including fears of a hidden 'kill switch,' while Lin questioned whether that amounted to punishing the firm for being 'stubborn' and 'ask[ing] annoying questions.'
9:11 PM
Judge questions Pentagon's "troubling" Anthropic actions
Axios by Maria Curi
New information:
  • At a March 24 hearing, U.S. District Judge Rita Lin called the Pentagon’s treatment of Anthropic 'troubling' and said, 'I don't know if it's murder, but it looks like an attempt to cripple Anthropic.'
  • Judge Lin criticized three Trump‑era actions — Trump’s ban on Anthropic, Defense Secretary Pete Hegseth’s requirement that Pentagon contractors cut commercial ties, and the supply‑chain‑risk designation — as not well tailored to the stated national‑security concern, noting the Pentagon could simply stop using Claude if the issue were chain‑of‑command integrity.
  • The Pentagon’s lawyer argued that Trump and Hegseth’s blacklist social‑media posts are not legally binding, an argument the judge said she found 'pretty surprising' because the statements are 'front and center' in the lawsuit.
  • Anthropic is asking the court for preliminary relief that would effectively restore the status quo as of Feb. 26 — before the public blacklist announcements — by pausing the designation, blocking enforcement, and rolling back actions already taken.
  • The Pentagon argues in filings that Anthropic is seeking an 'operational veto' over Defense Department decisions and says Anthropic has full control over Claude’s availability and performance in ways it views as dangerous in sensitive operations, a characterization the company disputes.