Trump Orders Federal Agencies to Phase Out Anthropic After 'Supply Chain Risk' Label Over AI Safety Dispute
President Trump ordered all federal agencies to stop using Anthropic’s Claude after the Pentagon designated the company a “supply chain risk” and terminated its classified‑systems contract. The move followed a standoff over Anthropic’s refusal to lift contractual guardrails banning domestic mass surveillance and fully autonomous weapons, a confrontation touched off in part by Pentagon use of Claude in the Maduro raid. The administration gave Anthropic a deadline to accept “all lawful uses” or face blacklisting and potential enforcement (including talk of invoking the Defense Production Act); it is accelerating deals with alternatives such as xAI’s Grok, advancing talks with OpenAI and Google, and requiring agencies to wind down Claude deployments (the Pentagon over six months), sparking market disruption and political and congressional scrutiny.
📌 Key Facts
- The dispute was triggered in part by Pentagon questions about Anthropic’s Claude being used in a U.S. operation to capture Nicolás Maduro, which helped prompt a broader DoD review of the company.
- Pentagon leadership gave Anthropic an ultimatum (a 5:01 p.m. ET Friday deadline) to remove contractual guardrails or face cancellation of its roughly $200 million classified contract, blacklisting, and formal designation as a 'supply chain risk.'
- Anthropic’s non‑negotiable guardrails bar Claude from being used for mass domestic surveillance of Americans and for making fully autonomous final targeting decisions in military operations; Anthropic CEO Dario Amodei said the company 'cannot in good conscience' accede to unrestricted use.
- The Pentagon insisted contractors must allow their models to be used 'for all lawful purposes' and argued legal responsibility for wrongful military actions rests with the military as the end user; DoD officials even discussed using the Defense Production Act to compel compliance.
- The Pentagon followed through: it terminated Anthropic’s classified‑systems contract, designated the company a supply‑chain risk, and the White House (Trump) ordered all federal agencies to stop using Anthropic technology — most immediately, with a six‑month phase‑out window for embedded DoD systems.
- The blacklist has concrete downstream impact: active Claude deployments at multiple federal entities (including HHS, OPM, DOE national labs and NASA JPL) and sensitive Palantir military deployments built on Claude must be re‑engineered or wound down, creating operational pain and program disruption.
- Other AI companies moved quickly: the Pentagon reached a deal in principle with Elon Musk’s xAI (Grok) to allow classified use under DoD terms, and reported talks with OpenAI and Google accelerated; OpenAI publicly affirmed the same 'red lines' (no U.S. domestic mass surveillance or fully autonomous lethal weapons) and proposed conditions (cloud‑only use, cleared monitoring researchers) for classified deployment.
- The clash has political and market fallout: Trump framed Anthropic’s safety limits as 'radical left, woke' interference and threatened 'major civil and criminal consequences' during the phase‑out while congressional leaders questioned the motives; Anthropic, valued around $380 billion after recent funding, is deeply embedded in the private sector (Claude used by eight of the 10 largest U.S. companies), and the fight has contributed to market volatility and sector disruption.
📊 Analysis & Commentary (3)
"The essay reframes LLMs as statistical next‑token predictors and critiques the Pentagon’s Anthropic ultimatum, arguing that misunderstanding model nature leads to policy mistakes, that guardrails are an engineering necessity for safety, and that nuanced governance (not blunt demands to remove protections) is required for responsible defense use."
"The piece argues that recent headlines about an 'A.I. backlash' (including the Pentagon’s pressure on Anthropic and related market and regulatory flurries) overstate the case, conflating targeted, tactical governance and media noise with a true retreat from large‑scale AI adoption and investment."
"An opinion/deep‑dive using the Trump administration’s designation and phase‑out of Anthropic as a lens to critique the politicized rush to punish a vendor, argue for nuanced legal/procurement fixes that balance national‑security access and AI safety guardrails, and warn of unintended market and security consequences."
📰 Source Timeline (16)
- Trump has now ordered all U.S. federal agencies to stop using Anthropic’s AI technology, with most required to halt immediately and the Pentagon given six months to phase out embedded systems.
- Defense Secretary Pete Hegseth publicly announced that Anthropic is being designated a 'supply chain risk,' a label usually applied to foreign adversaries, explicitly to restrict military vendors from working with the company.
- Trump threatened 'major civil and criminal consequences' if Anthropic is not 'helpful' during the phase‑out, and senior Trump and State Department officials coordinated a public social‑media pile‑on against the company.
- Senate Intelligence Committee chair Mark Warner publicly questioned whether these penalties are driven by national‑security analysis or by politics, signaling congressional alarm.
- The PBS piece details Anthropic’s account of last‑minute contract language that it says would have allowed the Pentagon to disregard promised safeguards on mass domestic surveillance and fully autonomous weapons.
- Axios reports the Pentagon has agreed in principle to OpenAI's conditions for using its technology in classified settings, although no contract has been signed.
- Sam Altman’s internal memo spells out OpenAI’s own 'red lines': no use of ChatGPT for mass surveillance or fully autonomous weapons; models confined to cloud environments rather than edge or weapon systems; and cleared researchers monitoring deployments and advising on risk.
- The piece highlights the political double standard: Pentagon and Trump officials publicly derided Anthropic’s similar lines as 'ideological' and 'woke' while now moving forward with OpenAI under comparable constraints.
- Confirms concrete downstream impact of the supply‑chain‑risk designation: active Claude deployments at HHS, OPM, DOE national labs, and NASA JPL must now be wound down.
- Adds that Claude participation in GSA’s OneGov agreement gave it broad, cross‑branch availability, magnifying the scale of the wind‑down problem for agencies.
- Introduces the specific example of an AI‑planned Mars rover drive as a high‑profile mission that had already begun integrating Claude before the blacklist.
- The CBS video restates that Trump has ordered a government‑wide cessation of Anthropic AI use but does not materially alter or extend the previously reported blacklist and supply‑chain‑risk designation.
- It reiterates that the triggering dispute was over Pentagon demands to lift guardrails on domestic surveillance and autonomous targeting.
- The Pentagon has now carried through on its ultimatum, designating Anthropic a 'supply chain risk' and terminating its classified‑systems contract.
- Trump extended the retaliation beyond DoD with a directive that all federal agencies immediately cease using Anthropic technology, subject to a six‑month wind‑down.
- Despite prior praise inside DoD for Claude’s classified utility, defense officials must now replace it—even though some concede this will be operationally painful.
- Palantir’s most sensitive military deployments, which are built on Claude, will have to be re‑engineered around competing models like those from OpenAI, Google or xAI.
- Trump’s public justification explicitly casts Anthropic’s safety guardrails as 'radical left, woke' interference with how the U.S. wages war, signaling a willingness to punish AI labs for embedding normative limits.
- Sam Altman, in a CNBC interview, explicitly says OpenAI shares Anthropic’s 'red lines' barring use of their models for domestic mass surveillance or fully autonomous weapons and says he personally does not think the Pentagon should be threatening Defense Production Act action.
- The article specifies that Anthropic’s Pentagon contract is worth up to $200 million and that DoD has given a hard deadline of 5:01 p.m. ET for Anthropic to drop its safeguards or risk losing the deal and being labeled a 'supply chain risk.'
- An internal OpenAI staff note from Altman, first reported by the Wall Street Journal and described here, says OpenAI is seeking a classified‑space deal with explicit exclusions on U.S. domestic surveillance and autonomous weapons without human approval.
- Pentagon undersecretary for research and engineering Emil Michael publicly accuses Anthropic CEO Dario Amodei of lying and having a 'God‑complex' in response to Amodei’s statement that Anthropic cannot 'in good conscience' comply and that domestic mass surveillance and fully autonomous weapons are beyond what today’s tech can safely do.
- Sam Altman told OpenAI staff in a Feb. 27 memo that OpenAI shares Anthropic’s red lines: no use of its models for mass surveillance or for fully autonomous lethal weapons, and humans must stay in the loop for high‑stakes automated decisions.
- Altman said OpenAI will still try to reach a Pentagon deal to deploy ChatGPT in classified environments so long as the contract explicitly excludes domestic mass surveillance and autonomous offensive weapons, effectively rejecting the Pentagon’s current 'all lawful purposes' standard.
- Sources say talks to move ChatGPT from unclassified to classified military systems have accelerated amid the Pentagon–Anthropic clash, and OpenAI has floated enforcement ideas like keeping models cloud‑only (not on edge/weapon platforms), continuous security monitoring, and cleared researchers to track classified use.
- Axios reports that Pentagon officials have so far framed private guardrails of this kind as too much corporate influence over critical government work, suggesting OpenAI may run into the same resistance that led the Pentagon to threaten blacklisting Anthropic as a 'supply chain risk'.
- Employees at OpenAI and Google have signed a solidarity letter urging their companies to resist Pentagon pressure and adopt similar limits, signaling a broader industry push that could complicate the Defense Department’s plan to replace Anthropic’s Claude, currently the only commercial LLM running on U.S. classified systems.
- Anthropic CEO Dario Amodei issued a new public statement about 24 hours before the Pentagon’s deadline, saying the company 'cannot in good conscience accede' to the demand for unrestricted use.
- Anthropic says the Pentagon’s latest 'compromise' contract language was paired with legalese that would allow safeguards on mass surveillance and fully autonomous weapons to be disregarded at will.
- Pentagon spokesman Sean Parnell publicly warned on social media that the department 'will not let ANY company dictate the terms' of operational decisions and set a precise 5:01 p.m. ET Friday deadline.
- Defense undersecretary Emil Michael attacked Amodei on X, accusing him of having a 'God-complex' and trying to control the U.S. military.
- An open letter from tech workers at rival firms OpenAI and Google voiced support for Amodei’s stance, warning the Pentagon is trying to 'divide each company with fear that the other will give in.'
- Retired Air Force Gen. Jack Shanahan, who led the earlier Project Maven AI effort, publicly criticized the Pentagon’s targeting of Anthropic, saying 'painting a bullseye on Anthropic' ultimately hurts everyone.
- Dario Amodei has formally rejected the Pentagon’s latest contract changes, saying Anthropic will not permit Claude to be used for domestic mass surveillance in the U.S. or fully autonomous weapons, and calling those uses 'entirely illegitimate' and 'bright red lines'.
- Amodei’s public statement stresses Anthropic accepts that the 'Department of War, not private companies, makes military decisions' but insists that certain AI uses 'undermine, rather than defend, democratic values'.
- NPR reports new details of a Tuesday meeting where Defense Secretary Pete Hegseth threatened to cancel Anthropic’s roughly $200 million contract, potentially blacklist the company from future Pentagon work, and has discussed forcing Anthropic to let the government use Claude against the company’s will.
- A senior Pentagon official tells NPR anonymously that contractors must allow their models to be used 'for all lawful purposes' and that legality is 'the Pentagon’s responsibility as the end user,' sharpening the clash over who decides guardrails.
- The article underscores that the relationship between Anthropic and the Pentagon has become 'increasingly acrimonious' as the deadline on the ultimatum approaches.
- Pentagon officials sent Anthropic their 'best and final' offer Wednesday night, just ahead of a government-imposed deadline.
- Defense Secretary Pete Hegseth has given Anthropic until Friday evening to sign a document granting 'all lawful use' and 'full access' to its AI model for military operations.
- A senior Pentagon official now says Anthropic will face not only loss of business but formal designation as a 'supply chain risk' if it refuses.
- Pentagon officials are considering invoking the Defense Production Act to force Anthropic to comply, according to CBS sources.
- Anthropic has pushed for contractual guardrails that would bar Claude from being used for mass domestic surveillance of Americans and from making fully autonomous final targeting decisions in military operations, citing hallucination risk and reliability concerns.
- Trump officials told CBS that mass surveillance of Americans would be illegal and say they only want a license for 'lawful activities,' disputing the need for Anthropic’s guardrails.
- Anthropic has reached a $380 billion valuation and raised $30 billion this month from major U.S. financial and tech investors, making it one of the highest‑valued AI firms.
- Claude is now reportedly used by eight of the 10 largest U.S. companies, underscoring how disruptive a Pentagon 'supply chain risk' blacklist could be across the contractor base.
- Anthropic has agreed to soften the central commitment of its safety framework, signaling a retreat from some unilateral guardrails under competitive and government pressure.
- Specific timeline: Defense Secretary Pete Hegseth has given Anthropic CEO Dario Amodei until Friday to loosen guardrails or face a blacklist that would bar Claude’s use in all Pentagon‑linked work.
- On Wall Street, at least five distinct Claude‑related product announcements this month have triggered sharp sector‑specific sell‑offs that traders have dubbed the 'SaaSpocalypse,' erasing hundreds of billions in market value across legal services, financial‑data, cybersecurity and legacy‑IT names.
- OpenAI’s next model, ChatGPT 5.3 ('Garlic'), is being rushed under a 'code red' directive from Sam Altman specifically in response to Claude’s perceived lead, while China’s DeepSeek V4 is viewed as the next major competitive shock that could reignite a $1 trillion tech sell‑off like last January.
- Confirms the Pentagon delivered an explicit ultimatum this week: allow unrestricted military use of Claude or face a ban from all government contracts.
- Details that Anthropic wants contractual guardrails barring Claude’s use for mass surveillance of Americans and for fully autonomous final targeting decisions in military operations.
- Reports that Pentagon officials are citing hypothetical urgent scenarios such as intercepting an incoming ICBM as reasons they oppose any vendor‑imposed restrictions.
- Attributes a key quote to Undersecretary of Defense for Research Emil Michael warning that company guardrails could block AI use in emergencies once operators are accustomed to relying on it.
- Clarifies that, in the Pentagon’s view, legal responsibility for any wrongful strikes lies with the military as the end user, not with the AI vendor.
- Links the current standoff to the U.S. operation to capture Nicolás Maduro, reporting that Pentagon use of Claude in that raid helped trigger Anthropic’s push for formal guardrails.
- Includes fresh on‑the‑record framing of Dario Amodei’s concerns about AI enabling domestic mass surveillance and erosion of democratic safeguards, via his recent essay.
- A Defense official confirms the Pentagon has signed an agreement with Elon Musk’s xAI to allow the Grok model to be used in classified systems.
- The Grok deal explicitly accepts DoD’s demand that models be available for 'all lawful purposes', unlike Anthropic’s refusal to drop bans on mass domestic surveillance and fully autonomous weapons.
- Defense Secretary Pete Hegseth will give Anthropic CEO Dario Amodei an ultimatum at a Tuesday Pentagon meeting, with the threat of branding Anthropic a 'supply chain risk' if it doesn’t lift safeguards.
- DoD is accelerating talks with Google (Gemini) and OpenAI (ChatGPT) to move them into classified environments as potential replacements, while insisting both must also accept the 'all lawful purposes' criteria.
- A Defense official disputes New York Times framing that Google is much closer to a deal than OpenAI, saying negotiations are 'ongoing with both' and the department believes both will sign agreements.
- Puts the Pentagon–Anthropic contract fight explicitly in the context of Trump’s December executive order blocking 'onerous' state AI regulations such as Colorado’s algorithmic-discrimination law.
- Quotes AI governance experts Miranda Bogen and Alondra Nelson arguing that AI safety debates are moving into a more public, democratic‑accountability phase, not just a technical one inside labs.
- Highlights a wave of recent high‑profile resignations at two major U.S. AI firms over perceived inadequate safeguards, and cites investor Matt Shumer’s viral 'Something Big is Happening' essay warning about unpredictable AI behavior and job risks.
- Includes Anthropic safety researcher Mrinank Sharma’s public resignation letter warning that commercial and national‑security pressures are pushing companies to set aside safety priorities.
- Fox confirms Axios’ earlier reporting that questions over Claude’s use in the Maduro raid were a direct catalyst for the review, citing senior administration officials.
- Adds on‑record confirmation from Pentagon spokesman Sean Parnell that 'The Department of War’s relationship with Anthropic is being reviewed.'
- Quotes an unnamed senior War Department official floating the idea of compelling all DoD vendors and contractors to certify they do not use Anthropic models.
- Provides more detailed Pentagon framing that the department wants commercial AI models available for 'all lawful purposes' and portrays Anthropic as an outlier compared with OpenAI, Google and xAI.
- Anthropic responds on the record that its discussions with the Pentagon have focused on hard limits around fully autonomous weapons and mass domestic surveillance, not specific operations.