Pentagon Blacklists Anthropic Claude From Classified Military Systems as AI Targeting Role in Iran War Grows
The Pentagon has ordered Anthropic’s Claude removed from classified Defense Department systems within six months, after an internal memo showed it was being used in sensitive national-security areas including nuclear weapons, ballistic missile defense and cyber warfare. CBS reports that Claude is, so far, the only large-scale AI operating on DoD classified networks. Sources say Claude and similar AI tools are being used in current U.S. operations against Iran to sift imagery and sensor data, build and assess targeting packages, and help process roughly 1,000 potential targets a day. The move has prompted a tech-industry rally behind Anthropic and renewed debate over how to balance rapid AI adoption with oversight.
📌 Key Facts
- The Pentagon has ordered Anthropic’s AI technology (Claude) removed from military operations within six months after an internal memo showed it was being used in key national-security areas, including nuclear weapons, ballistic missile defense and cyber warfare.
- Anthropic’s Claude is, so far, the only large-scale AI system operational on the Defense Department’s classified systems.
- Sources say Anthropic’s Claude and other AI programs are likely being deployed in the current U.S. operation against Iran; Adm. Mark Montgomery estimates the military is processing roughly 1,000 potential targets a day and striking the majority, with turnaround times potentially under four hours.
- The military is using AI to sift battlefield video, imagery and documents to build targeting packages, assign weapons, and assess damage almost instantly; experts liken the role to Israel’s AI-assisted missile defense when hundreds of threats arrive simultaneously.
- Federal Acquisition Service commissioner Josh Gruenbaum said the broader goal is to get agencies comfortable using AI for research, policy development and procurement while maintaining an 'evenhanded' approach to American innovators.
📊 Relevant Data
- Black Americans comprise 17.6% of active-duty U.S. military personnel in 2023, compared with approximately 13.6% of the total U.S. population, indicating overrepresentation. (2023 Demographics Profile of the Military Community — Military OneSource / Department of Defense)
- Hispanic or Latino individuals comprise 19.5% of active-duty U.S. military personnel in 2023, compared with approximately 19% of the total U.S. population, showing slight overrepresentation. (2023 Demographics Profile of the Military Community — Military OneSource / Department of Defense)
- As of 2024, there are approximately 750,000 Iranian Americans in the U.S., making up 0.2% of the total population. (7 facts about Iranians in the U.S. — Pew Research Center)
- Bias in military AI systems can arise from skewed training data reflecting historical inequalities, leading to misidentification of targets based on racial or ethnic characteristics, with potential humanitarian consequences such as disproportionate civilian harm.
📊 Analysis & Commentary (3)
- An opinion piece reframes LLM "hallucinations" as predictable, fixable "shameless guesses" and criticizes the Pentagon’s punitive blacklisting approach (as in the Anthropic case), arguing policy should emphasize clear technical standards, procurement safeguards and rule-of-law oversight rather than ad hoc bans.
- A critical take argues that the Anthropic-Pentagon conflict illustrates a broader pattern: Silicon Valley defends narrow corporate interests and legal privileges instead of mobilizing its power and innovation to materially improve life for most Americans, so policy and incentive changes are needed to redirect tech toward public benefit.
- A skeptical, precautionary critique argues that recent moves to embed commercial AI (such as Anthropic’s Claude) into classified, real-world military operations expose the technology’s brittleness, governance gaps, and the risks of rushing opaque models into life-and-death roles.
📰 Source Timeline (2)
- Confirms the Pentagon has ordered Anthropic’s AI technology removed from military operations within six months and that an internal memo says it was being used in key national‑security areas including nuclear weapons, ballistic missile defense and cyber warfare.
- Reports that Anthropic’s Claude is, so far, the only large‑scale AI system operational on the Defense Department’s classified systems.
- Cites sources saying Anthropic’s Claude and other AI programs are likely being deployed as part of the current U.S. operation against Iran, with Adm. Mark Montgomery estimating the military is processing roughly 1,000 potential targets a day and striking the majority, with turnaround times potentially under four hours.
- Details how AI is being used to sift battlefield video, imagery and documents to build targeting packages, assign weapons, and assess damage almost instantly, with experts likening it to Israel’s AI‑assisted missile defense decisions when hundreds of threats arrive simultaneously.
- Quotes Federal Acquisition Service commissioner Josh Gruenbaum on the broader goal of getting agencies comfortable using AI for research, policy development and procurement while maintaining an ‘evenhanded’ approach to American innovators.