'We Cannot In Good Conscience Accede To Their Request': Anthropic CEO Dario Amodei Refuses Department Of War's Demands To Remove AI Safeguards

Anthropic CEO Dario Amodei refused to remove safeguards blocking mass domestic surveillance and fully autonomous weapons from its Claude AI system, despite pressure from the US Department of War. Officials reportedly threatened to cut ties and invoke the Defense Production Act. Amodei said the company “cannot in good conscience” comply.

Tasneem Kanchwala Updated: Friday, February 27, 2026, 10:08 AM IST

In a remarkable public standoff between a leading artificial intelligence company and the United States military establishment, Anthropic CEO Dario Amodei has flatly refused to remove safety restrictions on the company's Claude AI systems, even in the face of serious legal and contractual threats from the Department of War.

In a public statement, Amodei made Anthropic's position unambiguous: "These threats do not change our position: we cannot in good conscience accede to their request."

The confrontation centers on two specific uses the Department of War has sought to unlock in Claude: mass domestic surveillance and fully autonomous weapons systems. Anthropic has long excluded both from its government contracts; now the Pentagon is demanding that those exclusions be removed.

Anthropic says 'we have gone farther than most'

The standoff is striking in part because of how far Anthropic has already gone to cooperate with the national security establishment.

Amodei noted in his statement that Anthropic was the first frontier AI company to deploy its models in the US government's classified networks, the first to work with the National Laboratories, and the first to develop custom models for national security customers. Claude is already deployed across the Department of War for intelligence analysis, operational planning, cyber operations, and more.

Anthropic has also taken hits to its bottom line in service of US national security goals - forgoing hundreds of millions of dollars in revenue by cutting off access to companies linked to the Chinese Communist Party, some of which had been designated Chinese Military Companies by the Department of War itself.

"Anthropic understands that the Department of War, not private companies, makes military decisions," Amodei wrote. "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."

Two lines Anthropic says it will not cross

Despite that extensive cooperation, Amodei drew two firm lines.

- On mass domestic surveillance: Amodei acknowledged that some such surveillance may currently be legal, but argued the law has not kept pace with AI's rapidly growing capabilities. Powerful AI, he warned, can now stitch together scattered public records - location data, web browsing history, personal associations - into a comprehensive, automated portrait of any individual's life. "Using these systems for mass domestic surveillance is incompatible with democratic values," he wrote.

- On fully autonomous weapons: Amodei stopped short of ruling out such systems forever, acknowledging they "may prove critical for our national defense." But today's AI, he argued, is simply not reliable enough for weapons that entirely remove humans from decisions about selecting and engaging targets. Anthropic offered to work with the Department of War on research to improve reliability. The offer was declined.

Department of War's threats

The Department of War's response has been aggressive. According to Amodei, officials have threatened to remove Anthropic from their systems entirely if the safeguards remain in place. More dramatically, they have threatened to designate Anthropic a "supply chain risk," a label previously reserved for foreign adversaries, and to invoke the Defense Production Act to compel removal of the safeguards.

Amodei noted the inherent contradiction in those two threats: one frames Anthropic as a security threat to the United States; the other treats Claude as so essential to national security that it must be commandeered under wartime powers.

"It is the Department's prerogative to select contractors most aligned with their vision," he wrote. "But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider."

Published on: Friday, February 27, 2026, 10:08 AM IST
