“Incompatible With Democratic Values”: Anthropic Says No To US Government Over Mass Surveillance, Military Help
Anthropic has publicly defied the US Department of War. The artificial intelligence firm, widely regarded as a leader in “AI safety,” has formally rejected government demands to remove safeguards that prevent its technology from being used for mass domestic surveillance and the development of fully autonomous weaponry.
In a candid statement released on February 26, 2026, Anthropic CEO Dario Amodei warned that acceding to the government’s latest requirements would be “incompatible with democratic values.” The defiance marks a rare point of friction in what has previously been a symbiotic relationship between the frontier AI company and the American national security establishment.
A Partnership Under Strain
Until now, Anthropic has been a cornerstone of the US military’s AI strategy. It was the first “frontier” AI firm to deploy models within classified networks and National Laboratories. Its flagship AI, Claude, is currently integrated into the Department of War for mission-critical tasks including intelligence analysis, operational planning, and cyber operations.
Mr Amodei’s statement was careful to reaffirm the company’s commitment to Western interests. He highlighted that Anthropic has previously forgone “several hundred million dollars in revenue” by cutting off firms linked to the Chinese Communist Party and has actively lobbied for export controls on high-end chips to maintain a “democratic advantage” in the global AI race.
“I believe deeply in the existential importance of using AI to defend the United States and other democracies,” Amodei stated. However, he drew a firm moral and technical line at two specific applications that the Department of War now insists must be permitted.
The Red Lines: Surveillance and Autonomy
The first point of contention involves mass domestic surveillance. Anthropic argues that while AI is vital for foreign counter-intelligence, its application against American citizens poses “serious, novel risks to fundamental liberties.”
The company pointed to a legal loophole where the government can purchase detailed records of citizens’ movements and web browsing from public sources without a warrant. While this data is currently scattered, Amodei warned that powerful AI could assemble these fragments into a “comprehensive picture of any person’s life—automatically and at massive scale.” He argued that current laws have simply not caught up with the “rapidly growing capabilities” of the technology.
The second “red line” concerns fully autonomous weapons—systems capable of selecting and engaging targets without a human “in the loop.” While acknowledging that such weapons may one day be necessary, Anthropic argues that current frontier models are “simply not reliable enough.”
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” the statement read. Amodei noted that the Department of War had rejected an offer from Anthropic to collaborate on R&D to improve the reliability and judgment of these systems, opting instead to demand the removal of all restrictions.
“Supply Chain Risks” and Legal Threats
The dispute has escalated into what appears to be a breakdown in diplomatic relations. According to Anthropic, the Department of War has threatened to designate the firm a “supply chain risk”—a label traditionally reserved for adversarial foreign entities like Huawei or ZTE.
Furthermore, the government has reportedly threatened to invoke the Defense Production Act to force Anthropic to hand over its technology without the built-in safeguards. Mr Amodei described these threats as “inherently contradictory,” noting that the government is simultaneously labelling the company a security risk while claiming its technology is so essential to national security that its provision must be legally compelled.
A Precedent for the Industry
The stand-off raises profound questions about the autonomy of private tech firms in the age of “AI-driven warfare.” If the Department of War follows through on its threat to “offboard” Anthropic, it would necessitate a massive transition to other providers, potentially disrupting ongoing military simulations and cyber-defence operations.
Anthropic has stated it will facilitate a “smooth transition” to other providers if necessary, but it refuses to blink. “Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei concluded.