OpenAI's Sam Altman Takes Jab At Anthropic, Says 'It’s Bad For Society'

OpenAI CEO Sam Altman criticised Anthropic during a conference, saying it is “bad for society” when companies defy government authority. The remarks came after Anthropic refused a Pentagon demand to allow unrestricted use of its Claude AI models, citing risks around autonomous weapons and mass surveillance of Americans.

FPJ Web DeskUpdated: Friday, March 06, 2026, 04:44 PM IST
Sam Altman | X

OpenAI CEO Sam Altman took aim at rival Anthropic during remarks at the Morgan Stanley Technology, Media & Telecom Conference, declaring it is “bad for society” when companies abandon their commitment to the democratic process - a pointed broadside aimed squarely at Anthropic’s public clash with the Pentagon.

'Government more powerful than private companies'

“The government is supposed to be more powerful than private companies,” Altman told the conference, adding that it undermines the democratic process when corporations resist government authority because “some people don’t like the person or people currently in charge.” The remark was widely interpreted as a critique of Anthropic CEO Dario Amodei’s decision to refuse the Department of Defense’s demands over the use of its Claude AI system.

Conflict between Anthropic and Pentagon reaches boiling point

The comments come as the conflict between Anthropic and the Pentagon reaches a boiling point. The Department of Defense gave Anthropic an ultimatum: allow its AI models to be used for all “lawful purposes” without limitation, or face severe consequences. Anthropic refused. Defense Secretary Pete Hegseth subsequently declared Anthropic a “Supply-Chain Risk to National Security” - a designation typically reserved for companies with ties to foreign adversaries. President Trump then ordered every federal agency to immediately cease all use of Anthropic products.

The core sticking point: Anthropic’s insistence that its Claude models not be used for fully autonomous weapons or domestic mass surveillance of Americans. The company argued that current frontier AI models are simply not reliable enough for such applications, and that bulk collection of Americans’ public information - including geolocation data and financial records purchased from data brokers - constitutes a fundamental rights violation. The Pentagon refused to codify those exclusions in the contract.

Hours after Anthropic was blacklisted, Altman announced that OpenAI had reached its own deal with the Department of Defense. The swift move drew immediate criticism, and even Altman acknowledged it “looked opportunistic and sloppy.” He insisted the intent was to de-escalate the situation, not exploit it. “It is complicated, we are busy with other things,” Altman said. “But last week, when things started to get into a fight, it became increasingly clear to us that there was a chance things were going to go very badly.”

Despite his public jabs, Altman offered a conciliatory note when speaking directly to CNBC. “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” he said. OpenAI’s own deal with the Pentagon was built around the phrase “all lawful purposes” - the same standard Anthropic rejected - though Altman maintained that OpenAI had secured adequate safeguards within that framework.

Amodei-Altman feud

The feud between the two AI leaders has deep roots. Amodei is a former OpenAI executive who left to co-found Anthropic, and the companies have become fierce commercial rivals. In a leaked internal memo, Amodei accused Altman of providing “dictator-style praise to Trump” and called OpenAI’s safety messaging “mendacious” and “safety theater.” Amodei has since publicly apologized for the tone of the memo, calling it a rushed reaction. “It does not reflect my careful or considered views,” he wrote.

The industry-wide implications are significant. More than 70 OpenAI employees signed an open letter titled “We Will Not Be Divided,” expressing solidarity with Anthropic. Google employees sent their own letter to management urging the company to adopt the same red lines. Independent experts have described the standoff as unprecedented in the world of Pentagon contracting.

The financial stakes are also considerable. OpenAI’s annual revenue run rate has recently crossed $25 billion, while Anthropic has surpassed $19 billion. The loss of the $200 million Pentagon contract is a setback for Anthropic, but some observers note the company may have won something more durable - a brand identity as the AI lab that told Washington no.