Anthropic said Thursday that “virtually no progress” had been made in the company’s talks with the Pentagon over the terms of use for its AI models ahead of a Friday afternoon deadline.
The Department of Defense (DOD) delivered its final offer to the company Wednesday night, asking the firm to allow the department to use its AI model Claude for “all lawful purposes.”
It is unclear what changes the Pentagon has proposed as part of its latest offer to the company.
“The contract language we received overnight from the Department of War made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons,” an Anthropic spokesperson told The Hill in a statement Thursday.
“New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” they added. “Despite DOW’s recent public statements, these narrow safeguards have been the crux of our negotiations for months.”
The Pentagon has threatened to cancel Anthropic’s contract if it does not agree to the department’s terms by Friday afternoon. The AI firm was one of several companies that received a $200 million contract with the DOD last summer.
Anthropic’s usage policy bars its AI model from being used for mass surveillance or lethal autonomous weapons. These two issues have been the company’s red lines in its weeks-long negotiations with the Pentagon.
Amid this dispute, the DOD has also threatened to label Anthropic as a “supply chain risk” and to invoke the Defense Production Act (DPA) against the company. The DPA gives the president broad authority to control domestic industries in the name of national defense.
The Pentagon has raised the pressure in recent days, reaching out to defense contractors about their reliance on Anthropic’s Claude in what appears to be an initial step toward labeling the company a supply chain risk.
Anthropic CEO Dario Amodei accused the DOD of making “inherently contradictory threats” in its negotiations with the AI giant.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei said in a lengthy statement.
“It is the Department’s prerogative to select contractors most aligned with their vision,” he added. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.”
The Pentagon said earlier Thursday that it has “no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Sean Parnell, chief Pentagon spokesperson, wrote in a post on the social platform X. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.”
The Pentagon’s latest proposal, which came less than 48 hours before the proposed deadline, included some of Anthropic’s suggested language around narrow exceptions to the autonomous and surveillance provisions, a person familiar with the negotiations told The Hill on Thursday.
But substantively, the DOD offer reverts to permitting any lawful use, attaching a qualifier to the autonomy provision that lets the department act where it sees fit and subjecting the surveillance provision to all other applicable guidelines, the source briefed on the talks said.
The source argued that these additions are made to look like concessions but ultimately give the Pentagon the ability to sideline the restrictions whenever it deems necessary.
The source also accused the Pentagon of strategically leaking information to news outlets to create the appearance of good-faith negotiation.