
The Pentagon is pressuring major artificial intelligence developers, namely Anthropic, OpenAI, Google, and xAI, to permit U.S. military use of their technology for “all lawful purposes.” According to an anonymous Trump administration official, one of the firms has agreed to the terms and two others have shown flexibility. Anthropic remains the most resistant of the four, prompting the Department of Defense to threaten to cancel a $200 million contract with the company.
Friction between the AI firm and government officials is not new. In January, the Wall Street Journal reported a significant disagreement over the permissible scope of Anthropic’s Claude models in defense contexts, and noted that the technology was used during the U.S. military operation to apprehend former Venezuelan President Nicolás Maduro. Anthropic did not immediately respond to TechCrunch’s request for comment on the contract dispute.
Addressing the nature of the dispute, an Anthropic spokesperson told Axios that the company has “not discussed the use of Claude for specific operations with the Department of War.” The spokesperson clarified that current discussions are “focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance.” The statement frames the standoff as a disagreement over the company’s policy red lines rather than over the Pentagon’s request for broader access to its models.