What a weekend! Anthropic is now poised to sue the Pentagon, after being labeled a “supply chain risk,” while OpenAI has its own agreement allowing the agency to use OpenAI’s models in “classified environments.”
There are still plenty of unanswered questions, including how big a business risk the “supply chain risk” designation poses for Anthropic (for more on that, see here). But what has now become clear is that OpenAI’s deal with the Pentagon appears to accept at least some terms that Anthropic rejected. That raises new questions about how the Pentagon will be able to deploy AI in military missions—and how OpenAI employees who supported Anthropic’s red lines will respond.
Over the weekend, OpenAI disclosed more information about its contract, including language governing how OpenAI’s models can be used in autonomous weapons systems and for surveillance. OpenAI said its red lines were no use of OpenAI technology for mass domestic surveillance, for directing autonomous weapons systems, or for high-stakes automated decisions like a “social credit” score. But, according to the contract, OpenAI’s models can be used for “all lawful purposes”—language Anthropic had rejected.
OpenAI’s “contract language raises more questions than answers, largely because the law is ambiguous, unclear and loophole-ridden,” said Amos Toh, senior counsel at New York University’s Brennan Center for Justice, who studies artificial intelligence and national security.