The Washington Post reports that Anthropic has rejected the Pentagon's terms for lethal use of its chatbot Claude. The dispute centers on the terms and conditions governing the Pentagon's use of Claude.ai for military and surveillance purposes.
Defense Secretary Pete Hegseth said the Pentagon must be able to use the technology for the full range of warfighting — a remit so broad it left too many open questions for Anthropic to be comfortable with.
On Thursday, Anthropic CEO Dario Amodei said in a lengthy statement that the company was holding firm to its red lines — and hoped the Pentagon would reconsider.
“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei wrote. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now,” he said, citing specifically autonomous weapons use and mass surveillance.
“We cannot in good conscience accede to their request,” Amodei wrote.
This comes on top of earlier reports that:
Anthropic’s artificial-intelligence model Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolas Maduro, the Wall Street Journal reported on Friday, citing people familiar with the matter.
Claude’s deployment came via Anthropic’s partnership with data firm Palantir Technologies, whose platforms are widely used by the Defense Department and federal law enforcement, the report added.
It’s not hard to imagine where this could end up: ultra-smart AI technology, possibly including robotic agents in future, allied with Palantir’s deeply embedded surveillance technology. Terminator springs immediately to mind.
Secretary Hegseth, true to form, is fuming and threatening, among other things, to cut Anthropic out of government contracts altogether.
I think Amodei’s stance here deserves applause: he’s up against an authoritarian and unprincipled administration that has already engaged in many questionable acts and demonstrated disregard for law and principle. As CEO of one of the leading AI firms, he’s taking a principled stand at real political and business risk, and that’s encouraging. Let’s hope his peers at other AI companies demonstrate similar commitment in future.