PENTAGON, ANTHROPIC CLASH OVER USE OF AI IN US MILITARY OPERATION


By Micah Jonah, January 30, 2026

The United States Department of Defense and artificial intelligence firm Anthropic are locked in a growing dispute over how advanced AI tools should be deployed for military and national security purposes, according to sources familiar with the matter.

The disagreement centres on safeguards Anthropic wants in place to prevent its technology from being used for autonomous weapons targeting and domestic surveillance within the United States. Pentagon officials, however, insist that the military should be free to deploy commercial AI systems as long as their use complies with U.S. law, regardless of company-specific usage restrictions.

Sources said discussions between the two sides have reached a standstill despite months of negotiations under a contract reportedly valued at up to $200 million. The impasse is seen as an early test of whether Silicon Valley firms can meaningfully influence how the U.S. military adopts and applies rapidly advancing AI technologies.

Anthropic, a San Francisco-based startup and one of several major AI developers awarded Pentagon contracts last year, has expressed concern that its systems could be used to assist weapons targeting without sufficient human oversight, or to monitor American citizens. The company has maintained that its AI models are designed to minimise harm and require close collaboration with its engineers to modify them for sensitive deployments.

Pentagon officials have reportedly pushed back, arguing that military and intelligence agencies must retain flexibility in deploying AI technologies to meet national security needs. A spokesperson for the Department of Defense, recently renamed the Department of War by the Trump administration, did not immediately respond to requests for comment.

The standoff comes at a delicate time for Anthropic, which is preparing for a potential public listing and has invested heavily in expanding its footprint within the U.S. national security sector. The company has also sought a role in shaping government policy on artificial intelligence.

Anthropic’s Chief Executive Officer, Dario Amodei, recently warned that while AI should support national defence, it must not be used in ways that undermine democratic values or mirror the practices of authoritarian states. His comments reflect broader concerns among some technology leaders about the ethical implications of government use of powerful AI tools.

As the U.S. military accelerates its adoption of artificial intelligence, the outcome of the dispute is expected to shape future relationships between defence agencies and private technology firms, particularly on the limits and governance of AI in warfare and surveillance.
