Claude developer pushes back on accusations of being a supply chain risk
Dario Amodei, Anthropic
A federal judge has sided with artificial intelligence company Anthropic in its legal dispute with the Trump administration. Judge Rita F. Lin issued an injunction barring the government from designating Anthropic a “supply chain risk” and from banning federal agencies from using the Claude AI models. The decision follows the announcement by President Donald Trump and Defence Secretary Pete Hegseth in February that they would cut ties with Anthropic because the company refused to allow unlimited military use of Claude.
Anthropic voiced concerns about potential applications such as lethal autonomous weapons without human oversight and mass surveillance of US citizens. In response, the administration branded Anthropic a “national security risk in the supply chain”.
Judge Lin, however, characterised the government’s actions as an attempt to “hobble Anthropic” and “freeze public debate”, suggesting that it amounted to “classic First Amendment retaliation”.
She found the measures taken against Anthropic to be arbitrary and capricious, particularly Hegseth’s reliance on a rarely used military authority that is generally reserved for foreign adversaries. In her ruling, Lin stressed that branding an American company a potential adversary simply because it disagrees with the government is an “Orwellian idea” unsupported by law.
Anthropic has filed two lawsuits against the government: one seeking reconsideration of the “supply chain risk” designation, and another for violations of the First Amendment. The preliminary injunction ensures that Anthropic’s technology remains accessible to government agencies and external companies working with the Department of War, pending the outcome of the case.
Business AM


