Trump Administration Labels Anthropic AI a Supply Chain Risk

The Pentagon labels AI firm Anthropic a supply chain risk, impacting its chatbot Claude and sparking legal challenges.

Anthropic Faces Government Backlash Over AI Technology

In a move that has sparked widespread debate, the Trump administration has designated the artificial intelligence company Anthropic as a supply chain risk. This unprecedented action could compel other government contractors to cease using Anthropic’s AI product, Claude.

The Pentagon announced on Thursday that Anthropic and its products are now considered a supply chain risk, effective immediately. This decision follows accusations from President Donald Trump and Defense Secretary Pete Hegseth that the company poses a national security threat, leaving little room for negotiation.

Last week, as tensions with Iran escalated toward war, Trump and Hegseth unveiled plans to penalize Anthropic, whose CEO, Dario Amodei, had resisted demands to alter the company’s AI products amid concerns about their potential use in mass surveillance and autonomous weaponry.

Amodei responded that the company intends to challenge the designation in court, arguing the action lacks a sound legal basis. He emphasized that the exceptions Anthropic sought applied only to narrow, high-level use cases, not to operational decision-making.

The Pentagon, however, insists the core issue is about ensuring military technology can be used for all lawful purposes without vendor-imposed restrictions. The military has been given six months to phase out Claude, currently integrated into various defense platforms, to avoid depriving warfighters of vital tools during combat.

Some defense contractors have already started distancing themselves from Anthropic. Lockheed Martin announced its compliance with presidential directives, noting it does not rely on any single large language model provider.

Anthropic’s designation has drawn criticism for potentially misapplying a federal rule meant to address supply threats by foreign adversaries. Critics argue the rule’s use against a domestic company deviates from its intended purpose.

Pentagon’s Decision Sparks Criticism

U.S. Sen. Kirsten Gillibrand criticized the Pentagon’s decision, describing it as a “dangerous misuse” of a tool designed for adversary-controlled technology. Former officials, including ex-CIA director Michael Hayden, echoed concerns over the precedent set by this action, which they argue should only target foreign threats, not American companies.

Neil Chilson, of the Abundance Institute, labeled the move as “massive overreach,” potentially harming U.S. AI innovation and the military’s access to cutting-edge technology.

Anthropic’s Consumer Popularity Rises

Despite losing key defense partnerships, Anthropic has seen a surge in consumer downloads as users rally behind its ethical stance. The company reports more than a million daily sign-ups for Claude, surpassing competitors such as OpenAI’s ChatGPT in several countries.

The controversy has intensified Anthropic’s rivalry with OpenAI, a tension that began when former OpenAI leaders, including Amodei, founded Anthropic in 2021. Following the Pentagon’s action, OpenAI secured a deal to replace Anthropic in military environments, prompting regret from both sides over past actions.

This article was originally published by NPR (www.npr.org).
