
AI company Anthropic sues Trump administration seeking to undo 'supply chain risk' designation

By MATT O'BRIEN  -  AP

Artificial intelligence company Anthropic is suing to stop the Trump administration from enforcing what it calls an “unlawful campaign of retaliation” over its refusal to allow unrestricted military use of its technology.

Anthropic asked federal courts on Monday to reverse the Pentagon’s decision last week to designate the artificial intelligence company a “supply chain risk.” The company also seeks to undo President Donald Trump's order directing federal employees to stop using its AI chatbot Claude.

The legal challenge intensifies an unusually public dispute over how AI can be used in warfare and mass surveillance — one that has also dragged in Anthropic's tech industry rivals, particularly ChatGPT maker OpenAI, which made its own deal to work with the Pentagon just hours after the government punished Anthropic for its stance.

Anthropic filed two separate lawsuits Monday, one in California federal court and another in the federal appeals court in Washington, D.C., each challenging different aspects of the government's actions against the San Francisco-based company.

“These actions are unprecedented and unlawful,” Anthropic's lawsuit says. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”

The Defense Department declined to comment Monday, citing a policy of not commenting on matters in litigation.

Anthropic said it sought to restrict its technology from being used for mass surveillance of Americans and fully autonomous weapons. Defense Secretary Pete Hegseth and other high-ranking officials publicly insisted the company must accept “all lawful" uses of Claude, threatened punishment if Anthropic did not comply and condemned the firm and its CEO Dario Amodei on social media.

Designating the company a supply chain risk cuts off Anthropic's defense work using an authority that was designed to prevent foreign adversaries from harming national security systems. It was the first time the federal government is known to have used the designation against a U.S. company. Hegseth said in a March 4 letter to Anthropic that it was “necessary to protect national security,” according to Anthropic's lawsuit.

Trump also said he would order federal agencies to stop using Claude, though he gave the Pentagon six months to phase out a product that’s deeply embedded in classified military systems, including those used in the Iran war.

Anthropic's lawsuit also names other federal agencies, including the departments of Treasury and State, after agency officials ordered employees to stop using Claude.

Anthropic makes several strong First Amendment and due process arguments in a case that has “escalated beyond comprehension,” said Michael Pastor, a professor at New York Law School who previously worked as a New York City general counsel helping to craft its technology contracts.

“I’ve never seen a case like this,” Pastor said. “It would never have struck our minds that, when we were having difficulty in a negotiation, we would threaten the company essentially with destruction.”

Even as it fights the Pentagon’s actions, Anthropic has sought to convince businesses and other government agencies that the Trump administration’s supply chain risk designation is a narrow one that only affects military contractors when they are using Claude in work for the Department of Defense.

Making that distinction clear is crucial for the privately held Anthropic because most of its projected $14 billion in revenue this year comes from businesses and government agencies that are using Claude for computer coding and other tasks. More than 500 customers are paying Anthropic at least $1 million annually for Claude, according to a recent investment announcement that valued the company at $380 billion.

Anthropic said in a statement Monday that “seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”

The lawsuit positions AI safety and “positive outcomes for humanity” as critical to Anthropic's mission since its founding in 2021 by Amodei and six other former OpenAI employees.

Its usage policy always prohibited “lethal autonomous warfare without human oversight and surveillance of Americans en masse,” the company said in its lawsuit. Anthropic said it has never tested Claude on those applications and doesn't have confidence that its products could “function reliably or safely if used to support lethal autonomous warfare.”

At the same time, it has enabled the military to use Claude in ways that civilians could not, including in military operations and in analyzing “lawfully collected foreign intelligence information.”

Until recently, Anthropic was the only one of its tech industry peers approved to supply its AI model to classified military systems. The dispute has led the Pentagon to look to shift Claude's work to Google's Gemini, OpenAI's ChatGPT and Elon Musk's Grok.

Anthropic's lawsuit alleges the Trump administration's actions are impugning its reputation, “jeopardizing hundreds of millions of dollars” in contracts with other businesses and attempting to “destroy the economic value created by one of the world’s fastest-growing private companies.”

Conversely, the fight has also boosted Anthropic's reputation among some customers and tech workers who sided with the company's refusal to bow to pressure from the Trump administration. Amodei's moral stance stood out all the more when his bitter rival, OpenAI CEO Sam Altman, sought to replace the Pentagon's Claude with ChatGPT in a move Altman later admitted was rushed and seemed opportunistic.

Consumer downloads of Claude surged, lifting its popularity for the first time over better-known ChatGPT and Gemini.

The controversy also continues to have repercussions in the competition to retain AI industry talent, such as the resignation of OpenAI's head of robotics, Caitlin Kalinowski.

“This wasn't an easy call,” Kalinowski wrote on social media over the weekend. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

...

----------
Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.