Judge temporarily halts Pentagon blacklist of AI company Anthropic
A US judge on Thursday temporarily blocked the Pentagon’s blacklisting of Anthropic, the latest twist in the company’s high-profile battle with the military over AI safety on the battlefield.
Anthropic’s lawsuit in federal court in California alleges that US Secretary of War Pete Hegseth overstepped his authority by designating Anthropic as a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential intrusion or sabotage by adversaries.
Anthropic alleged that the government violated its free speech rights under the First Amendment by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, violating its Fifth Amendment right to due process.
US District Judge Rita Lynn, appointed by former US President Joe Biden, agreed with the company in a 43-page decision, but said the ruling would not take effect for seven days, to give the administration a chance to appeal.
Hegseth’s unprecedented move, which blocked Anthropic from some military contracts, followed the company’s opposition to allowing the military to use its AI chatbot Claude for US surveillance or autonomous weapons. Anthropic officials have said the designation could cost the company billions of dollars in lost business and reputational damage.
Anthropic says AI models are not reliable enough to be safely used in autonomous weapons, and it opposes domestic surveillance as a rights violation. The Pentagon argues that private companies should not be able to hinder military action, but also says it has no interest in those uses and will only deploy the technology through lawful means.
In Thursday’s decision, Lynn said the administration’s actions were aimed not at the government’s stated national security interests, but at punishing Anthropic.
Lynn wrote, “The record supports the inference that Anthropic is being punished for criticizing the government’s contracting position in the press.”
“Punishing Anthropic for publicly criticizing the government’s contracting position is classic unlawful First Amendment retaliation,” she added.
Anthropic spokesperson Danielle Cohen said the company is pleased with the decision.
“While this matter was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, trusted AI,” Cohen said in a statement.
Anthropic’s designation was the first time a US company was publicly designated as a supply-chain risk under an obscure government-procurement statute intended to protect military systems from foreign sabotage.
Anthropic’s March 9 lawsuit says the decision was unlawful, unsupported by the facts and inconsistent with the military’s previous praise for Claude.
The Justice Department countered that lifting the designation could create uncertainty at the Pentagon about how it can use Claude and risk disrupting military systems, according to the court filing.
The government said the designation was due to Anthropic’s refusal to accept the terms of the contract, and not because of its views on AI safety.
Anthropic has a second lawsuit pending in Washington over a separate Pentagon supply-chain risk designation that could exclude it from civilian government contracts.