AI company Anthropic has revised its core safety principles amid increasing competition in the sector.

Anthropic, the AI company behind the Claude chatbot, which was founded with a focus on safe technology, appears to be scaling back its safety commitments to stay competitive.

The company said Tuesday that it has changed its Responsible Scaling Policy, a set of self-imposed guidelines aimed at preventing the development of AI that could be dangerous and lead to situations like large-scale cyberattacks.

While the updated guidelines say Anthropic will still need “strong arguments that catastrophic risk is inherent” when developing AI, it now says it will only delay development “until we believe we have a significant edge,” meaning it will keep developing if it does not believe it has an edge over its competitors.

The company said it has taken this step because, in the U.S., AI's economic potential has come to outweigh concerns about its safety.

“Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly,” the company said in a blog post.

“The policy environment has shifted toward prioritizing AI competitiveness and economic development, while security-oriented discussions have not yet gained meaningful momentum at the federal level.”

The change in Anthropic’s security guidelines comes as the Pentagon threatened to withdraw its contract with the company unless its technology is allowed to be used for all legal military purposes — though Anthropic says the guideline change is unrelated.

AI companies have historically marketed themselves as putting safety first.

Anthropic was founded in 2021 by former OpenAI employees who were concerned that the company was prioritizing development over safety. CEO Dario Amodei has also expressed apprehension about AI's potential for harm, including mass humanitarian destruction, and said in a December interview with Fortune that safety would remain the “highest-level focus” for Anthropic.

Dario Amodei, CEO and co-founder of Anthropic, speaking during the 56th annual World Economic Forum (WEF) meeting in Davos, Switzerland on January 20, 2026. (Denis Balibous/Reuters)

The blog post states that the company’s security practices were always intended to be updated, and this new iteration improves the company’s “transparency and accountability” with new commitments to regularly publish reports and security goals.

But Heidi Khalaf, chief AI scientist at the independent research group AI Now Institute, says that despite Anthropic’s safety-first reputation, it has always fallen short when it comes to its efforts to prevent human harm.

Khalaf says that from its first safety policy onward, Anthropic has focused heavily on the possibility of catastrophic events rather than on the harms already caused by current AI technology, such as run-of-the-mill chatbot errors.

The Claude chatbot has in the past been misused in attempts to create fraudulent schemes and malware, and more recently was used to steal Mexican government data, according to cybersecurity researchers.

She says the company is now taking off the “veil of safety” it previously used to market itself because it has become clear it is no longer in its best interests.

“This is a strategic announcement to show that they are open for business,” Khalaf said.

WATCH | Canada launches AI watchdog to monitor safe development and use of technology:

Amid the rapid global advancements and deployment of artificial intelligence technologies, the federal government has invested millions to combine the brains of three existing institutions into one that can keep an eye on potential threats ahead.

The announcement comes at a time of intense competition among top AI companies like Anthropic, OpenAI and Google, which all have competing chatbots and are striking lucrative deals with businesses and government departments to integrate their technologies.

The administration of U.S. President Donald Trump has also signalled that it is all-in on AI development, and has threatened to withhold funding from states that enact laws it sees as hindering American dominance in the industry.

Teresa Scasa, Canada Research Chair in Information Law and Policy at the University of Ottawa, says the U.S. government's no-regulation attitude makes it hard for companies to prioritize safety, “because if they do, they'll be left in the dust.”

It also puts Canada in a difficult position, she says, because regulation here could set back domestic AI development compared to the U.S., or encourage Canadian companies to move south of the border, where there would be fewer restrictions on their technology.

“And I think the understanding is that we can’t afford that in Canada right now. So you can see how it’s having that kind of impact on AI regulation here,” Scasa said.

She says that since Canada's Artificial Intelligence and Data Act lapsed in 2025, the Canadian government, like the U.S., has not attempted to implement any comprehensive AI regulation.

Company says safety change unrelated to Pentagon controversy

The change in Anthropic's safety guidelines comes as the company faces pressure from the Pentagon.

Anthropic signed an agreement with the U.S. Department of Defense in July, valued at up to US$200 million, that allows the government to use its technology for military purposes, but only within the company's usage guidelines: Anthropic's own set of rules for how customers can and cannot use its products, including the Claude chatbot.

WATCH | How AI chatbots can impact feelings of companionship, voting intentions:

A growing number of people around the world are turning to AI chatbots for companionship and connection. Meanwhile, a new study indicates that these chatbots may also influence people's voting intentions. Study co-author Gordon Pennycook, an associate professor at Cornell University in the U.S., discusses his research and explains how AI chatbots can influence potential voters differently than traditional TV advertising.

Those guidelines prevent anyone, including the US government, from using Anthropic’s AI tools for many things, including designing or developing weapons.

But according to reports, U.S. Defense Secretary Pete Hegseth issued an ultimatum to CEO Amodei in a meeting on Tuesday, giving the company until Friday to allow the military to use its AI tools for all legal military purposes or risk losing its government contracts.

LISTEN | Front Burner (21:47): Will AI agents take over the workplace?

In its standoff with the government, Anthropic has said it will not allow its technology to be used in autonomous weapons systems, which let AI fire on targets on its own, or in mass surveillance systems.

But Pentagon officials have told media that the dispute does not involve autonomous weapons or the potential use of AI in mass surveillance, and emphasized that the government always follows the law.

Anthropic says the update to its responsible scaling policy and the Defense Department’s demands are unrelated. According to Anthropic, Hegseth’s problems stem from the company’s usage policy rather than its scaling policy.
