Anthropic, a leading AI company, recently refused to sign a Pentagon contract that would allow the United States military “unrestricted access” to its technology for “all lawful purposes.” To sign, Anthropic CEO Dario Amodei required two clear exceptions: no mass surveillance of Americans and no fully autonomous weapons without human oversight.
The very next day, the U.S. and Israel launched a large-scale offensive against Iran.
This leaves many wondering: how different would a war with fully autonomous weapons look? How significant was Amodei's decision to call fully autonomous weapons and mass surveillance AI "red lines" his company would not cross? And what do these red lines mean for other nations?
The decision cost Anthropic immensely. U.S. President Donald Trump ordered all American agencies to stop using Claude, Anthropic's family of advanced large language models (LLMs) and conversational chatbots. Pete Hegseth, U.S. defence secretary, designated Anthropic as a "supply chain risk," which could affect the company's prospects for other contracts. And rival company OpenAI swiftly struck a deal with the Pentagon instead.
The risks of fully autonomous weapons
AI chatbots are typically not weapons on their own, but they can become part of weapons systems. They do not fire missiles or control drones, but they can be plugged into the larger military systems.
They can quickly summarize intelligence, generate target shortlists, rank high-priority threats and recommend strikes. A key risk is a pipeline that runs from sensor data to AI interpretation, target selection and weapon activation with minimal or no human control, or even awareness.
Fully autonomous weapons are military platforms that, once activated, independently conduct military operations without human intervention. They rely on sensors such as cameras, radars and AI algorithms to analyze the environment, search for, select and engage targets.
Advanced helicopters, for instance, already operate with no human intervention. With fully autonomous weapons, human control and oversight disappear and AI makes final attack and battlefield decisions.
This is concerning, given recent research in which advanced AI models opted to use nuclear weapons in simulated war games in 95 per cent of cases.
The risks of mass surveillance
Frontier AI models can promptly summarize huge data sets and auto-generate patterns to look for signals of suspicious people and activity through even weak associations. In his statement on Anthropic’s discussions with the Department of War, Amodei argued that “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”
They can analyze records, communications and metadata to scan across populations. They can produce briefings and automatically generated lists that flag who gets questioned, denied entry into a country or refused a job. These systems create risks to privacy because they can analyze data from multiple sources, such as social media accounts, and combine these with cameras and facial recognition to track people in real time.
AI models can also make mistakes. Even a small erroneous association can scale up dangerously if the system is run over millions of people.
AI models are also opaque: how they analyze data and reach their conclusions cannot be fully understood, which makes their outputs difficult to challenge.
‘All lawful purposes’
The label "all lawful purposes" sounds like a safety limit. Yet this language means that the government can use AI for any purpose it deems legal, with few limits in the contract.
This matters because legality is a moving target: laws can change, interpretations can shift, and legislation is often ill-equipped to keep pace in real time with fast-changing innovations.
This is why Anthropic, a company founded by former OpenAI employees with an explicit focus on AI safety and ethics, argued that AI-enabled mass surveillance was a novel risk and that "lawful purposes" could not provide stable guardrails.
Anthropic has famously developed an internal research effort to understand how Claude works, interprets queries and makes autonomous decisions. Given the opacity of LLMs and the speed with which their capabilities develop, such efforts matter.
Project Maven with higher stakes?
In some ways, this story is familiar. Technology companies have long been at the forefront of innovation, with great promises of progress but also risks of misuse and negative consequences. The closest historical comparison is Google’s Project Maven in 2018.
Google had a contract with the Pentagon for the company to help analyze drone surveillance footage. Four thousand Google employees protested the project, arguing that surveillance should not be part of the company’s mission. Google announced it would not renew Maven and later issued AI principles that included commitments around weapons and surveillance.
The situation became a landmark case of the power of employee activism and public pressure.
The Project Maven example, however, also reminds us that company ethics and AI safety are fluctuating matters. In early 2025, Google discreetly dropped its pledge not to use AI for weapons and surveillance in an attempt to gain new lucrative defence contracts.
Anthropic’s current situation is in some respects similar to Google’s Project Maven one: it shows a company and its leaders trying to place limits on military uses of AI. It illustrates tensions that emerge when espoused corporate values collide with governments and national security demands.
The Anthropic case is also distinct because generative AI in 2026 is much more powerful than it was just a few years ago. Project Maven was only about analyzing drone footage. Today’s models can be used for many tasks, so the spillover risk is larger.
LLMs like Claude can self-improve by learning from user corrections and refining actions through iterative feedback loops. What an unrestricted Claude and its client, the Pentagon, could have done is therefore worrisome.
Who sets the limits?
These events are neither about Anthropic being uniquely principled nor about the Pentagon being uniquely demanding. They are about a critical issue that will keep coming back as AI becomes more powerful: who sets the limits regarding AI use when national security is involved?
If "all lawful purposes" becomes the default, the guardrails will depend on politics and legal interpretation. For Canada and other nations, the safeguards matter. Ethics cannot be left to contract negotiations and corporate conscience.
These events illustrate the complexities of engaging in AI ethics in practice. AI ethics principles and declarations are important and abound. At the same time, in practice, AI ethics are set through contracts, procurement rules, various parties’ actual behaviour and oversight.
Canada's defence and public sectors are building AI capacity, and Canada works closely with U.S. defence and intelligence agencies. This means that procurement language and standards can travel. If "all lawful purposes" becomes the standard language in the U.S. national security market, this could put pressure on Canada and other nations to adopt similar terms.
The reassuring news is that Canada has governance tools in place that it can strengthen and extend. The Directive on Automated Decision-Making is designed to ensure that systems are transparent, accountable and fair. It requires impact assessments and public reporting.
The Algorithmic Impact Assessment is a mandatory risk-assessment tool tied to the directive.
But Canadians should be mindful of ongoing developments: they should check that procurement standards name prohibited uses, and call for audits and independent oversight so that safeguards do not depend only on the particular governments and companies in power.