OpenAI has announced changes to a controversial agreement with the US government following criticism over the potential military use of its artificial intelligence systems.
On Monday, OpenAI chief executive Sam Altman said the company would amend the deal to include stricter safeguards—most notably an explicit ban on using OpenAI’s technology to spy on Americans.
The agreement, which allows OpenAI’s systems to be used in classified military operations, became public on Friday and quickly sparked concern about how artificial intelligence is deployed in warfare and surveillance.
Safeguards Strengthened
Altman said the updated contract would make clear that OpenAI’s tools cannot be “intentionally used for domestic surveillance of U.S. persons and nationals.” He also confirmed that intelligence agencies such as the National Security Agency would require additional approval before gaining access to OpenAI’s systems.
Posting on X, Altman acknowledged that the company had mishandled the rollout.
“The issues are super complex and demand clear communication,” he wrote.
“We were trying to de-escalate a difficult situation, but it came across as opportunistic and sloppy.”
Fallout From Rival AI Dispute
The controversy follows a breakdown in talks between the US Department of Defense and OpenAI’s rival Anthropic. Anthropic reportedly refused to allow its AI model, Claude, to be used for mass surveillance or fully autonomous weapons—principles it considers non-negotiable.
OpenAI initially defended its Pentagon deal, saying it contained “more guardrails than any previous agreement for classified AI deployments.” However, public reaction was swift.
Data cited by US media showed uninstalls of the ChatGPT mobile app jumped by nearly 300% day-over-day at the weekend, against a typical daily swing of around 9%. At the same time, Anthropic's Claude climbed to the top of Apple's App Store rankings, where it remains.
Despite Anthropic's stance, reports later emerged suggesting Claude had been used in the US-Israel conflict with Iran, shortly after the model was blacklisted by the administration of Donald Trump. The Pentagon declined to comment on its relationship with Anthropic.
How AI Is Already Used in War
Artificial intelligence is already embedded in military operations, from managing logistics to analysing vast quantities of battlefield data.
The US, Ukraine and NATO all use software from Palantir, which provides tools for intelligence gathering, surveillance and military planning. The UK Ministry of Defence recently signed a £240m contract with the firm.
Palantir’s AI-powered defence platform, Maven, integrates data from satellites, sensors and intelligence reports. According to Palantir UK head Louis Mosley, commercial AI models such as Claude can then be used to support “faster, more efficient—and, where appropriate, more lethal—decisions.”
Human Oversight Still Key
Military officials stress that AI does not operate independently. Lieutenant Colonel Amanda Gustave, chief data officer for NATO’s Task Force Maven, said humans remain firmly in control.
“We always introduce a human in the loop,” she said, adding that AI systems would never be allowed to make final decisions on their own.
Unlike Anthropic, Palantir does not oppose autonomous weapons outright but argues that meaningful human oversight must be maintained.
Still, some experts remain uneasy. Professor Mariarosaria Taddeo of Oxford University warned that with Anthropic stepping back from Pentagon work, “the most safety-conscious actor” may no longer be part of critical discussions around military AI use.