It is no longer about massing hundreds of thousands of boots on the ground.
Contemporary warfare has become a hi-tech affair in which nations wielding the best artificial intelligence (AI) tools can subdue those still fighting under the old order.
For instance, a fresh disclosure shows that Claude, the AI model developed by Anthropic, was used by the US military in its operation on 3 January this year to kidnap then-president Nicolás Maduro of Venezuela, the Wall Street Journal (WSJ) revealed yesterday, a high-profile example of how the US Defence Department is using AI in its operations.
The US raid on Venezuela involved bombing across the capital, Caracas, and killed 83 people, according to Venezuela’s Defence Ministry.
Anthropic’s terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.
In interviews following Maduro’s capture, President Donald Trump revealed that US forces used a secret, advanced weapon he nicknamed ‘The Discombobulator’ to disable Venezuelan equipment during the operation.
Trump described it as a top-secret tool that made enemy equipment stop functioning. He told the New York Post: “They never got their rockets off. They had Russian and Chinese rockets, and they never got one off. We came in, they pressed buttons and nothing worked.”
The weapon reportedly disabled defence infrastructure, including radars and air defence systems, and may have used advanced electronic jamming, directed energy, or sound waves to incapacitate personnel, effectively leaving enemy troops sitting ducks.
The WSJ report indicates that, in addition to the ‘Discombobulator’, the US also used AI tools, specifically Anthropic’s Claude, to support the planning and execution of the operation.
This makes Anthropic the first AI developer whose model is known to have been used in a classified operation by the US Department of Defence.
It was unclear how the tool, which has capabilities ranging from processing PDFs to piloting autonomous drones, was deployed.
A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the AI tool was required to comply with its usage policies. The US Defence Department did not comment on the claims.
The WSJ cited anonymous sources who said Claude was used through Anthropic’s partnership with Palantir Technologies, a contractor with the US Defence Department and federal law enforcement agencies. Palantir declined to comment on the claims.
The US and other militaries increasingly deploy AI as part of their arsenals.
Israel’s military has used drones with autonomous capabilities in Gaza and has relied extensively on AI to populate its targeting bank there.
The US military has used AI targeting for strikes in Iraq and Syria in recent years.
Critics have warned against the use of AI in weapons technologies and the deployment of autonomous weapons systems, pointing to targeting mistakes made when computers decide who should and should not be killed.
AI companies have grappled with how their technologies should engage with the defence sector, with Anthropic’s CEO, Dario Amodei, calling for regulation to prevent harms from the deployment of AI.
Amodei has also expressed wariness over the use of AI in autonomous lethal operations and surveillance in the US.
This more cautious stance has apparently rankled the US Defence Department, with the Secretary of War, Pete Hegseth, saying in January that the department wouldn’t “employ AI models that won’t allow you to fight wars.”
The Pentagon announced in January that it would work with xAI, the AI company owned by Elon Musk, the world’s richest man and a leading hi-tech figure.
The Defence Department also uses custom versions of Google’s Gemini and OpenAI’s systems to support research.