Department of War Using Palantir AI to Pick Targets in Iran

By Kyle Anzalone | The Libertarian Institute | March 11, 2026
The Pentagon is using an artificial intelligence system developed by Palantir to select targets in Iran. Some members of Congress are considering legislation to place restrictions and safeguards on the Department of War's use of AI.
Two congressional sources who spoke with NBC News confirmed the use of Palantir's AI for targeting and the potential bill. "AI tools aren't 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them," said Rep. Sara Jacobs.
“We have a responsibility to enforce strict guardrails on the military’s use of AI and guarantee a human is in the loop in every decision to use lethal force, because the cost of getting it wrong could be devastating for civilians and the service members carrying out these missions,” she added.
None of the members of Congress who spoke with NBC News wanted to prevent the Pentagon from using AI in the targeting process. The Department of War has claimed that humans remain the final decision makers on what the US military will target.
AI targeting has become increasingly common in warfare. The US has used AI to help Ukraine identify targets, and Israel relied extensively on AI systems to determine who and what to bomb in Gaza.
Central Command (CENTCOM) claims that US forces have hit more than 5,500 targets in Iran. The US has already bombed a number of civilian targets in Iran, including Shajarah Tayyebeh elementary school. That strike killed at least 175 people, mostly children and their parents.
Palantir’s targeting system, dubbed Maven, relies on Anthropic’s Claude. Anthropic has demanded that its AI not be used for targeting or mass surveillance, leading to retaliation from the White House.
Anthropic and Palantir did not comment on the story.
