1. AI Goes from Labs to Battlefields
The wars in Ukraine, Gaza, and now Iran have shown the world that AI is actively shaping the battlefield: locating and identifying targets, piloting drones, and supporting tactical decision-making. (Hoover Institution)
Current military AI applications span three main areas: planning and logistics; cyber warfare, including sabotage, espionage, and information operations; and, most controversially, weapons targeting, which is already in active use in Ukraine and Gaza. (MIT Technology Review)
2. The US–Israel–Iran Conflict: AI’s Most High-Profile Deployment Yet
When the Trump administration bombed Iran, it did so with the assistance of generative AI. Throughout the planning of the attack, the military relied heavily on the Maven Smart System, a joint project combining surveillance-gathering AI from Palantir with Anthropic's Claude; the system suggested hundreds of targets and offered commanders "video-game-like abilities to oversee battles." (Commonweal Magazine)
In a striking contradiction, the US government sidelined Anthropic as one of its main AI suppliers over an ethical disagreement, then used Claude via CENTCOM within hours of that very decision. (Daily Sabah)
3. The Anthropic–OpenAI Split Over Military Use
The dispute between Anthropic and the US Department of Defense has become an open example of the ethical fault lines in AI development. Anthropic's Dario Amodei took a markedly different approach to military cooperation than OpenAI's Sam Altman, placing the long-running rivals on opposite sides of the debate. A boycott campaign called "QuitGPT" has since spread across the US, with some users switching from OpenAI's products to Claude in protest. (Daily Sabah)
4. The Ukraine–Russia AI Arms Race
Ukraine and Russia are locked in an AI-driven drone arms race. In that conflict, drones now account for roughly 70–80% of battlefield casualties, and AI-powered targeting systems allow them to identify and strike targets with minimal human intervention, even in heavily jammed environments. (War Room)
AI-based targeting can now be added to drones for as little as $25, dramatically lowering the cost barrier for autonomous lethal systems. (War Room)
5. Regulation Falling Behind the Pace of Development
A major concern is the widening gap between international dialogue on military AI, which emphasizes risks and constraints, and the accelerating efforts of militaries worldwide to integrate it. Traditional multilateral bodies such as the UN move slowly, while states are already deploying AI capabilities in active conflicts. (Council on Foreign Relations)
The Trump administration dismantled Biden-era AI ethics initiatives, including stripping the AI Safety Institute at NIST of any role in determining AI ethics and responsible use, while simultaneously pursuing aggressive deregulation of the AI industry. (Commonweal Magazine)
6. Israel’s “Lavender” AI System
The Israel Defense Forces developed an AI-assisted decision support system called Lavender, which helped identify roughly 37,000 potential human targets in Gaza, raising serious concerns about bias in AI training datasets being applied to life-or-death targeting decisions. (MIT Technology Review)
Key Takeaway for Your Coverage
The war context has fundamentally accelerated AI's militarization while simultaneously fracturing the ethical and regulatory consensus around it. For Data Business Central, this is a goldmine of content angles: the Anthropic–DoD tension alone touches agentic AI, AI governance, enterprise AI ethics, and the future of AI partnerships. Worth developing a content cluster around.