17 March 2026
Geneva, Switzerland

When military AI runs on commercial cloud infrastructure, both civilian systems and human judgment come under new strain. Read Anne-Marie Buzatu’s new analysis on the legal risks at the intersection of AI, cloud, and conflict.
As military AI systems increasingly rely on commercial cloud infrastructure, the legal and human stakes are changing fast. This new article examines two linked developments from the March 2026 Gulf fighting: Iranian strikes on Amazon AWS facilities in the region, and U.S. operations in Iran in which AI-assisted systems reportedly helped identify and prioritise around 1,000 targets in just 24 hours. Together, these episodes highlight a double challenge for international humanitarian law. On one side, commercial data centres that support military AI may, in some circumstances, become plausible military objectives. On the other, AI-assisted targeting can accelerate and obscure decision-making, leaving humans formally in the loop while weakening the quality of judgment the law requires. As military and civilian functions become more deeply entangled in shared cloud systems, both civilian protection and lawful attack decision-making become harder to preserve.