Abstract
<jats:p>Imagine this: You take a swarm of very small drones, load them with an AI algorithm trained to recognise a certain type of military uniform, add 5 grams of high explosives, and send them out to hunt for enemies to kill.2 After deployment there is no human involvement; the drones themselves decide whom to target and attack. One of the drones, however, targets and kills a soldier who is surrendering. This would be a clear violation of International Humanitarian Law (IHL), as expressed in the Geneva Conventions.3 Or consider a situation in which an AI-powered lethal autonomous weapons system (LAWS) acts on a false positive and erroneously engages a similar enemy system in a “clash of the machines”. The enemy system responds and calls in reinforcements, and within seconds we might have an unintended war on our hands. Such weapons are not science fiction. The necessary technology is to a large degree already available, and the remaining challenges are engineering problems of miniaturisation and systems integration. The potential of LAWS to cause death and destruction, and to trigger a new arms race, can hardly be overestimated. While a total ban on certain types of LAWS may be possible, a far-reaching ban is difficult to foresee. The military advantage promised by such systems will probably be far too great and too tempting for the governments of the world; if anything, no one will want to be the only state without them. So they need to be regulated, and governments and armed forces need to be held accountable for the research, development, procurement, deployment and use of such weapons. “Accountability” is the cue for the SAIs of the world to enter the scene.</jats:p>