Prohibiting Autonomous Weapons: Put Human Dignity First

In addition to its successful mobilization in the stigmatization and norm-setting processes on anti-personnel landmines and cluster munitions, the principle of distinction enshrined in International Humanitarian Law also figures prominently in the debate on lethal autonomous weapons systems (LAWS). Proponents of a ban on LAWS frame these systems as indiscriminate, that is, unable to distinguish between civilians and combatants, and thus as inherently unlawful. The flip side of this legal argument, however, is that LAWS become acceptable once they are considered capable of distinguishing between combatants and civilians. We therefore argue, first, that this particular legal basis for the call for a ban on LAWS might be rendered obsolete by technological progress that improves weapons' discriminatory capabilities. Second, we argue that the argument is normatively troubling, as it implies that attacking combatants with LAWS is acceptable so long as civilians remain unharmed. Consequently, we find that the legal principle of distinction is not the strongest argument to mobilize when seeking to stigmatize and ban LAWS. A more fundamental, ethical argument within the debate on LAWS – and one less susceptible to 'technological fixes' – should be emphasized instead: life-and-death decisions on the battlefield should always and in principle be made by humans alone.