Deciding on Appropriate Use of Force: Human‐machine Interaction in Weapons Systems and Emerging Norms


This article considers the role of norms in the debate on autonomous weapons systems (AWS). It argues that the academic and political discussion is largely dominated by considerations of how AWS relate to norms institutionalised in international law. While this debate has produced insights into legal and ethical norms and explored options for regulation or a ban, it neglects to investigate how complex human‑machine interactions in weapons systems can set standards of appropriate use of force that are politically and normatively relevant but emerge outside of formal, deliberative law‑setting. Such procedural norms are already taking shape in the practice of contemporary warfare, and the increasing technological complexity of AI‑driven weapons will add to their political‑normative relevance. I argue that public deliberation about, and political oversight and accountability of, the use of force are at risk of being subsumed and normalised by functional procedures and perceptions. This can have a profound impact on the future of remote warfare and security policy.