Does AI Affect the Democratic Conduct of War? Analyzing US and Israeli Military AI Deployment

This study examines how the use of decision-support military Artificial Intelligence (AI) systems can affect the democratic conduct of warfare. AI can challenge the democratic conduct of war by introducing systemic risks such as reduced oversight, opacity, and automation bias. The inquiry focuses on two case studies: the US Project Maven and Israeli AI targeting systems. Available reporting and official statements portray the United States as emphasizing procedural oversight and thereby containing AI risks, while investigative reporting on Israel alleges a prioritization of speed, looser scrutiny, and a higher tolerance for error. The accounts reviewed suggest that democracies are not exempt from the risk of democratic erosion in war when adopting AI. Yet the divergence in practices indicates that AI does not drive erosion on its own; rather, it amplifies contextual pressures such as conflict intensity, institutional culture, and threat perception. The conclusion underlines the need for regulatory and procedural frameworks that better align AI use with legal and ethical norms. This research deepens understanding of the risks posed by AI in military operations through a novel, case-based analysis, aiming to inform policy discussions on the responsible deployment of AI systems in contemporary and future conflicts.

Policy implications

  • Establish incremental international norms to enforce human oversight of military AI
    • International institutions should adopt an incremental approach, starting with non-binding guidelines on human oversight and accountability that evolve into binding norms.
    • Regulation should avoid blanket bans, focusing instead on rigorous and enforceable standards governing how AI is used in military operations.
    • Regulation should guarantee human oversight at the model development stage and within operational decision-making to prevent the erosion of democratic conduct.
  • Create accountability mechanisms for AI-generated decisions
    • Building on the previous point, this implication extends oversight beyond technical domains to the broader societal sphere: regulation needs to be complemented by advocacy from civil society and by mechanisms run by independent agencies.
    • Civil society, NGOs, and independent agencies should press for greater transparency and independent scrutiny of military AI systems, ensuring that states cannot obscure operational practices.
    • Translating procedural safeguards into practical accountability keeps algorithmic decision-making subject to public scrutiny and prevents the normalization of opaque military AI practices.
  • Strengthen public–private cooperation to develop technical standards and independent verification
    • Safeguards should also be reinforced through public–private cooperation to develop interoperable technical standards that ensure transparency and accountability across the AI development and deployment lifecycle.
    • Independent expert committees composed of representatives from academia, civil society, and the private sector should conduct regular audits and assessments of meaningful human control, bias, and the evolution of autonomy in AI systems.
    • Cooperative verification ensures that the democratic conduct of warfare is not eroded under operational or political pressure.

 
