Toward a Ban on ‘Killer Robots’

Matthew Bolton - 22nd November 2012

“[W]e live to-day in a time of accelerated inventiveness and innovation, when a decade modifies the material of inter-communication far more extensively than did any century before, in range, swiftness, and intensity alike.”

These words, which seem so relevant to our own era of unprecedented technological transformation, were actually penned in 1919, as the science fiction writer H.G. Wells reflected on the disturbing impact of new military technologies in World War I. He continued: “It is the most obvious wisdom to set ourselves to anticipate as far as we can, so as to mitigate and control, the inevitable collisions and repercussions of mankind [sic] that are coming upon us.”

Human Rights Watch released a much-anticipated report on Monday - “Losing Humanity” - calling for a preemptive ban on the development, production, and use of fully autonomous robotic weapons, or ‘killer robots’. Echoing H.G. Wells’ prescient sentiments, Steve Goose, Arms Division director at Human Rights Watch, said, “Action is needed now, before killer robots cross the line from science fiction to feasibility.”

“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Goose. “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”

The increasing proliferation and use of remotely operated robotic weapons such as aerial drones (also called unmanned aerial vehicles, or UAVs) has raised concern that trends in military technology are reducing human oversight and control of the use of violence. Particularly unsettling are arms development efforts to increase the ‘autonomy’ of military robots, enabling them to navigate, process information, target and even kill without direct human involvement.

Though it is the latest - and potentially most influential - voice in support of such a ban, Human Rights Watch is not alone.

"I believe that civil society has a right and obligation to take action when they believe that a government or military is behaving incorrectly in their name," said Jody Williams who won the Nobel Prize in 1997 for coordinating the International Campaign to Ban Landmines. “That was part of what helped inspire the campaign to stop landmines, the coalition to stop cluster munitions, both of which were successful in achieving treaties. I know we can do the same thing with killer robots.”

Landmines are like analogue, static predecessors of the autonomous armed robots currently under development: they respond to a certain stimulus (such as a victim stepping on a pressure plate or triggering a trip wire) with violent force, independent of direct human control. The 1997 international treaty banning antipersonnel landmines established a norm against the application of violence by an autonomous device.

Early concerns about autonomous armed robots arose in the academic community, among roboticists, computer scientists, ethicists, social scientists and arms control experts who saw the potentially disturbing implications of military robotics research. In 2009, a small group of these scholars came together to organize the International Committee for Robot Arms Control (ICRAC), calling for bans on fully autonomous military robots, on arming robots with nuclear weapons, and on robotic weapons in space. A year later, ICRAC members were among a broader group of academics who, meeting in Berlin, called for the development of an “arms control regime” for “armed tele-operated and autonomous robotic weapons.”

“There’s nothing in artificial intelligence or robotics that could discriminate between a combatant and a civilian,” said Dr. Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield and chairman of ICRAC. “It would be impossible to tell the difference between a little girl pointing an ice cream at a robot or someone pointing a rifle at it.”

This growing “epistemic community” of concerned scholars has been joined, in the last few years, by several influential voices in the global policymaking arena. In 2010, the UN Special Rapporteur on Extrajudicial Killings, Philip Alston, raised concerns about the use of drones in extrajudicial killings in his report to the UN Human Rights Council. Similarly, in 2011, Dr. Jakob Kellenberger, President of the International Committee of the Red Cross (ICRC), called for greater “discussion” about the “potential humanitarian consequences [and] impact of developing technologies.”

Then, in March 2012, the advocacy group Article 36, which had played a significant role in the cluster munitions campaign, called for a complete ban on autonomous armed robots, saying, “Decisions to kill and injure should not be made by machines and, even if at times it will be imperfect, the distinction between military and civilian is a determination for human beings to make.”

Increasing concerns about US drone strikes, particularly in Pakistan, Yemen and Somalia - notably following critical, high-profile reports from NYU and Stanford, Columbia University and Medact - have, in the last few months, intensified attention on the potential dangers of roboticizing warfare.

The developing network of people and organizations calling for a global regime to regulate robotic weapons, and specifically to prohibit fully autonomous ones, has not yet formalized into an organized campaign. However, there is mounting pressure from diverse corners of civil society, media and academia to safeguard humanity from our own destructive technological potential.

As we face an increasingly roboticized and computerized future, we should consider again the early 20th-century reflections of H.G. Wells. Science and technology, he said, “give us powers novel in history and bring mankind [sic] face to face with dangers such as it has never confronted before…. [O]n the one hand, they render possible such a reasoned coordination of human affairs as has never hitherto been conceivable, … on the other, they so enlarge and intensify the scope and evil of war….”

In the development of robotic autonomy, we cannot simply hope that “reasoned coordination of human affairs” will win out over the “evil of war.” It is something we must work for, call for, campaign for, advocate for.

Dr. Matthew Bolton is assistant professor of political science at Pace University in New York City. He is a member of the International Committee for Robot Arms Control and has written widely about humanitarian and disarmament issues, particularly landmines and cluster munitions.