In early December, as governments met at the United Nations in Geneva to decide whether to open negotiations on new international standards for lethal autonomous weapons systems (known as “killer robots”), members of the Campaign to Stop Killer Robots demonstrated under the slogan “Stop Killer Robots”. They are calling on governments to oppose the development of weapons systems that can select and attack targets without human intervention. The concerns and legal uncertainty surrounding these systems point to a need for regulation.
Lethal autonomous weapons systems without meaningful control
A report entitled “Crunch Time on Killer Robots: Why New Law Is Needed and How It Can Be Achieved”, released this month, calls on governments to open negotiations on a new treaty to maintain human control over the use of force. The 23-page report, written by Human Rights Watch and the International Human Rights Clinic at Harvard Law School, finds that international law should be strengthened and clarified to protect humanity from the dangers posed by lethal autonomous weapons systems, which select and engage targets without meaningful human control.
Bonnie Docherty, senior researcher in the Arms Division at Human Rights Watch and associate director of armed conflict and civil protection at the Harvard clinic, states:
“After eight years of discussion about the grave consequences of abandoning human control over the use of force, countries should now decide how to respond to these threats. A treaty is urgently needed to fill the gaps in international humanitarian law and to update it to address the legal, ethical, and societal issues raised by current artificial intelligence and emerging technologies.”
Protecting humanity from killer robots
The Sixth Review Conference of the Convention on Certain Conventional Weapons (CCW), held December 13-17, is a key turning point in the discussions on killer robots. At the last meeting in September, most countries called for the adoption of a new legally binding instrument on lethal autonomous weapons systems. Chile, Mexico and Brazil urged the parties to the convention to agree to begin negotiations to that end. They were joined by other states, including the “Group of 10” (Argentina, Costa Rica, Ecuador, El Salvador, Palestine, Panama, Peru, the Philippines, Sierra Leone and Uruguay) and members of the Non-Aligned Movement.
Beyond the CCW, possible negotiating forums include an independent process, such as those that led to the treaties banning anti-personnel mines and cluster munitions, and the UN General Assembly.
As Human Rights Watch and the Harvard clinic point out, international humanitarian law is currently inadequate to address the problems posed by lethal autonomous weapons systems. There is broad support for a new norm that would address the humanitarian, ethical, human rights, accountability, and security concerns these weapons systems raise.
The new treaty should cover weapons systems that select and engage targets on the basis of sensor data rather than human input, including systems that rely on machine learning algorithms producing unpredictable or unexplainable effects. Some countries have also expressed a desire to ban weapons systems that use biometric profiles and other sensor data to identify, select, and attack individuals or categories of people.
According to some countries, these bans should be complemented by regulations ensuring that all other autonomous weapons systems are used only under effective human control. For a technology, “meaningful human control” means that it must be understandable, predictable, and limited in space and time.
Progress toward negotiations within the CCW is unlikely, since the body decides by consensus and several military powers oppose them, notably India, Russia, and the United States, which argue that existing international humanitarian law is sufficient to address all the problems these weapons systems raise. These countries, along with others such as Australia, China, South Korea, Israel, and Turkey, are investing heavily in military applications of artificial intelligence and related technologies to develop autonomous air-, land-, and sea-based weapons systems.
Bonnie Docherty stated:
“An independent process to negotiate a new standard on killer robots would be more effective and inclusive than the current diplomatic talks and alternatives being considered. But accelerating this process will only be possible with the active support of political leaders.”
Toward an outright ban?
Calls to ban killer robots are growing among citizens, countries, institutions, and private companies. In May 2021, the International Committee of the Red Cross called on states to negotiate an international treaty banning unpredictable autonomous weapons systems and those that target people, and to adopt regulations ensuring meaningful human control over all remaining systems. Since 2018, UN Secretary-General António Guterres has called on states to ban weapons that can target and attack humans on their own, calling them “politically unacceptable and morally revolting.”
Human Rights Watch is a co-founder of the Stop Killer Robots campaign, a coalition of more than 185 nongovernmental organizations in 67 countries advocating for a treaty that requires meaningful human control over the use of force and bans weapons systems that operate autonomously.
Bonnie Docherty concludes:
“Much of this opposition stems from moral repugnance at the idea of machines deciding over people’s lives and deaths. A new treaty would fill the vacuum in international law and uphold the principles of humanity and the dictates of public conscience in the face of emerging military technologies.”
Translated from Face aux robots tueurs, un rapport demande la négociation d’un nouveau traité pour protéger les populations