Ban or Boundaries? Civil Society Initiatives on Regulating Autonomous Weapon Systems
by Miklai Kamilla
Reading time: 5 minutes
The introduction and development of the military technology known as Autonomous Weapon Systems (AWS) has created an entirely new warfare paradigm, one that demands answers about human involvement in lethal combat decisions during military operations. Why is it so concerning? Once activated, an AWS can detect, identify, and track targets, and even carry out attacks, in the air and on the ground, without any human supervision or control. This marks a radical shift in the logic and conduct of warfare, raising ethical, legal, and strategic questions about the role of humans in life-and-death decisions on the battlefield. The rapid pace of technological advance raises the question: how can we ensure that these new weapons fit within the international legal framework and the rules of war? In this article, I explore the anticipated treaty negotiations and the ongoing efforts of civil society.
One possible avenue for regulating autonomous weapon systems (AWS) is traditional arms control, a conventional tool of international humanitarian law that aims to minimize war-related casualties, safeguard non-combatant populations, and limit human suffering. One of its main multilateral forums is the Convention on Certain Conventional Weapons (CCW), a modular treaty structure regulating the use of particular weapon types, such as landmines and blinding lasers. The CCW has addressed AWS since 2013, first through informal meetings of experts and, since 2016, through the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), which has scrutinized AWS through legal, ethical, and technical lenses. However, owing to opposition from major military powers, the CCW’s consensus-based decision-making process has proven ineffective: attempts to define meaningful human control or to adopt a binding protocol have consistently failed.
Recognizing these institutional barriers, the UN has explored new regulatory approaches. As early as May 2013, the UN Human Rights Council held its first debate on lethal autonomous robotics, at which Special Rapporteur Christof Heyns called for a global moratorium on these systems, prompting more than 20 states to speak on the issue for the first time and marking the beginning of serious UN-level engagement. Since then, the Human Rights Council has continued to discuss the human rights implications of AWS, and in 2021 the Special Rapporteur on Minority Issues called for a total ban on fully autonomous weapons. A major milestone was reached in 2023, when the UN General Assembly adopted Resolution 78/241, its first resolution on lethal autonomous weapons systems, which placed the issue firmly on the Assembly’s agenda and fed into calls to conclude negotiations on a legally binding treaty by 2026. Although not itself legally binding, the resolution carried strong symbolic power, highlighting the existing legal uncertainty and lending weight to long-standing civil society demands for a ban.
The UN Secretary-General’s 2023 New Agenda for Peace dedicates an entire section to AWS regulation, calling for immediate political action and the establishment of new legal frameworks. The agenda builds on his earlier warning: at the Paris Peace Forum in 2018, he took a strong stance, arguing that “For me there is a message that is very clear – machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law.”
As noted above, the political deadlock in intergovernmental negotiations on AWS has led civil society organizations to take charge of the debate and defend human rights and humanitarian principles. Two organizations, Human Rights Watch and the Campaign to Stop Killer Robots, have become the most influential actors in this movement. Human Rights Watch (HRW) is a globally active non-governmental organization, founded in 1978, whose aim is to expose systematic human rights violations. Its researchers draft reports from field interviews, official documents, satellite imagery, war debris, and other evidence, which are published in multiple languages to shape public opinion and put pressure on decision-makers. Beyond reports, HRW also makes concrete, immediately actionable policy recommendations.
The report Losing Humanity: The Case Against Killer Robots, published in November 2012 by Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC), was the first to call for a preventive ban on fully autonomous weapon systems. The 50-page report outlines concerns that fully autonomous weapons lack the qualities – such as human judgement, accountability, empathy, and moral reasoning – that provide checks on the authority to use lethal force, an absence that can result in the killing of innocent civilians. It calls on states to adopt national and international legal measures prohibiting the development, production, and use of fully autonomous weapons, ensuring that humans remain involved in lethal decision-making. The report also urges governments to conduct transparent reviews of new and modified weapons and relevant technologies to ensure their compliance with international humanitarian law, an obligation also codified in Article 36 of Additional Protocol I to the Geneva Conventions.
The Campaign to Stop Killer Robots, launched in 2013, brings together civil society organizations with the goal of achieving nothing less than a complete international ban on autonomous weapons. It argues that AWS are fundamentally incompatible with human rights standards, as they violate basic human dignity by mechanizing warfare. Its Steering Committee includes Human Rights Watch, Amnesty International, and PAX, among others. The campaign’s vision and values include human dignity, meaningful human control, accountability, and technological responsibility: it wants technology to serve humanity by fostering peace, advancing justice and equality, and promoting respect for human rights and the law.
The Campaign contends that autonomous weapons fundamentally threaten human dignity by reducing people to data points, reinforcing algorithmic biases, and removing meaningful human control from life-and-death decisions. It argues that such systems endanger civilians and, on a larger scale, global security, as they create severe risks, including unpredictability and a lack of accountability, which can lower the threshold for entering conflict. To this end, the Key Elements of a Treaty on Fully Autonomous Weapons was drafted, proposing a legally binding international instrument focused on retaining and reinforcing meaningful human control. To achieve this, it combines general obligations, clear prohibitions, and positive requirements for states within the treaty.
In addition, the International Committee of the Red Cross (ICRC), the main guardian of international humanitarian law, plays a vital role in preventing emerging military technologies from undermining human dignity or civilian legal safeguards. The ICRC has worked persistently to encourage states to develop stronger regulatory frameworks. In March 2014, it organized its first expert meeting on autonomous weapon systems to examine their humanitarian risks, followed by a second expert meeting in March 2016 to deepen the legal and ethical analysis. In August 2022, the ICRC published a statement calling on states to act and to take steps toward a new treaty. Most recently, on 26 April 2023, ICRC President Mirjana Spoljaric appealed at the Luxembourg Autonomous Weapons Systems Conference for states to demonstrate political leadership and negotiate a legally binding instrument regulating autonomous weapon systems. The President and the UN Secretary-General later repeated this call in a joint appeal, urging states to establish clear prohibitions and restrictions and to conclude such negotiations by 2026.
All these political declarations, resolutions, articles, and international efforts show that the international system currently finds itself in a regulatory interregnum. The technological development of autonomous weapons has advanced faster than international law and ethics have been able to follow, creating a gap – a vacuum – between what weapons are capable of and what the international framework is able to govern. This gap entails risks of digital dehumanization, algorithmic and other forms of bias, human rights violations, and a lack of legal accountability, especially in the context of war. The void has pushed civil society to take a more active role in urging states to address the growing regulatory gap, while also articulating and reinforcing the principles of humanity and ethics that should guide future negotiations and treaty drafting. New momentum emerged in 2025, particularly with the first-ever UN General Assembly meeting dedicated to autonomous weapons and a joint statement by 42 states declaring their readiness to move forward with negotiations. The coming year may bring greater clarity to the many questions and concerns surrounding the regulation of autonomous weapon systems, if states choose to seize this opportunity. It will determine whether autonomy in warfare remains anchored in human judgement and legal safeguards, or is allowed to evolve beyond meaningful human control, fundamentally reshaping the boundaries of war and global security.
Image: Created with OpenAI