Lethal Autonomy: Machine Learning, Operation, and Predictability. Can We Truly Control It?
by Miklai Kamilla
Reading time: 6 minutes
In recent years, technological advances have brought autonomous weapon systems out of science fiction and onto the battlefield. Autonomous weapon systems (AWS) not only increase battlefield efficiency; they fundamentally change the structure and dynamics of modern warfare. One example comes from the Libyan civil war, in which the Tripoli-based government and the Libyan National Army (LNA) led by Khalifa Belkassim Haftar fought for control of the country. During the conflict, both sides used modern weapon systems, including armed drones. One such weapon was the Turkish-made STM Kargu-2 autonomous combat drone, deployed by forces aligned with the Tripoli government against Haftar's LNA. According to a 2021 UN Panel of Experts report, in March 2020 a Kargu-2 drone, designed to recognize and attack enemy combatants, in this case Haftar's soldiers retreating from Tripoli, identified targets in a populated area and launched an attack without requiring a data connection to a human operator, reportedly causing casualties. This was one of the first documented cases in which an autonomous drone may have carried out a lethal attack independently, without human intervention. It naturally raises the question of how to ensure the predictability and controllability of autonomous systems' decisions, as well as their future training, in situations where human intervention is not an option. This article seeks to explore and answer that question.
Let's examine the components of an AWS. Designers of autonomous systems typically describe them as consisting of two main components: the machine or process to be controlled, and the device that directly controls its behavior (usually called the "controller" or "control system"). The familiar software/hardware split from computer science illustrates this structure well: the "software" is an AI-supported algorithm that serves as the target acquisition system, while the "hardware" is the executor, carrying out the attack mechanically. The latter is distinctive in that it no longer contains mechanisms allowing human intervention. A machine gun becomes an autonomous weapon when we replace the trigger, the mechanism through which a human fires it, with software, so that the algorithm takes over the place and role of the human. It is important to note, however, that the absence of real-time interaction with an operator during the operation of a weapon system does not mean that the behavior of the machine, in this case a machine gun, is not determined by a human being. Rather, the task to be performed is defined before the system is activated, and the machine then executes it autonomously with the support of a control system that is connected to, or communicates with, the weapon. This control system monitors the machine's operation and intervenes when necessary, instructing it to realize the behavior predetermined by the human operator: the machine gun does not fire meaningless, continuous bursts, but fires only when required and in pursuit of a defined military objective.
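The controller idea described above can be sketched in a few lines of code. This is a minimal illustration, not a real weapon control system: the class names (`EngagementRule`, `Controller`) and the two constraints (a round budget and a set of valid target classes) are hypothetical stand-ins for the "behavior predetermined by the human operator" that the text describes.

```python
from dataclasses import dataclass

@dataclass
class EngagementRule:
    """Human-defined constraints, fixed before the system is activated."""
    max_rounds: int
    valid_target_classes: set

class Controller:
    """Monitors the machine and permits firing only within the
    behavior predetermined by the human operator."""
    def __init__(self, rule: EngagementRule):
        self.rule = rule
        self.rounds_fired = 0

    def authorize(self, target_class: str) -> bool:
        within_budget = self.rounds_fired < self.rule.max_rounds
        valid_target = target_class in self.rule.valid_target_classes
        if within_budget and valid_target:
            self.rounds_fired += 1
            return True
        return False

rule = EngagementRule(max_rounds=2, valid_target_classes={"combatant"})
ctrl = Controller(rule)
print(ctrl.authorize("combatant"))  # True: within the predefined rule
print(ctrl.authorize("civilian"))   # False: not a predefined target class
print(ctrl.authorize("combatant"))  # True: second round still in budget
print(ctrl.authorize("combatant"))  # False: round budget exhausted
```

The point of the sketch is that every constraint is set by a human before activation; at runtime the controller only enforces them, which is exactly the sense in which the machine's behavior remains human-determined even without real-time interaction.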
Systems supported by artificial intelligence can quickly analyze a situation using large amounts of data and then act on that basis, modeling different combat situations, targets, and environmental conditions. In addition, they use machine learning (ML) to find statistical correlations and patterns that support individual decisions, and they are extremely adaptable. ML is a subfield of artificial intelligence that enables computers to learn without being explicitly programmed. It evolved from the study of pattern recognition and builds on the idea that algorithms can learn from data and make predictions; predictive modeling therefore overlaps significantly with ML. The algorithms used in ML produce two types of predictive models: classification models, which predict class membership, and regression models, which predict a number. The system designer decides which type of ML model, classification or regression, is appropriate for a given application. For example, a regression model would be appropriate for predicting the success of a given tactical operation, while a classification model could be used to assign targets within an image to certain classes. For ML algorithms to create predictive models, they must have data from which to learn. This data, commonly referred to as training data, consists of past examples collected from an application over time and contains predictor variables along with the target variable to be predicted. By learning the relationship between the predictor and target variables, an ML algorithm can search for and determine an optimized predictive model. However, for future predictions to be accurate, the new data presented to the model must have characteristics similar to the training data.
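The two model types and the role of training data can be made concrete with a deliberately tiny example. The sketch below uses a one-nearest-neighbor predictor, one of the simplest ML techniques; the data values and the scenarios (thermal signatures, force ratios) are invented for illustration and do not come from any real system.

```python
def nearest_neighbor_predict(train, query):
    """Predict the target of the training example whose predictor value
    is closest to the query. The same procedure acts as a classifier
    when the targets are class labels, and as a regressor when the
    targets are numbers."""
    best = min(train, key=lambda example: abs(example[0] - query))
    return best[1]

# Classification: predictor = (invented) thermal signature, target = class label
labeled = [(0.9, "vehicle"), (0.2, "background"), (0.8, "vehicle")]
print(nearest_neighbor_predict(labeled, 0.85))  # vehicle

# Regression: predictor = (invented) force ratio, target = success rate
outcomes = [(0.5, 0.3), (1.0, 0.6), (2.0, 0.9)]
print(nearest_neighbor_predict(outcomes, 1.1))  # 0.6
```

The example also shows why new data must resemble the training data: a query far outside the range of the stored examples would still return the nearest one, producing a confident-looking but meaningless prediction.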
Collecting training data for ML is often a tedious process of selection and compilation, in which predictor variables are conditioned and linked to the corresponding output variable to be predicted. Predictive models can be trained or updated over time in response to new data or values, which keeps them accurate as circumstances change. These models become decisive at the moment the system "selects and attacks targets." The term "select" is generally understood as a choice within a given group, while "engage" in a military context is generally understood as "participation in combat," but it requires a more precise definition. In the context of autonomous weapon systems, "attack" (or "engage") can refer to at least three different points in time: (1) when the system is activated; (2) when the system operationally selects a target; or (3) when the system actually uses a device designed to destroy, injure, or kill the selected target.
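The three candidate meanings of "engage" can be written down as distinct phases, which makes the ambiguity in the text explicit. This is a hypothetical sketch: the enum name `EngagePhase` and the helper `human_in_loop_until` are illustrative, not terms from any legal instrument.

```python
from enum import Enum, auto

class EngagePhase(Enum):
    """Three distinct points in time that 'attack'/'engage' may refer
    to for an autonomous weapon system, per the distinction above."""
    ACTIVATED = auto()        # (1) system is switched on by an operator
    TARGET_SELECTED = auto()  # (2) system operationally selects a target
    FORCE_APPLIED = auto()    # (3) system actually uses its lethal device

ORDER = [EngagePhase.ACTIVATED, EngagePhase.TARGET_SELECTED,
         EngagePhase.FORCE_APPLIED]

def human_in_loop_until(phase):
    """Return the phases that occur under direct human involvement,
    given the last phase at which a human acted."""
    return ORDER[:ORDER.index(phase) + 1]

# If the human's last act was activation, both later phases happen
# autonomously, which is where accountability questions concentrate.
print([p.name for p in human_in_loop_until(EngagePhase.ACTIVATED)])
# ['ACTIVATED']
```

The design choice matters because a legal or ethical analysis that says "the human decided the engagement" is making a very different claim depending on which of the three phases it means.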
The distinction between automation and autonomy is often unclear, since both kinds of system are "capable of independently selecting and attacking targets" on the basis of predefined programming, as the ICRC points out. This raises the question of how much "freedom" a system must have, without human intervention, to be considered autonomous. According to the ICRC, the difference lies in the degree of autonomy the weapon system has in selecting and attacking targets. Christof Heyns, the UN Special Rapporteur, draws the distinction from the nature of the system's operating environment: "automated systems operate in a predetermined, structured environment, while autonomous systems are capable of operating in a dynamic, less predictable environment." Clearly, the concepts of autonomous (without human intervention) and automated (with human intervention) operation are becoming blurred. When working with autonomous systems, humans mostly play supervisory or collaborative roles, and they can switch between these roles while performing tasks. One taxonomy, for example, lists five possible roles for humans working with robots: supervisor, operator, mechanic, companion, and observer. These roles are not necessarily static: mixed-initiative interaction is a flexible strategy in which each agent contributes to the task in the way it knows best, so the roles of humans and computers (or robots) are often not predetermined but negotiated in response to changing circumstances.
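The idea that roles are negotiated rather than fixed can be sketched as a small decision rule. The role names follow the taxonomy cited in the text, but the trigger conditions (`system_healthy`, `system_confident`) and the function `negotiate_role` are invented for this illustration.

```python
from enum import Enum

class HumanRole(Enum):
    """Five possible human roles when working with robots,
    per the taxonomy cited in the text."""
    SUPERVISOR = "supervisor"
    OPERATOR = "operator"
    MECHANIC = "mechanic"
    COMPANION = "companion"
    OBSERVER = "observer"

def negotiate_role(system_healthy: bool, system_confident: bool) -> HumanRole:
    """Mixed-initiative sketch: the human's role is renegotiated as
    circumstances change rather than assigned once in advance."""
    if not system_healthy:
        return HumanRole.MECHANIC    # hardware fault: human repairs
    if not system_confident:
        return HumanRole.OPERATOR    # low confidence: human takes control
    return HumanRole.SUPERVISOR      # nominal operation: human monitors

print(negotiate_role(True, True).value)    # supervisor
print(negotiate_role(True, False).value)   # operator
print(negotiate_role(False, False).value)  # mechanic
```

Even this toy rule shows the key property of mixed-initiative interaction: who is "in control" is an output of the situation, not a fixed design parameter.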
The ability of the weapon designer or operator to predict how a weapon system will behave when activated is essential to fulfilling legal obligations. Given the nature and purpose of AWS — namely, to remove humans from positions of direct execution or potential oversight of combat operations — predictability of behavior becomes a particularly pressing issue. It is critical for armed forces to understand the extent to which, and the circumstances under which, the limited ability to predict the combat behavior of AWS may affect their lawful modes of use.
Controlling autonomous weapon systems carries the risk that, if an AWS makes a mistake, the scale and consequences of that mistake could far exceed those of a similar mistake made by a human operator. Weapon systems are mass-produced, so a large number of systems share virtually identical hardware and software, which means a single hardware or software flaw can manifest across the entire fleet at once rather than in one isolated unit. When certain human operator roles are handed over to weapon control systems, it is therefore not the risk of individual systems that must be assessed, but the aggregate risk arising from the combined use of all such systems. Humans, by contrast, have unique characteristics, so even in the same situation and with the same training, they may behave differently. To prevent erroneous decisions, a number of safety mechanisms are built into the operation of AWS. First, the system is continuously calibrated and tested to ensure its accuracy. Second, multiple sensor systems and various data sources are used so that the system does not make fatal decisions based on a single faulty sensor or piece of data, and in some systems human operators are also involved in decision-making, especially when the system shows uncertainty in a given situation. Finally, continuous updates and regular reassessments ensure that the systems remain adapted to current environmental and tactical conditions.
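Two of the safety mechanisms just described, multi-sensor agreement and deferral to a human when the system is uncertain, can be combined into a single decision gate. The following is a minimal sketch under invented assumptions: the function name, the confidence threshold of 0.9, and the requirement that two independent channels agree are all hypothetical parameters, not values from any fielded system.

```python
from collections import Counter

def decision_gate(sensor_votes, threshold=0.9, agreement=2):
    """Require several independent sensor channels to agree before
    acting, and refer ambiguous cases to a human operator.

    sensor_votes: list of (label, confidence) pairs, one per channel.
    Returns ("engage", label), ("hold", None), or ("refer_to_human", label).
    """
    confident = [label for label, conf in sensor_votes if conf >= threshold]
    if not confident:
        return ("hold", None)  # no channel is confident: do nothing
    label, count = Counter(confident).most_common(1)[0]
    if count >= agreement:
        return ("engage", label)  # independent channels corroborate
    return ("refer_to_human", label)  # lone confident channel: human decides

print(decision_gate([("target", 0.95), ("target", 0.92), ("clutter", 0.4)]))
# ('engage', 'target')
print(decision_gate([("target", 0.95), ("clutter", 0.2), ("clutter", 0.3)]))
# ('refer_to_human', 'target')
print(decision_gate([("target", 0.5), ("clutter", 0.4)]))
# ('hold', None)
```

The design choice embodied here is conservative failure: disagreement or low confidence never escalates to action, only to inaction or to a human, which is precisely what the safeguards in the text aim for.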
We can see that weapon systems have evolved from automated tools into ML-driven agents capable of operating independently, often with lethal consequences for the civilian population due to mistakes such as sensor failures, misclassification errors, data-driven biases, and unpredictable system behavior. Data is a map of reality, a reduction of it, and it is essential for algorithmic decision-making, since the weapon systems used in military operations run on databases and sensors. But sensors cannot read reality, and a database cannot recreate it as a whole: they capture only a smaller part of it and transform that part into data. Since data is not a perfect copy of reality, with all its details, factors, and coordinates, a machine cannot make error-free choices; it could do so only if we developed a perfect database for it to use. We must look behind the algorithm to understand how it works. Behind the algorithm stands a team of software developers; behind the developers stand investors who expect to make profits; and behind every use stands a possible civilian or individual affected by a decision made through hundreds of thousands of layers of artificial neuron links and data-connecting schemes. Therefore, we must (re)think carefully whether we trust these weapon systems with the responsibility and authority to make crucial decisions on the battlefield, decisions that carry consequences for civilians and combatants alike.
Image: Created with OpenAI