Why Regulating Lethal Autonomous Weapons Is the Better Option

In today’s society, one cannot take Artificial Intelligence (AI) out of the equation. AI is becoming more deeply ingrained in our society every day, with developments such as early stroke detection in healthcare, robots that protect crops from weeds, and the discovery of new solar systems. These are helpful developments that advance the technological maturity of our society. However, like any powerful technology, AI finds its way into other areas as well, including autonomous weapon systems. In this essay, we focus on Lethal Autonomous Weapons (LAWs), colloquially called ‘killer robots’. The Convention on Certain Conventional Weapons (CCW) is a United Nations treaty dating from 1980 that prohibits or restricts several conventional weapons, and its expert sessions are where LAWs are discussed. Throughout the literature, various definitions of LAWs are used, which makes the topic complex to reason about. The word ‘autonomy’ in particular is a difficult concept. A common misconception is that autonomy is unidimensional: a single notion that everyone comprehends. This is not the case. The three most commonly used definitions of autonomy in the context of AI are:

  1. Robots that have self-governance and create their own beliefs, reasons, motives, and can learn.
  2. Robots that are capable of unsupervised operation.
  3. Robots which are automated in such a way that they follow a set of pre-programmed rules and base their decisions on those rules. 

As can be seen, these definitions overlap slightly, yet in the scientific and public debate they are used interchangeably, which complicates the discussion. For simplicity, we will use autonomy as a term that encapsulates all three definitions. There is also the distinction between semi- and fully autonomous weapons, but we will refrain from using it here. LAWs are thus robots that are themselves a weapon or that carry a weapon, and that possess autonomy as defined above. One can imagine that the global use of LAWs would have implications for warfare and could violate fundamental principles of International Humanitarian Law (IHL). These principles comprise the law of distinction and the law of proportionality: the former requires that parties be able to distinguish between combatants and civilians, whereas the latter requires that collateral damage to civilians not be excessive. For these reasons, at least 28 governments are in favor of a ban on LAWs. In this opinion article, we argue that, instead of banning LAWs, they should be regulated. Regulated LAWs can bring many benefits to our society. Compared to human soldiers, autonomous weapons comply with the rules, have elevated accuracy, and are replaceable. We surmise that autonomous weapons could make warfare more precise and humane, with fewer casualties and less damage, and that banning them would require an unrealistic global effort. In the following sections, we delve deeper into these arguments and explain why we are of the opinion that LAWs should be regulated.

Human dignity & trust

Apart from the difficulties of global security and accountability, two arguments are most often made by supporters of a ban on LAWs: autonomous weapons would undermine human dignity, and they would be untrustworthy in the complex contexts of warfare. Robots and computers are already used to make decisions that affect humans in many aspects of life. LAWs represent an extreme example of such decision-making tools, as they could select and attack targets without human interference. In other words, autonomous weapon systems can make the decision to take human lives. The governments and scientists calling for a total ban therefore believe that LAWs risk eroding the fundamental consideration of humanity. Life-and-death decisions should not be made by algorithms, since every human has the right to be treated with respect. The deployment of LAWs would be dehumanizing, as taking a human life should only occur as a last resort, and only through meaningful human decision-making.

Moreover, the supporters of a ban are of the opinion that LAWs cannot be trusted, as they do not show strong predictability in the complex and changing contexts of war. Although LAWs would be able to detect and distinguish humans, the fear exists that autonomous weapons would not reliably separate combatants from non-combatants, let alone innocents. The training and machine learning required would be difficult given the harsh and precarious conditions of warfare, undermining human trust. Proponents of a ban argue that the risk of accidents and escalation of conflicts is real, and that military operators need sufficient trust to use LAWs in combat; trust is critical to a warfighter in a hostile environment. The supporters of the ban believe that LAWs are not predictable enough to ensure the needed trust and the intended results, and should therefore be banned to ensure global safety.

Other weapons, training & IHL

We do not consider these arguments strong enough to ban all LAWs. Although LAWs could violate the principle of human dignity, they are not unique in this regard. Especially in warfare, other technologies and weapons exist that also endanger human dignity, such as explosive weapons with wide-area effects used in cities and towns. Over the last few decades, these weapons have placed millions of innocents at risk, causing devastating harm to civilians and civilian objects. From this perspective, it would be more manageable to regulate LAWs than to ban them, since opponents of a ban could easily point to other permissible yet dehumanizing weapons, which may result in no statutes concerning LAWs at all. It is therefore wiser to regulate the development and use of lethal autonomous weapons to ensure they are restricted.

Furthermore, the supporters of a ban believe LAWs cannot be trusted because of their unpredictability in the contexts of warfare. However, it turns out that the users of LAWs could develop trust through training, by learning the capabilities, values, functionalities, and dispositions of autonomous weapons. Even though current training and deployment processes do not yet include this, multiple changes to these practices are already being set up to facilitate the rise in trust that is required for the appropriate and ethical use of LAWs. By learning the different aspects of LAWs and developing trust in them, LAWs could be used as intended. Regulation should improve trust in LAWs as well. International Humanitarian Law regulates the mechanisms and methods allowed in warfare; to follow its agreements, autonomous weapon systems will need to be able to differentiate between combatants and civilians and use their arms proportionally. These requirements all appear to be within the scope of technological possibility, ultimately making LAWs trustworthy enough to be used safely in warfare.

Regulating LAWs

Instead of imposing a ban, we are of the opinion that LAWs should be regulated. With a ban, the world could miss out on the benefits that LAWs can bring, such as their compliance, accuracy, and replaceability. Moreover, LAWs can contribute to international peace and security by, for example, protecting humanitarian missions. Unlike human beings, robots can be programmed to adhere to military codes and rules without the interference of human prejudice, bias, fear, and fatigue. Especially under the duress of war, human soldiers are prone to making subjective decisions, sometimes with catastrophic consequences. LAWs, on the other hand, will in the future be able to comply with the principle of distinction, and will objectively process information, identify targets, and protect civilians. In addition, autonomous weapons could be more precise and effective than other weapons, causing less unnecessary damage and fewer casualties in war zones. The increased accuracy of targeting and the potential of machine learning could make war more precise and more humane. Furthermore, LAWs are replaceable in a way human soldiers are not: autonomous weapons could perform dangerous tasks without risking human lives, leading to fewer human casualties in warfare. Lastly, LAWs and related configurations could be used for peacekeeping missions and safeguarding innocents. They could help protect humanitarian convoys, defend refugee camps, and perform hostage rescue missions.

Furthermore, we suspect that a global ban on LAWs would require an enormous and unrealistic effort of global cooperation, which we think is not achievable in a relatively short amount of time. If such cooperation were possible, a total ban on warfare itself would not be far from reality. A ban would also encourage cheating by countries that have an advantage in the arms race; who is to say that some security services are not already developing LAWs to gain an advantage over other governments? A global ban would therefore be infeasible. Historically, treaties have only ever had a portion of the world's countries as members, and it turns out that states are more likely to comply with a regulatory scheme than with a total prohibition of LAWs.

Existing bans & terrorist organisations

Proponents of a ban on LAWs claim that a prohibition on autonomous weapon systems would certainly be possible, since the CCW already successfully imposed a ban on blinding laser weapons in 1995. This ban not only prohibited the use of these weapons but also their transfer. Similar to the situation concerning LAWs, this ban was preventive, meaning the weapons in question had not yet been introduced on the battlefield. Even though the campaigns at the time were less coordinated and extensive than the current campaigns against LAWs (think of the Campaign to Stop Killer Robots, which has a large audience), the ban still made it through. The parties favoring a ban over regulation are therefore of the opinion that preventing LAWs from entering the battlefield might prove just as successful as the ban on blinding laser weapons. Moreover, the ban on Anti-Personnel Landmines (APLs) was deemed a good step towards humanitarian disarmament.

Furthermore, banning LAWs would make it harder for terrorist organisations to get their hands on them, because the weapons would be less available on international markets. The international stigmatization of weapons prohibited by a large number of countries would also lead to less misconduct: terrorist organisations have a large collection of controversial weapons at their disposal, such as chemical and biological weapons, but rarely use them. Advocates therefore believe that by imposing a ban on LAWs, instead of only regulating them, a global norm would be established more quickly. They suggest that this norm would also apply to countries that have not acceded to the particular treaty. Finally, governments and scientists surmise that extending International Humanitarian Law to cover LAWs would make things unnecessarily complicated; a ban would prevent this time-consuming and complex legislation.

Complexity, prevention & the arms race

We consider it inapt to compare a ban on laser weapons or APLs to a ban on LAWs. The main problem that emerges when talking about banning ‘killer robots’ is the issue of complexity. As mentioned before, LAWs are comprehensive systems that do not adhere to a single definition and should therefore not be seen as merely another category of weapons. It is precisely this complexity that calls for a rigorous framework of regulations: categorizing all types and variations of LAWs, defining the role of human involvement, and updating the framework in due course. Even the ban on APLs entailed difficulties. At an expert meeting of the CCW, the discussions about that ban were anything but easygoing, because no prior differentiation between APLs, anti-tank mines, and ‘dumb’ and ‘smart’ mines existed. If defining APLs for a ban already proved tedious, prohibiting ‘killer robots’ would pose an even larger challenge.

We also disagree that a ban on LAWs would serve as a preventive measure. Much like other applications of AI, autonomous weapon systems have already been introduced in several countries. For example, in 2016, Islamic State took credit for a drone attack that killed two Iraqi soldiers. At the border between North and South Korea, a sentry gun has been installed that can allegedly register movements and fire without human intervention. Moreover, the U.S. is already developing tanks that operate autonomously. Depending on the definitions, some of these could be considered LAWs, and they are examples of an impending change in warfare. We therefore believe it is too late to claim that a total ban on LAWs could be a preventive measure: it is only a matter of time before these tanks and other autonomous weapons are officially introduced on the battlefield.

Furthermore, we find the argument of an international norm concerning LAWs unconvincing. Creating an international moral rejection of, for example, chemical weapons has been a lengthy, arduous, and ongoing process. Despite this taboo on chemical weapons, the government of Syria has still used them in the past decade. If even a government does not morally reject the use of certain weapons, we find it naive to assume that terrorist organisations will consider LAWs taboo on the battlefield. In addition, even with a ban in place, terrorist organisations would most likely still acquire LAWs through clandestine channels. As a consequence, terrorist organisations would take the lead in the arms race and advance with new military-grade technologies. They would gain a larger platform and increase the number of targeted attacks and terrorist missions, which is a disastrous prospect. With regulations, at least, countries can arm themselves equally against these forms of modern warfare.

Overall, lethal autonomous weapons will be present in our society sooner than most people might hope. Many governments and scientists favor a ban on LAWs because they believe autonomous weapons undermine human dignity and cannot be trusted. However, we do not consider these arguments strong enough. Compared to human soldiers, autonomous weapons comply with the standards, have increased accuracy, and are replaceable, making warfare more precise and humane with fewer casualties and less damage. Although proponents of a ban claim a prohibition would be possible, we believe states are more likely to agree to regulations than to a total prohibition of all LAWs. In addition, we discussed the role of terrorist organisations: we suspect that a ban on all LAWs would allow them to lead the arms race. We therefore call for action to regulate lethal autonomous weapons. Preventing the development of LAWs is unlikely to succeed, and the complexity of these systems will certainly lead to challenges, so official regulations will have to be drafted and authorized as soon as possible.
