The discussion and media coverage of Autonomous Weapon Systems (AWS) have been increasing. But what are AWS, and why have they been nicknamed ‘killer robots’? This article explains what AWS are, the difference between autonomous and automated systems, the legal and ethical considerations, and why the term killer robots is inappropriate.
It is important to establish what AWS are, though the definition is still under debate. Autonomous weapon systems, as the Red Cross understands them, are any weapons that select and apply force to targets without human intervention. This is a very loose definition: would a land mine already count as an AWS? In practice, AWS are often nothing more than automated weapons systems, yet the media inaccurately refers to them as ‘killer robots’, and the public debate is heavily influenced by this inaccurate depiction. After investigating the public debate, the legal and ethical implications are considered to argue that society should not fear current AWS more than previously developed military weapons.
In the media, AWS are depicted as killer robots. The media need to sell their products, and the fantasy of killing machines gone rogue makes a good story on its own. The idea of science fiction movies becoming reality gets people interested. Major newspapers run headlines like “Are killer robots the future of war?” (NY Times) and “We are fighting killer robots the wrong way” (Wall Street Journal). All these headlines and the corresponding articles report the dangers of the so-called killer robot. It is widely known that fear sells newspapers. The question is how this affects the public debate: are citizens against AWS, or do they see their potential?
The fear of AWS has been investigated in the Netherlands through two studies on moral principles and values, which show that both military personnel and civilians are more concerned about the deployment of AWS than about human-operated drones, and believe AWS have less respect for human life. A global Ipsos survey for the Human Rights Watch Campaign to Stop Killer Robots investigated the public stance on AWS. It finds that 61% of adults across 28 countries oppose the use of lethal AWS, while 21% support it, with the Netherlands closely following the global average. In contrast, a 2016 study in the US found that a majority of participants favored using AWS, with the context of development influencing public support or opposition to autonomous weapons.
We argue that the public debate is heavily influenced by how the media depict AWS as killer robots. The way institutions depict AWS heavily influences people’s opinions: killer robots are scary; drones, not so much. The phrasing used by the Ipsos institute heavily influences the results of its survey. Firstly, Ipsos uses the term “lethal autonomous weapons systems” in its questioning. This is misleading: although Ipsos is not the only institution using this term, we argue that weapons are inherently lethal, and the word lethal is only added to make them sound more dangerous. To get unbiased results, the term AWS would be much more neutral. Secondly, Ipsos explicitly states in its question that AWS are different from “current day drones where humans select and attack targets”. But AWS are just as current-day as these drones. Ipsos frames AWS as dystopian, futuristic killer machines; no wonder people oppose them.
The study performed in the US depicts a far more realistic scenario: its question incorporates the fact that AWS are already in use. Suddenly, a majority supports AWS development. Although there is a lot of fear of autonomous weapons amongst the public, we argue that when realistic examples are presented, the public’s stance shifts toward supporting AWS. Once the public grows accustomed to the reality of AWS and sees that they are not much different from existing weapons systems, AWS will not be feared any more than other military weapons.
International law and politics
It is important to understand how current international and military law applies to the use of AWS. Dr. Magda Pacholska is a researcher and lawyer in this particular domain. She proposes that the current laws are sufficient to cover the AWS in use by militaries today. Moreover, she also proposes that autonomous is not the correct term for these systems, since they are not fully autonomous: in the military domain, they are usually implemented as target identification and guided weapons/defense systems. Three main rules from international humanitarian law need to be checked before a weapon system may be implemented:
- Treaty: The system does not violate any treaty between the states involved.
- Superfluous injury: The system may not cause unnecessary harm to enemy combatants. For example, chemical weapons would not pass this rule.
- Inherently indiscriminate: The system must not employ methods of warfare that cannot be directed at a specific military target and thus carry a significant likelihood of killing civilians in a disproportionate manner.
As these laws of war show, current AWS do not violate them and are thus compliant with the rules of war. Pacholska’s main point is that the development of AWS is not the issue; enforcing these laws is. When the laws are applied appropriately, unsanctioned weapon systems would never be used in war. The article “Banning Lethal Autonomous Weapons: an Education” by Russell shows a completely different point of view. Russell proposes that all autonomous weapon systems should be banned. He states that fully autonomous systems will become possible to develop and compares them to a chess engine, which chooses its moves through complex algorithms. If a weapon system worked similarly, we would have a problem, and it should be banned through new legislation. However, if such an autonomous system had a chance of killing civilians based on its calculations, it would never pass rule 3, inherently indiscriminate. The problem does not lie in the current legislation for these kinds of systems, but mainly in enforcing the current rules. The same applies to his second argument, where he states that the development of AWS would change current warfare: scenarios would become less clear-cut because of the autonomy, and the chance of killing civilian targets would rise. But this again goes against rule 3 of the discussed laws, and enforcing these three laws is sufficient to prevent the deployment of non-authorized AWS.
Ethical considerations
The Journal of Applied Philosophy and various other journals have published papers debating whether AWS are ethically justified. The article “Killer Robots” by Robert Sparrow formulates some ethical concerns about using AWS. It considers a thought experiment asking who should be held responsible when an autonomous weapons system is involved in a war crime. Three entities could possibly be responsible: the programmer of the system, the commanding officer, or the machine itself. Sparrow argues that none of these is satisfactory. Yet responsibility is a necessity for fighting a just war: someone must be justly held responsible for deaths in war. Since this condition cannot be met, the deployment of autonomous weapons is unethical.
Sparrow’s article was published in 2007. Once again we see science fiction overshadowing a realistic debate. There have certainly been many developments in the field of artificial intelligence, but the Terminator is just as much science fiction now as it was when the film was released in 1984. No notable step toward this apocalyptic fantasy has been made. The pitfall of focusing on this scenario is that it causes tunnel vision: due to the hyper-focus on the imaginary, people lose sight of the reality of AWS.
Michael Robillard refutes Sparrow’s stance. He argues that Sparrow’s arguments are flawed and that autonomous weapons are not morally problematic. According to Robillard, Sparrow wrongfully presupposes that AWS make genuine decisions while not being morally responsible for those very same decisions. Robillard argues that an AWS is either a socially constructed institution with physical features or a genuine agent. If the former is true, then AWS should not be treated as autonomous entities. If the latter is true, then AWS are in fact responsible and have rights and interests. Robillard argues that it is impossible for neither of the two to be true.
When we apply Robillard’s arguments, we conclude that AWS, at this moment, should not be treated as autonomous entities. Although AWS are paradoxically called autonomous, the weapons systems the term refers to are not autonomous; they are at most more autonomous than non-autonomous weapons systems. Perhaps Intelligent Weapons Systems would be a better choice of words, although the exact definition of “intelligent” is also open for debate. Going with Advanced Weapons Systems is probably the best solution: that way the abbreviation does not need to change.
For the coming years, there is no need to fear AWS. The public can see their added value, international humanitarian law covers them, and they can be ethically justified.