KILLER ROBOTS: The Unwanted Everlasting Soldiers of the Future

Artificial Intelligence (AI) is constantly changing our lives, and this will continue in the future. AI influences human life in many areas of our society: business, education, health care and even politics. Companies use AI to automate various tasks, while consumers use it to make their daily routines easier. Although we experience great benefits from AI, there is also fear among people about its negative effects, especially when it comes to the military and warfare.

Variants of autonomous weapons have existed for years. Among the oldest precursors of autonomous weapons are land and sea mines, which explode on contact or approach. The first military robot arrived at the end of the 20th century, when the Foster-Miller TALON was deployed in Bosnia. This small, remotely operated device has been used for various tasks, such as explosive ordnance disposal, reconnaissance, communication and rescue operations. Later, in 2016, police in Dallas used a similar robot to stop a gunman who had shot police officers. However, these robots are not yet considered fully autonomous weapons, because in the end they are still controlled remotely by humans.

In recent years, AI has driven the further development of Lethal Autonomous Weapon Systems (LAWS), also known as 'killer robots', which no longer involve human control. AI is leading us towards a new algorithmic battlefield with no boundaries or borders, in which humans may or may not be involved, and which will be impossible to understand or control once the algorithms evolve further through training. But should we want this? Should we go to war when humans no longer regulate it? This opinion article discusses whether lethal autonomous weapon systems should be deployed in warfare. A positive contribution of deploying LAWS would be a lower risk of soldiers being injured or killed, but where will this end? And what will be the threshold for going to war at all? As several positions on this subject can be taken, both the positive and the negative aspects of the use of LAWS during warfare will be discussed, but our position is clear: LAWS should not be deployed in warfare. Why? Because LAWS will make going to war more likely, and we have to take into account the severe consequences of algorithmic errors, since a small mistake is easily made. And yes, LAWS may indeed be more effective, more efficient and better for the environment, but if they evolve further we will not be able to control them, and this new standard for warfare could cause even more innocent deaths than before.

Give or take lives?

The military keeps improving its resources, and must keep doing so. The main reason why having the most advanced materiel matters is the competition between countries. The top competitors are the United States, Russia, China, South Korea and the European Union. They keep developing and increasing the size and quality of their military resources to gain not only military but also political superiority over one another, the so-called "arms race". In other words, building autonomous weapons is a matter of power. Nevertheless, autonomous weapons are also being developed because they offer many advantages on the battlefield. One important advantage of LAWS is that fewer humans are involved: fewer war fighters are needed to achieve the same goal, resulting in fewer deaths. On the other hand, how are deadly weapons that can overpower defenses without any risk of injury going to help save the lives of civilians? Will they stop fighting when an innocent civilian happens to be in the wrong place at the wrong time?

Cartoon by Simon Kneebone

Another advantage is that fewer humans will face the bloodbath of a fight, which could reduce the number of people dealing with post-traumatic stress disorder caused by war. Medical care and disability benefits are two of the most significant long-term costs of war: the wars in Iraq and Afghanistan are predicted to cost the United States between 350 and 700 billion dollars in long-term medical and disability care alone. Using LAWS instead of human war fighters could therefore save the military a lot of money, making LAWS more efficient in the long run than human war fighters.

"LAWS are superior to human soldiers because they won't get tired, they won't get angry, they won't seek revenge and they won't rape."

On the other hand, there must be equality in the use of and access to LAWS before we can legalize them; otherwise those with access to LAWS will have an unfair military advantage. When there is a war between two countries of which one has access to LAWS and the other does not, there will be serious consequences: the country without LAWS will have to deploy manpower, and the battle will soon be over. LAWS have many capabilities; they can be much more powerful than human beings and may even become untouchable. This could still lead to a bloodbath, but one with victims only in the country without access to LAWS. Should we bring this on ourselves? How are we going to decide which countries with access to LAWS are allowed to use them when going to war, and when those countries should use manpower instead? As may be clear by now, legalizing LAWS would indeed reduce human involvement and thus human deaths, but it also raises many difficult questions.

Science fiction warfare

Air defense systems, loitering munitions (self-flying "suicide" drones), self-driving tanks and sentry guns are all military systems that can track targets autonomously. This is no longer science fiction; it is real, and it is becoming the future of warfare. Air defense systems fire when a target is detected, loitering munitions stay in the sky and attack once a target is located, and sentry guns automatically fire at targets detected by their sensors. Autonomous drones are "programmed with algorithms for countless human-defined courses of action to meet emerging challenges". These unmanned robotic systems can reach places that are inaccessible to humans. Moreover, they can find and destroy a target more quickly, which increases fighting power within shorter time frames.

As far as we know, autonomous drones and most other autonomous systems have not yet been approved for and used on the battlefield, but they are being tested and developed. It is incredibly easy to find video clips of these military tools being used for attacks. The European Parliament predicted that autonomous machines will be able to learn from both the successful and the unsuccessful experiences recorded by other autonomous machines. Moreover, they could learn or adapt their functioning in response to the changing environment in which they are deployed. A downside of this learning capability is the rapid expansion of LAWS' uses in warfare, up to a point at which humans can no longer take back control. When it comes to these risks, we speak of damage potential: the amount of damage an autonomous system can do before a human operator takes back control to avoid risky situations. The damage potential rises when humans lose the ability to take back control. Besides learning, autonomous machines will be able to communicate with one another wirelessly or through a central database, making it possible to coordinate for the most effective mission results. This could shorten wars and spare much of the environment from destruction, meaning that LAWS might even be more environmentally friendly.
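To make the notion of damage potential more concrete, here is a minimal toy sketch in Python (entirely our own illustration; the function name, rate and delay values are invented and not taken from any real weapon system). It models damage potential as the damage a system accumulates during the delay before a human operator can intervene, so the potential becomes unbounded when no intervention is possible at all.

```python
from typing import Optional

def damage_potential(damage_rate_per_second: float,
                     intervention_delay_seconds: Optional[float]) -> float:
    """Toy model: damage accumulated before a human takes back control.

    damage_rate_per_second: assumed average damage the system causes per second.
    intervention_delay_seconds: time until an operator can intervene;
        None models a system over which humans have no control at all.
    """
    if intervention_delay_seconds is None:
        return float("inf")  # no human control: unbounded damage potential
    return damage_rate_per_second * intervention_delay_seconds

# A supervised system versus a fully autonomous one:
print(damage_potential(2.0, 30.0))  # 60.0 -- operator intervenes after 30 s
print(damage_potential(2.0, None))  # inf  -- no one can take back control
```

The point of the sketch is simply that the second case has no upper bound: once the human ability to intervene disappears, so does any limit on the damage.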

Cartoon by Simon Kneebone

The question is: are LAWS more effective than human war fighters? As explained above, autonomous weapons consist of algorithms programmed by humans. These algorithms are programmed in such a way that, during warfare, no human input is needed. However, what happens when there is a mistake in the algorithm, or an error pops up during the fight? All programmers, whether expert or beginner, know that an error in a program can happen at any moment. LAWS can suddenly change from being highly efficient, effective and environmentally friendly to extremely dangerous. And who would then be responsible? Two years ago in the Netherlands something extremely traumatic happened: four children sitting in an electric cargo bike ("bakfiets") died when it collided with a train because the driver was unable to brake. It has never been proven whether the accident was caused by a technical fault, by an error in the system or by an act of the driver, and in the end no one was prosecuted. The event caused a lot of discussion and is an important example of the kind of consequences that are possible with LAWS as well. On top of that, algorithms have no emotions and no sense of the gravity of killing a human being.
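As a purely hypothetical illustration of how small such a mistake can be (the names, threshold and probabilities below are invented and bear no relation to any real system), consider a toy engagement rule in Python: flipping a single comparison character turns a rule meant to hold fire on likely civilians into one that authorizes firing at them.

```python
CONFIDENCE_THRESHOLD = 0.95  # assumed certainty required before engaging

def may_engage(combatant_probability: float) -> bool:
    """Intended rule: engage only when the system is very sure."""
    return combatant_probability >= CONFIDENCE_THRESHOLD

def may_engage_buggy(combatant_probability: float) -> bool:
    """The same rule with the comparison accidentally inverted."""
    return combatant_probability <= CONFIDENCE_THRESHOLD  # one flipped character

# A likely civilian: only a 10% estimated chance of being a combatant.
print(may_engage(0.10))        # False -- correct: hold fire
print(may_engage_buggy(0.10))  # True  -- the bug authorizes an attack
```

A human reviewer might catch such a bug in testing, but a deployed autonomous system would simply execute it, at machine speed, until someone noticed.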

Moreover, what if LAWS are used by "the wrong people", people with bad intentions? Should we take that risk? No, we shouldn't! If, for whatever reason, LAWS fall into the hands of a leader who also knows how to program them, we are screwed. LAWS will never "abandon" their leader or programmer, since they will never be able to realize that their actions are unjust. Before legalizing LAWS, we should definitely take this into account. A legal framework covering several aspects of LAWS, such as production, stockpiling and transfer, might therefore be necessary. If possible, we should make sure that LAWS can be deactivated when they fall into the wrong hands.

Cartoon by Simon Kneebone

The loss of human dignity

A lot of discussion is going on about whether LAWS would need consciousness in order to kill, and it has been argued that they are against human dignity. An important reason why LAWS are against human dignity is that they are unable to understand or respect the value of life. They have no knowledge of the laws of war, no capacity for reflection, no sense of justice and no morality. It has even been said that they reduce equality, because they fight on only one side of a conflict. They can reduce quality of life, limit freedom, and increase suffering and humiliation, thereby causing extreme psychological stress. However, don't these definitions make other machines that cause extreme suffering and humiliation contrary to human dignity as well? Aren't war and killing in general against human dignity? The ambiguity of the definition of human dignity makes it a hard topic in this discussion. A clear worldwide definition has not yet been agreed upon and probably never will be, because the meaning of human dignity differs between cultures, contexts, historical eras and philosophical positions. Therefore, the claim that LAWS are against human dignity cannot by itself be a sufficient reason for not allowing them.

However, the complete absence of human involvement in LAWS can be seen as crossing a line, because these systems cannot understand or value the human lives they take, in contrast to manually operated machines and (current) warfare, which are impossible without human involvement. Christof Heyns captured human dignity in "the notion of meaningful human control", meaning that human dignity cannot be preserved without direct or indirect human control. Still, the question remains whether we want machines to have consciousness at all. Some people believe it is unlikely that machines will ever possess consciousness or morality, and to be honest, we do not see the purpose of achieving this; it would only create more complicated discussions. Besides the questions of human dignity and machine consciousness, the United Nations discussed another major topic in the debate on LAWS:

"LAWS will lower the threshold of going to war."

This is one of the most important political concerns about the development of LAWS. During the discussion at the United Nations, one of the main reasons given for not deploying LAWS was the increased likelihood of going to war. Politicians could start a conflict, or interfere in one, without putting their own soldiers in harm's way. There is also a lack of accountability, since a machine cannot be held responsible for its actions. Reduced human involvement might moreover decrease sympathy for other humans. In the long run, this lowered threshold will increase the number of conflicts, causing more human deaths and more environmental destruction.

Cartoon by Simon Kneebone

Stop it! 

After reading this article, you have at least a bit more knowledge about our future warfare. It is a reality we can no longer ignore. If we do nothing, within several years there will be no human involvement in warfare anymore. Coming back to the question: should lethal autonomous weapon systems be deployed in warfare? Our opinion is firm and not about to change: countries should definitely not use LAWS in warfare. The militaries of the top competitor countries have already taken big steps in the development of LAWS, each with their own good reasons. LAWS are more effective, more efficient and more environmentally friendly. It all seems too good to be true, and it is. We should not forget that LAWS consist of algorithms programmed by humans, and humans make mistakes. Conscious or not, we should always be aware of the extreme consequences of a small error in an algorithm. These highly intelligent killing systems can become our worst enemy, causing innocent deaths, traumatizing opponents and civilians, and destroying any environment they can reach. On top of that, LAWS cannot in any case be held accountable for their actions, which, together with fewer injured soldiers, lowers the threshold of going to war. This lowered threshold is a serious danger to society, and isn't that precisely what we claim to want to prevent when waging war? We want to unite humanity and promote equality; continuing like this will only stimulate the division of society.

We are already in a roller coaster of LAWS developments, but we must keep them to a minimum. One could ask whether multiple types of LAWS do not already exist. Perhaps they do, but we don't know what we don't know; if they already exist in secret, we unfortunately cannot stop them. But as the development of LAWS becomes more public, we should do everything to prevent further development, and a ban should be included in the Geneva Conventions. If, and only if, the use of LAWS is ever legalized, we must create a framework to limit the (perhaps unknown) consequences.
