Why autonomous weapons should be banned

Artificial Intelligence (AI) already plays a big role in our society: it is used for proactive healthcare management, disease mapping, automated financial investing, virtual travel booking agents and much more. But while many people are excited about what an AI-driven future may bring, there is also a serious downside, and it comes in the form of autonomous weapons. Autonomous weapons, also known as “killer robots”, are weapons that can operate without human control. They can be very useful in wartime, but they also pose a huge risk to our society as we know it. This article focuses on the reasons why autonomous weapons should be banned.

Autonomous weapons, then, are weapons that do not require human control to operate. In practice, there are three broad types of control humans can exercise:

  • Semi-autonomous operation, where the machine performs a task and then stops and waits for approval from a human operator before continuing. This is often referred to as “human in the loop.”
  • Supervised autonomous operation, where the machine, once activated, performs a task under human supervision and continues unless the operator intervenes to halt it. This is often referred to as “human on the loop.”
  • Fully autonomous operation, where the machine, once activated, performs a task without the human operator being able to supervise it or intervene in the event of system failure. This is often referred to as “human out of the loop.”
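
To make the distinction concrete, here is a minimal Python sketch of the three control loops. It is purely illustrative: every name in it (Operator, detect_target, engage) is a hypothetical stand-in, not any real weapon-control API.

```python
class Operator:
    """Hypothetical human operator interface, for illustration only."""

    def approves(self, target) -> bool:
        # In reality this blocks until a human explicitly says yes or no.
        return input(f"Engage {target}? [y/N] ").strip().lower() == "y"

    def vetoes(self, target, timeout_s: float) -> bool:
        # In reality the human has timeout_s seconds to intervene.
        return False


def detect_target(sensor_feed: dict):
    """Hypothetical placeholder: returns a candidate target or None."""
    return sensor_feed.get("candidate")


def engage(target) -> None:
    print(f"engaging {target}")  # stand-in for actually performing the task


def human_in_the_loop(sensor_feed: dict, op: Operator) -> None:
    # Semi-autonomous: stop and wait for explicit human approval.
    target = detect_target(sensor_feed)
    if target is not None and op.approves(target):
        engage(target)


def human_on_the_loop(sensor_feed: dict, op: Operator) -> None:
    # Supervised autonomous: act unless the human intervenes in time.
    target = detect_target(sensor_feed)
    if target is not None and not op.vetoes(target, timeout_s=5.0):
        engage(target)


def human_out_of_the_loop(sensor_feed: dict) -> None:
    # Fully autonomous: once activated, no human can supervise or intervene.
    target = detect_target(sensor_feed)
    if target is not None:
        engage(target)
```

The only difference between the three functions is where, and whether, a human decision sits in the loop; in the fully autonomous case there is simply no point at which a person can step in.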

Fully autonomous operation in particular poses a huge risk: if such a weapon goes rogue, it is very hard to stop it before it causes a disaster. Autonomous weapons pose, for example, a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces.

There are various concerns regarding autonomous weapons: ethical, legal and security concerns. The ethical concern is whether a machine should be allowed to decide over life and death. The legal concern is who should be held accountable when a machine performs an unlawful act. The security concern is the possibility that a machine goes rogue or, much worse, gets hacked.

Luckily there is the Convention on Certain Conventional Weapons (CCW), a United Nations agreement dating from 1980 that prohibits or restricts several conventional weapons. However, since developments in AI succeed one another very rapidly, states need to make sure they do not lag behind events. Since governments began international discussions on lethal autonomous weapon systems (LAWS) in 2014, the field of AI has seen tremendous advances. It is therefore vital that not only lawmakers and military professionals advise these states: AI scientists should also play a big role here.
Moreover, it can be expected that global use of autonomous weapons will violate fundamental principles of International Humanitarian Law (IHL). The core principles of IHL are: the distinction between civilians and combatants, the prohibition on attacking those hors de combat (i.e. those no longer taking part in hostilities, such as the wounded or those who have surrendered), the prohibition on inflicting unnecessary suffering, the principle of necessity and the principle of proportionality. Of course, autonomous weapons could enable more precise and humane warfare, with fewer civilian casualties and less damage, if used well. It is also clear that banning these weapons would require a huge global effort. However, given the risks they pose, the whole world should indeed unite and put an end to them.

If most people would argue that no one should be able to decide over life and death, why should we give a machine the power to make this decision for us?

Arguments for banning autonomous weapons

There are several arguments for why autonomous weapons should be banned. First, there is the ethical aspect. If most people would argue that no one should be able to decide over life and death, why should we give a machine the power to make this decision for us? If we create machines that are capable of abiding by contemporary military doctrine and trust that they are better warfighters, planners or strategists, then perhaps we should have no ethical concerns. But what happens if these machines develop their own will and no longer abide by human morals? That would pose a great risk to the whole of humanity, because such machines could turn against us and use their weapons on people, resulting in many civilian casualties. Some might wonder what could actually go wrong with autonomous weapons. In the most extreme case, an autonomous weapon could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. And if the same failure is repeated in other autonomous weapons of the same type, a military could face the disturbing prospect of large numbers of weapons firing simultaneously, with potentially catastrophic consequences.

Another ethical issue is how such a weapon should be trained to distinguish between enemies and civilians. Testing a machine that makes its own decisions about the world around it would have to be done in real time. Beyond that, how do you train a machine to read subtle human behaviour, or to tell insurgents from hunters? How should a machine flying overhead distinguish between a combatant and someone who is merely out hunting rabbits? Letting autonomous weapons make their own decisions greatly increases the chance of a disastrous outcome, especially if they have been trained on poor data that lacks the information they need.
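
How much the quality of training data matters can be shown even on a toy problem. The following sketch, a deliberately simple scikit-learn example that has nothing to do with any real targeting system, trains the same classifier twice, once on clean labels and once with 30% of the labels flipped, and evaluates both on the same held-out test set:

```python
# Toy illustration of how mislabelled training data degrades a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate a poorly curated dataset by flipping 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_noisy = np.where(flip, 1 - y_train, y_train)

for name, labels in [("clean labels", y_train), ("30% mislabelled", y_noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.2f}")
```

Even this simple setup typically shows a measurable drop in accuracy, and real-world data problems such as class imbalance, ambiguous labels and distribution shift are far harsher than random label flips.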

Second, there is the accountability aspect. Because autonomous weapons make decisions on their own, it becomes difficult to hold anyone accountable. In situations where humans make the decisions, it is usually clear who is accountable: if a soldier pulls the trigger on the command of his superior, it is obvious that the superior who gave the order is responsible for the bullets fired. The same reasoning does not apply to autonomous weapons. Say a person is sitting in the driver’s seat of a self-driving car, and the car fails to recognise a pedestrian, resulting in a terrible accident. The question of who is to blame has no clear answer. It could be the programmer, who might have made a mistake. What about the manufacturer? And let’s not forget the person seated behind the wheel. It is not clear who to blame, is it? Now imagine the same situation with autonomous weapons. Who is responsible when something goes wrong: the programmer, the manufacturer or the organisation that deployed the weapon?

An example of how such warfare can go horribly wrong is the Dutch airstrike on a bomb factory in Hawija, Iraq. Crucial information about the factory was missing, and the strike caused many civilian casualties. Afterwards there was much discussion about who should be held accountable for this terrible mistake. The example shows that a very thorough risk assessment is necessary before such weapons are deployed: it was because the Dutch government failed to make a proper assessment that so many civilians died. Examples like this show that humans should not be trusted with autonomous weapons, and that a ban should therefore be in place.

It should be noted that there is an important asymmetry between humans and machines in the rules of war: humans are legal agents and machines are not. Humans are bound by the law of war and must comply with it. Yet there is still no clear agreement on the degree of human involvement in these decisions that would make the resulting actions legally attributable. From an ethical point of view, this also makes it difficult to pinpoint who should take responsibility if an autonomous weapon goes rogue.

Finally, there are concerns regarding the security of autonomous weapons. Autonomous weapons act on the input they receive from their sensors: based on that sensor data, they make decisions. In essence they are computers making decisions on the fly from sensor observations, which makes them prone to several security and malfunction issues. The first issue that comes to mind is what happens if the weapon misinterprets the data from its sensors. This can occur in any machine that uses sensors to make predictions; no machine is right 100% of the time. So who is to say an autonomous weapon will not hurt innocent people? Another issue is what happens when an autonomous weapon is hacked. This could happen in two ways: through conventional hacking techniques, by which the enemy takes over control of the weapon, or through environment manipulation, by which the weapon’s sensors are fed deliberately misleading inputs. The possibility of this happening should not be taken lightly, and the consequences would be devastating.
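
As a minimal sketch of why this matters, consider the kind of confidence-threshold decision rule such a system might use. All names and the threshold value here are hypothetical, chosen purely for illustration; the point is that no threshold can tell a genuine detection apart from a misread or spoofed one:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "combatant" or "civilian"
    confidence: float  # the classifier's estimated probability, in [0, 1]

# Assumed policy parameter: raise it and real targets are missed,
# lower it and false positives slip through. It can never be perfect.
ENGAGE_THRESHOLD = 0.95

def decide(d: Detection) -> str:
    """Engage only on high-confidence 'combatant' detections."""
    if d.label == "combatant" and d.confidence >= ENGAGE_THRESHOLD:
        return "engage"
    return "hold"

# A misclassified or deliberately spoofed input is indistinguishable
# from a genuine one at this point in the pipeline:
print(decide(Detection("combatant", 0.97)))  # "engage", even if the label is wrong
print(decide(Detection("civilian", 0.99)))   # "hold"
```

The decision rule only ever sees the label and the confidence, never the truth on the ground; if the sensors are fooled upstream, whether by accident or by an adversary manipulating the environment, the weapon will act on bad data with full “confidence”.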

The outbreak of a new type of war

There are also benefits to autonomous weapons. For one, they reduce the number of human warfighters needed, which in turn reduces the number of casualties. Autonomous weapons also serve as a force multiplier: they make a country’s army much stronger and more capable than it is today. That can be seen as a good thing, since a stronger and more capable army can protect us better. The downside is that it also makes it easier for countries to go to war with each other, because the upfront cost of war seems much lower when autonomous weapons do the fighting. Two countries in conflict might decide to go to war believing it will mainly be autonomous weapons fighting each other, while the real consequences could be among the most devastating ever seen on this earth, with an immense number of casualties. Autonomous weapons are often called the third revolution in warfare, after gunpowder and nuclear weapons. We all know how catastrophic nuclear weapons were in Hiroshima and Nagasaki, with enormous casualties and many years of rebuilding. One can only imagine what a war fought with autonomous weapons would look like. So, just as with nuclear weapons, the development of autonomous weapons should not be allowed.

Conclusion

Because of the complex legal, moral, ethical and other issues raised by AI systems, policymakers are best served by a cross-disciplinary dialogue that includes scientists, engineers, military professionals, lawyers, ethicists, academics, members of civil society and other voices. Such dialogues can be held within the framework of the Convention on Certain Conventional Weapons (CCW). It is vital that policymakers acknowledge the risks that autonomous weapons pose to humanity. Laws should be put in place to ban these weapons, and a treaty should be drawn up to hold all nations accountable to that ban. Of course this will not happen overnight; conflicts between countries may first need to be resolved before they will consider adhering to such a treaty. It will require huge efforts from every nation on this planet, but we believe that in the long run banning autonomous weapons will be highly beneficial for the whole of humanity. And even if a full ban is a stretch too far for now, it would already be a great step if countries came together to start regulating the autonomous weapons that exist today. From there, we can slowly work toward a complete ban.
