I’d like to live. Thank you.
In a time when war hardly surprises us anymore, this paper will be controversial. We would therefore like to state that we neither oppose nor support any of the countries or wars we describe; we only discuss the machinery, the AI inside these systems, and the possibilities they provide. With several countries building autonomous systems, many potential problems arise. An autonomous weapon can be described as a weapon or weapon system that is able to search for targets and attack independently, on the basis of pre-programmed methods and functions. Many papers and conferences are devoted to the question of whether such weapons should be deployed. There are two types of autonomous systems, defence systems and attack systems, both offering advantages for the owner and disadvantages for the enemy. We argue that deploying them is unwise, and even morally reprehensible. Beyond our view that systems should not be able to decide whose life should end and whose should proceed, their use will introduce newer problems still: political solutions will become less attractive, and the temptation to use the machines could grow too great. There should always, always, be a human in the eventual decision making, and we will elaborate on that further in this paper.
Ever since it has been possible to record it, war has been an occurrence that repeats itself over and over, between almost every people, cause, or religion one could think of. Where spears turned into guns, guns have turned into automated machinery. In the past, it was harder to even the playing field; think of Native Americans fighting the Europeans who arrived in what is now the United States of America. It could take years before new weapons were invented: for hundreds of years, spears and bow and arrow were the weapons to use, and it was not until the 1300s that people began to use firearms. Ever since computational technology was invented and information began to spread at an unprecedented rate, evening the playing field has become easier. Weapons could be flown over, blueprints could be sent instantly, and raw materials could be turned into anything the human mind could conceive. The spread of information has grown exponentially, leaving almost no reason for a country to be under-armed other than simply lacking the money or means. With technology advancing every day, Artificial Intelligence is ever more heavily embedded in everyday objects. While these advancements improve our daily lives, many companies want to produce autonomous weapon systems to save their own side's lives, and possibly end others' if the moment is 'right'. There are currently projects working on autonomous weapons in world-leading powers such as Russia, the United States of America, China, and the European Union; it is safe to say it is a multi-billion dollar business. When people think of autonomous systems, they often think of killer robots and attack systems, but this is not always the case. There are plenty of autonomous systems that exist only to save and protect, instead of attack and kill. Think of the Iron Dome in Israel.
The Iron Dome is an Artificial Intelligence system, described as a 'dome' because it is able to intercept and dismantle rockets and other projectiles fired towards populated areas in Israel. If the system calculates that a missile will land in a rural area, it leaves it alone; if it sees that the projectile will hit a populated area, it launches an interceptor missile to blow it up in the air or redirect it towards rural ground. Several trucks carry the system around the country, constantly rerouting themselves to new positions and making it nearly impossible for the enemy to dismantle the system. This alone has saved Israeli cities over 1500 times. In contrast with an autonomous defence system, there are also more attack-focused systems. An example is the SGR-A1, the first unit with an integrated system for surveillance, tracking, firing, and voice recognition, able to lock on to a potential threat. The SGR-A1 keeps watch over the demilitarized zone (DMZ) between North and South Korea, recording data and ensuring full surveillance. The system assumes that every person in the DMZ is an enemy, then identifies allies through voice recognition or, in the case of an enemy, takes countermeasures with, for example, rubber bullets. It works as a human-in-the-loop system, meaning a human has to give permission before it engages a target. When no human is available to give permission, the system falls back on an automatic routine that commands the person in the DMZ to surrender, recognizing raised hands as a sign of surrender. These are just examples of what autonomous systems can do today. They can be a great way of protecting and serving a country: soldiers can be replaced with machines, sparing them mental and physical harm, such as the PTSD that can follow shooting people or bombing areas.
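To make the human-in-the-loop principle concrete, the SGR-A1's reported behaviour can be sketched as a simple decision procedure. This is purely our own illustrative simplification based on the public description above; the function and its rules are hypothetical, not the system's actual logic.

```python
# Illustrative sketch of a human-in-the-loop engagement flow, loosely
# modelled on public descriptions of the SGR-A1. All names and rules
# here are our own simplification, not the real (classified) logic.

def decide(ally_by_voice: bool,
           operator_available: bool,
           operator_approves: bool,
           hands_raised: bool) -> str:
    """Return the action taken for one person detected in the zone.

    Every person is assumed hostile until cleared."""
    if ally_by_voice:
        return "stand down"            # ally identified via voice recognition
    if operator_available:
        # Human in the loop: only a human may authorize engagement.
        return "engage (non-lethal)" if operator_approves else "hold fire"
    # No human available: fall back to the automatic surrender protocol.
    if hands_raised:
        return "accept surrender"      # raised hands read as surrender
    return "command surrender"         # order surrender, keep holding fire
```

The property worth noticing in this sketch is that no branch leads to engagement without explicit human approval; every path lacking an operator ends in a non-violent fallback.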
And if there is a bombing, there are no human casualties when it is just between autonomous weapons. The systems will take some damage, and some may even be unsalvageable, but no harm is done to humans. There is no bias, no fear, and no personal vendetta; some might even say it is the fairest fight one could have, based purely on a system with set rules. It sounds great, with every issue solved, right? Sadly, it has many downsides.
Many scientists, political leaders, and ordinary people see problems with the implementation of autonomous weapons. At an International Joint Conference on Artificial Intelligence, an open letter was released calling for a ban on autonomous weapons beyond meaningful human control. Backed by an impressive list of signatories, the letter notes that a military arms race might damage the reputation of Artificial Intelligence and thereby hurt its potential to benefit humanity: public opinion on assistance systems, for example, will take a hit if the same systems are used to kill soldiers. The peace organisation PAX likewise argues that defensive systems will not improve the balance of military power in the world. It notes that even a shield we can assume to be strictly defensive can be destabilizing if it allows larger countries to attack each other without the possibility of a retaliatory strike. The Iron Dome in Israel is a good example of such a situation. We consider it a legitimate use of Artificial Intelligence in the military, but PAX argues that it creates an uneven balance between Israel and Palestine, because the Palestinians have no such defence system and cannot protect themselves with comparable efficiency. Another concern is accountability for actions in war. This is a more ethical issue, as international humanitarian law states that someone must be held responsible for civilian deaths. Autonomous weapons that leave no way of identifying the person responsible for casualties should not be employed in war. The problem of responsibility for an AI-equipped machine is already a major issue in the design of self-driving vehicles, and we expect it to become even bigger when human lives are deliberately at stake. The added possibility of committing a war crime is incentive enough to halt the production of these systems.
All these developments could contribute to a lower threshold for the use of force. According to PAX, the reduction in military casualties is an upside with negative effects: the lower risks involved reduce the incentive to find political solutions to end conflicts, and will thus lead to more autonomous weaponry and brute force. Politics has always been the more civil way of settling differences, but if the consequences of force are lowered, the ease of using these systems will contribute to accidental and rapid escalation of conflict. PAX claims that, once deployed, these systems will react faster to each other and thereby escalate situations earlier. This risk is increased by the unpredictability of the systems: PAX states that self-learning systems could develop unpredicted reasoning, which would then lead to irregular and unforeseen actions. The operational officers needed to supervise such a system, and the way self-learning systems work, also raise questions about supervision and control. Will the operational officer be replaced, or will the role remain? Will the system eventually become smart enough to function on its own and make decisions by itself? This is a fear many people share, and so do we: scientists could decide a system is fully functional on its own and leave that technology available. On top of this, systems are also 'just' systems: they can be hacked. The person who wrote the system's rules could embed personal choices that eventually lead to many civilian deaths. It is not an air-tight, fail-safe system, and it is open to tweaking by some smart minds working on it. The creation of autonomous weapons is downright dangerous: the technology could increase proliferation risks and enable dictators or terrorists to acquire autonomous weapons, since the technology is relatively cheap and simple to copy.
Like long-distance missiles, high-powered explosives, and capable marksman rifles, every new type of autonomous weapon will eventually become cheaper; it is a simple game of supply and demand. It will eventually fall into the hands of people with bad intentions who nonetheless truly believe they serve a good cause. This is a consequence of the constantly developing arms race around the world.
While this topic is heavily debated, with both upsides and downsides, we do not think an autonomous system is qualified to make decisions that could affect human life. As said before, autonomous systems hold many flaws and are far from a final solution. Even if these systems are eventually built, they can be hacked, could override decisions, and will not end wars; they could even start conflicts where a simple political debate would have sufficed. Even an autonomous system that exists only to defend could provoke an initial attack. The only way to prevent such issues is not to develop the systems at all, a task that might be almost impossible to achieve. The only way to ensure it is to make people realize the dangers of these systems. The whole reason people want them is the benefits they bring in war; if we can make people aware of the downsides, where a system might not only turn on the enemy but also bring massive problems on its owners, it is possible to stop fully autonomous weapons. It should be up to the Artificial Intelligence community and its scientists to ensure that the people who should be educated on this, will be. Human officers and soldiers should always be kept in the loop, no matter what. They are currently far more advanced in skill, feel morally obligated, and are harder to hack or manipulate, and this way control stays with the people. Keeping human officers brings issues of its own, of course, mental and physical, as discussed earlier, but it is currently the wiser choice. Both fully autonomous systems and those with humans in the loop have positives and negatives, but we need to examine them closely to make the right decision, and it is our conclusion that a human needs to stay in the loop. No matter what.