Data is beautiful. Through scientific methods and algorithms we extract knowledge and insights from structured and unstructured data. Did you know that 90% of the world’s data has been generated in the past two years? This explosion of information, known as “Big Data,” is completely transforming the world around us.
Back in 1940, when plutonium was first discovered (uranium had been known since the late eighteenth century), humans soon put these elements to great use, building the world’s first nuclear reactor in 1942. Scientists found that these elements could sustain nuclear fission in a reactor core, boiling water into steam that drove the blades of a turbine to generate electricity. Wonderful. But merely three years later, humans found another use for these same elements, this time to build something far more powerful. So powerful that we call them weapons of mass destruction.
The same can be said of data. While data can enhance our lives, it can also be used for malicious purposes. In this article, we will discuss how Autonomous Weapon Systems (AWS), also known as killer robots, could change the way future wars are fought, and why we believe that restrictions should be put on autonomous weapon systems.
“The world hasn’t had that many technologies that are both promising and dangerous — you know, we had nuclear energy and nuclear weapons. The place that I think this is most concerning is in weapon systems.”
Microsoft founder Bill Gates on Artificial Intelligence
The immorality of autonomous weapon systems
Without human control, autonomous weapon systems have the power and discretion to independently search for and engage targets based on programmed constraints and descriptions, such as facial recognition or pattern recognition, for example targeting military fatigues only. When the AWS encounters something its algorithm perceives to match the given target profile, it fires and eliminates the target. This is immoral because autonomous weapons lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilians from combatants, and abide by other core principles of the laws of war, and so they should never be empowered to decide who lives and who dies.
An AWS may not be able to distinguish a soldier in military fatigues from a civilian wearing similar patterns of urban camouflage clothing, and an AWS might violate the Geneva Conventions, which established the laws for humanitarian treatment in war. An AWS might ignore an enemy waving the white flag of surrender and choose to strike, where a human being would cease fire. An article on The Conversation states that autonomous weapons might not be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns, or between civilians fleeing a conflict site and insurgents making a tactical retreat. Moreover, the article notes that multiple studies have been performed on algorithmic errors, and that even the very best algorithms can generate internally consistent outcomes while in reality making a dreadful error.
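To make the notion of an “internally consistent but dreadfully wrong” outcome concrete, here is a minimal, purely hypothetical sketch. Nothing here reflects any real weapon system; the labels, confidence values, and threshold are invented for illustration. The point is that the decision rule can be satisfied perfectly while the underlying classification is wrong.

```python
# Hypothetical sketch: a target classifier can be internally consistent
# (confidence above threshold, rule correctly applied) while still being
# wrong about the ground truth. All numbers and labels are made up.

def engage_decision(confidence: float, predicted_label: str,
                    threshold: float = 0.9) -> bool:
    """Fire only if the model confidently matches the target profile."""
    return predicted_label == "combatant" and confidence >= threshold

# A child holding a toy gun, misclassified with high confidence:
ground_truth = "civilian"
predicted_label, confidence = "combatant", 0.97  # illustrative values

decision = engage_decision(confidence, predicted_label)

# The rule was applied exactly as programmed ("internally consistent")...
internally_consistent = decision == (predicted_label == "combatant"
                                     and confidence >= 0.9)
# ...yet the real-world outcome is a dreadful error.
dreadful_error = decision and ground_truth != "combatant"
```

The sketch shows why threshold tuning alone cannot fix the problem: the error lives in the gap between the model’s prediction and reality, not in the decision rule itself.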
“Machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”
United Nations Secretary General António Guterres about autonomous weapons
Because autonomous weapon systems are faster, cheaper, and deployable in larger numbers, they can destabilize security at both regional and global levels, introducing threats of rapid escalation and unpredictability. These killer robots could make armed conflicts spiral rapidly out of control, since robots lack the capacity to empathize or understand context. In the case of an unlawful act, autonomous weapons cannot be held accountable, making it problematic to ensure justice. The nature of war would change permanently, as sending robots instead of troops would lower the threshold for conflict and decrease the motive to find political solutions to end it. Currently, engaging in armed conflict is a costly affair in multiple ways: not only the financial costs of war, but also its political, social, and moral costs.
While people generally might not be concerned about the financial costs of an armed conflict, they definitely are concerned about the suffering their soldiers might experience in war, and therefore people are generally hesitant about engaging in armed conflict. Autonomous weapons may change this cost-benefit analysis, since they could function without putting any soldiers in harm’s way. According to an article on Vox, the costs of autonomous weapons are also lower than the costs of regular warfare. Combined, these two factors make armed conflict less costly, thereby lowering the threshold and creating an incentive to engage in armed conflict more often. If one has less to lose, engaging in conflict becomes more appealing.
What can we do about it?
According to Human Rights Watch, autonomous weapon systems are being developed and deployed by nations such as China, Russia, Turkey, the United Kingdom, Israel, South Korea, and the United States. While fully autonomous weapon systems are not yet widely deployed (more on this below), we should act now to draw a legal and moral line. To do this, we should adopt legally binding laws to regulate autonomous weapons and ensure civilian protection in compliance with international humanitarian law. Some of the things that can be done are:
- Unpredictable autonomous weapon systems should be ruled out, notably because of their indiscriminate effects. This would be achieved by banning autonomous weapon systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted and explained.
- Use of autonomous weapon systems to target human beings should be ruled out. This would be best achieved by a ban on autonomous weapon systems that are designed or used to apply force against humans.
- The design and use of autonomous weapon systems that would not be prohibited should be regulated, in order to protect non-military targets and uphold the rules of international humanitarian law. This can be done through:
- Putting limits on the type of targets: such as by restricting engagement strictly to objects that are military objectives.
- Putting limits on the scale of use: for example, by prohibiting swarms of autonomous weapon systems and limiting the duration of their use.
- Putting limits on situations: autonomous weapons should avoid civilian areas at all costs and should only be permitted in military areas.
- The human factor is always involved: fully autonomous weapons should not exist; there should always be a human supervising the autonomous weapon who can intervene when deemed necessary. A human should always give the final command before striking.
Since autonomous weapon technology and its use are developing rapidly, it is important that international limits be established before fully autonomous weapons are deployed in the field. Such laws should be written at national and international levels, in collaboration with representatives of national and international governments, armed forces, and the scientific, technical, and industrial communities.
A Dutch organization called PAX also wants states to create a pre-emptive ban on the development, production, and use of killer robots. PAX states that 30 countries, over 3,000 Artificial Intelligence experts, 116 CEOs of robotics companies, the European Parliament, 20 Nobel Peace Laureates, and over 160 religious leaders have already called for a ban on autonomous weapons. Moreover, PAX mentions that there is a growing agreement among states that meaningful human control over the use of force is necessary.
PAX mentions in their article that they take part in the meetings of the UN Convention on Certain Conventional Weapons (CCW), where they meet with diplomats, make statements, and speak at events to warn about the dangers of autonomous weapons. We believe this is a good thing: many institutions and governments underestimate the possible harms of autonomous weapons, and it is good that large organizations such as PAX, Human Rights Watch, Amnesty International, and many more are pressuring governments to ensure that a human factor is always involved.
Unfortunately, however, in December 2021 the United Nations failed to agree on a ban on autonomous weapons, as nations are pouring billions into autonomous weapons research: the United States alone budgeted $18 billion for autonomous weapons between 2016 and 2020, and it has already deployed autonomous sub-hunting ships and tank-seeking missiles. The US is not the only one, however. According to an article on Newsweek, Russia has robotic tanks and missiles that can automatically pick their targets, China has autonomous rocket launchers and submarines, and Turkey, Israel, and Iran are also pursuing AI weapons. According to The New York Times, Turkey has possibly already used an autonomous drone against militia fighters in Libya’s civil war, one that may have selected its target autonomously. The militia were hunted down and remotely engaged by a drone programmed to attack targets without a human controlling it. According to Livescience, the drones were programmed to attack if they lost connection to a human operator, though the report does not explicitly say that this happened. However, the manufacturer of the drone admits on its website that it has a proximity fuse with a customizable detonation range. This does sound like a fully autonomous weapon system to me. It is interesting to see how countries are rushing into the AI arms race while humanitarian organizations are racing to establish regulations on the development of autonomous weapons. It will certainly be an interesting future.
This is an extremely worrying development: lethal autonomous weapons are ethically unacceptable. The moral decision of life and death cannot be reduced to an algorithm.
PAX project leader Daan Kayser on banning killer robots
What about defensive autonomous weapon systems?
There might be specific situations where it is necessary to react faster than humans can, for instance in the case of missile defense systems. GatesNotes wrote that the U.S. Navy’s Aegis Combat System, an advanced system for tracking and guiding missiles at sea, has a mode of operation in which humans delegate all firing decisions to AI, reasoning that when an enemy fires 50 missiles toward you all at once, it is favorable to have a system that can react faster than a human could. We agree with this statement. Fully autonomous weapon systems designed to target only incoming threats should be acceptable. We believe they should be kept under human supervision and fire autonomously only in situations where the engagement window is too short for humans to respond in time.
An example of a good fully autonomous weapon system is Israel’s Iron Dome missile defense system. The Iron Dome can assess where an incoming missile will likely detonate and suggest countermeasures accordingly. If the incoming missile does not threaten particular military or civilian assets, the system may even suggest not to intercept it. Since these weapon systems act purely defensively and only target incoming missiles, we believe that deploying similar autonomous countermeasure systems should be acceptable.
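The defensive-only rule described above can be sketched as a simple decision function. This is a hypothetical illustration, not the logic of Aegis, Iron Dome, or any real system; the function name, parameters, and the assumed human reaction time are all inventions for the sake of the example.

```python
# Hypothetical sketch of a defensive-only engagement rule: autonomous fire
# is allowed only at incoming munitions that threaten a protected asset,
# and only when a human operator could not react in time. All names and
# numbers are illustrative assumptions, not any real system's logic.

HUMAN_REACTION_SECONDS = 5.0  # assumed minimum time a human needs to decide

def autonomous_intercept_allowed(is_incoming_threat: bool,
                                 threatens_protected_asset: bool,
                                 seconds_to_impact: float) -> bool:
    if not is_incoming_threat:
        return False  # never target anything but incoming munitions
    if not threatens_protected_asset:
        return False  # like Iron Dome: ignore missiles landing in open areas
    # Delegate to the machine only when a human could not react in time;
    # otherwise the decision stays with the human operator.
    return seconds_to_impact < HUMAN_REACTION_SECONDS
```

The design choice worth noting is that every branch defaults to not firing: the machine acts on its own only in the narrow case where all defensive conditions hold and human intervention is physically impossible.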
As interesting as the idea behind autonomous weapon systems may be, we believe the drawbacks far outweigh the advantages. Fully autonomous weapon systems are dangerously unpredictable, sometimes even by design, in order to stay one step ahead of the enemy. We also believe that autonomous weapon systems increase the risk of accidental and rapid conflict escalation. One could imagine that in a country such as Libya, where multiple allied countries are present, a fully autonomous drone unintentionally attacking an allied target could lead to escalation between those countries.
Moreover, one important factor that autonomous weapon systems lack is situational awareness. As mentioned before, an autonomous weapon system cannot distinguish between civilians fleeing a conflict site and insurgents making a tactical retreat, or may fail to detect an enemy waving a white flag of surrender, thereby violating the Geneva Conventions by striking the target. This could cause serious international diplomatic problems, even between countries that are allies now but whose relations might turn because of a decision made by a fully autonomous weapon.
However, with the right regulations ensuring civilian protection in compliance with international humanitarian law, we believe that autonomous weapons in which a human being is always supervising, can intervene when necessary, and must give the final command before striking should be an acceptable option. This way fewer errors will be made, as we do not depend entirely on the machine-learning algorithm. When used correctly, we believe AI can greatly assist nations in warfare by sparing soldiers the suffering of the battlefield, and in the future war may well become an all-out robot war. While still not ideal, we would prefer robot warfare over human warfare.
All in all, we would prefer that autonomous weapons be banned completely. But since this is an unlikely outcome, as superpowers oppose a ban and the technology is advancing rapidly, regulations similar to the Geneva Conventions should be written in order to prevent errors and war crimes.