The Development and Deployment of Autonomous Weapon Systems Violates International Humanitarian Law, and There Is Nothing We Can Do About It.

Artificial intelligence is changing every aspect of war. Current warfare has already been shaped by artificial intelligence, and the amount of advanced AI technology on the battlefield is growing. One of the fastest-developing technologies is the fully autonomous weapon system (AWS), also known as the “killer robot”. Autonomous weapon systems, as defined by the U.S. Department of Defense, are “weapon systems that, once activated, can select and engage a target without further intervention by a human operator”. In recent years, parties in favor of this weaponry have argued that these weapons could save lives and execute warfare with more accuracy. However, governments, scientists, experts, and campaigners are speaking up, saying that autonomous weapons are a threat to human values and rights on the battlefield, and they therefore demand a ban on AWS, since these weapons violate international humanitarian law. International humanitarian law is a set of rules which seek, for humanitarian reasons, to limit the effects of armed conflict. With many media sources covering this violation and the negative consequences of AWS receiving most of the attention, you would expect that something is being done to address this breach of the law. Sadly, this is not the case. This article discusses the legal issues raised by AWS and the ways in which they violate international humanitarian law. It then discusses two reasons why the general public is powerless to counter this problem.

The Accountability Gap

Who would be held accountable for a death at the hands of a killer robot? The difficulty of determining who is accountable for unlawful killings by lethal autonomous weapons is a strong legal argument in favor of banning these weapons.

AWS themselves cannot replace a responsible human being as defendants in a legal proceeding whose purpose is deterrence and retribution. These weapons do not act with criminal intent and therefore could not be punished. Because these robots would be designed to kill, someone should be held legally accountable for unlawful killings and other harm the weapons cause. Who should that someone be: the programmer or manufacturer developing the weapons, or the commander deploying them? A survey conducted in 2015 found that, in the subjects’ opinion, political and military leaders should be held accountable when AWS are used, though respondents also attributed responsibility to the manufacturers and programmers of these weapon systems. I would disagree with holding programmers or manufacturers accountable. These groups may not know what the weapons will do in combat. AWS make decisions themselves without human input, and this decision-making software grows ever more complex as it needs to cope with all kinds of novel variables on the battlefield. In addition, the code is written by teams of programmers, none of whom knows the entire program by themselves. This makes the actions of autonomous weapons unpredictable.

Holding the commander responsible for the autonomous weapons he deployed makes sense under current law, “since it imposes liability on an individual with power and access to information who benefits most concretely from the autonomous weapons systems capabilities in war-fighting.” However, in order to hold the commander accountable for the actions of AWS, the systems should at least have some degree of predictability. If the actions of the system are hard to predict, the commander cannot be in a position where he or she should have known what was about to happen. As already mentioned, AWS are not always predictable enough for the commander to bear full accountability. Moreover, the reliability of the weapons needs to be sufficient. Knowledge of how consistently a machine will function as intended can be gained by testing the devices in realistic environments. These environments can never be fully identical to real-life deployment environments, again because of the variability of the battlefield, so it is hard for the commander to fully rely on the actions independently executed by autonomous weaponry.

During a UNSW Grand Challenge event, Professor Jessica Whyte, a Scientia Fellow in the School of Humanities & Languages and UNSW Law, said there is no evidence at all that there could be any accountability once lethal weapons become fully autonomous. She also pointed to the speed of machine decision-making as a problem for accountability: “Autonomous weapons systems will make targeting decisions more quickly than humans are able to follow. Accountability in such circumstances will be particularly difficult,” Professor Whyte said.

Proponents of fully autonomous weapons counter the accountability problem by imagining new legal regimes that could provide compensation to victims, establish some predictability, and set limits on the defendant’s costs. These are the so-called “No-Fault Compensation Schemes”. Such schemes have been proposed more broadly for personal injuries resulting from the use of AI and for accidents caused by self-driving vehicles. No-fault systems have already been used in the medical sector to compensate patients who were injured by vaccines.

However promising these no-fault systems sound, I think they would not capture the elements of accountability under international humanitarian law. Compensating a victim for harm involves neither moral blame nor deterrence, and therefore falls short of full accountability.

The Violation of the Martens Clause

A second legal argument against AWS is the violation of the Martens Clause. The Martens Clause, introduced in 1899, is a legal and moral standard for judging new technology not covered by existing treaty language. The International Law Commission has defined the Martens Clause as follows: “[the Martens Clause] … provides that even in cases not covered by specific international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.” Bonnie Docherty, writing in The Conversation, explains that autonomous weaponry would not comply with the principles of humanity and the dictates of public conscience. “The principles of humanity require humane treatment of others and respect for human life and dignity. Fully autonomous weapons could not meet these requirements because they would be unable to feel compassion, an emotion that inspires people to minimize suffering and death,” Docherty claims. Moreover, the weaponry lacks the ability to value human life or the significance of its loss. Individual lives are translated into zeros and ones in the algorithms, and lethal decisions are made on the basis of these programs. Reducing human targets to objects violates their human dignity, and thereby the first part of the Martens Clause.

Another element of the Martens Clause that is violated is the dictates of public conscience. Experts, governments, and the general public have objected to the possibility of using force without human control. Thousands of scientists and artificial intelligence experts, including Elon Musk and Stephen Hawking, support the prohibition of autonomous weapons, have demanded action from the United Nations, and have issued a pledge not to assist with the development of fully autonomous weapons. Thirty countries support a ban on AWS, and most of them have spoken at a U.N. meeting, stating that when weapons are used there should be some form of human control. The 75 nongovernmental organizations of the Campaign to Stop Killer Robots, co-founded and coordinated by Human Rights Watch, show the scale of opposition among nongovernmental groups.

Source: Campaign to Stop Killer Robots

A scientific journal article from 2018 states that the violation of the Martens Clause is not enough on its own to ban autonomous weapons: “The Martens Clause was simply not intended to be the primary weight-bearing pillar of international humanitarian law. Basing a preemptive ban on weapons on its spare text would, therefore, be unprecedented and unwarranted.” I agree with this statement, but I do think such a ban would not be preemptive, as major countries are already deploying AWS, which I will explain later in this article.

The Violation of the Principle of Distinction

Before entering combat, soldiers must comply with the principle of distinction, which forms the prime directive of international humanitarian law. The principle of distinction requires parties to an armed conflict to refrain from intentionally targeting civilians, thereby distinguishing between combatants and civilians. Peace organization PAX has expressed concerns about whether AWS could properly distinguish between civilians and combatants and make proportionality assessments. The popular press shares this view, arguing that autonomous weapons cannot yet make a clear distinction and that the line between combatant and civilian is becoming increasingly blurred in cyberspace.

A scientific article from 2014 states that only a few weapons have failed to be found lawful in weapons reviews. According to the article, the operational setting is very important: settings where civilians are unlikely to be present do not require weighing military advantage against civilian harm. But in settings where civilians could be present, can the weapon system distinguish between combatants and civilians with complete accuracy? This is a fair question to ask.

An article in the Global Policy Journal argues that the principle of distinction is, overall, not the strongest argument for banning fully autonomous weapons. The authors claim that although the inability of autonomous weapons to distinguish between civilians and combatants is inherently unlawful, lethal autonomous weapons become acceptable once they reach the point where they can make that distinction. According to the authors, technological progress will enhance the weapons’ capabilities until this point is reached, making the argument obsolete. I believe this point may indeed be reached, but based on current technological development it will take at least another ten years, which makes the violation of the principle of distinction a solid argument for now, as the weapons are being deployed as we speak.

What can the general public do about it?

As discussed above, the conclusion that AWS violate international humanitarian law rests on several characteristics of these systems, but what can we do about it? Although the press and campaigns show the scale of opposition to autonomous weapons, in my opinion nothing major has been accomplished toward banning this weaponry. This is mainly due to two reasons: the AWS arms race between major powers and the unsettled definition of AWS.

The Arms Race

The annual meeting of the Convention on Certain Conventional Weapons (CCW) is the place where a ban on AWS could be initiated. The function of the CCW is to restrict or ban the use of specific types of weaponry considered to cause unlawful suffering to combatants or civilians. The structure of the CCW was designed this way to ensure future flexibility: the convention itself contains only general provisions, and protocols are annexed to it. Three protocols have already been annexed, including the prohibition on the use of mines. Followers of the Campaign to Stop Killer Robots are present every year, along with the states that favor a ban, who make up the majority of the attendees. That would be enough to initiate a ban on AWS, right? Wrong. The CCW operates by consensus, meaning that if the entire room of diplomats wants to move forward with a treaty and one state says no, it goes nowhere. Consensus can therefore only be reached on insignificant changes that barely affect the problem, and such agreements are far too weak to grow into a ban on AWS.

The arms race can cause many casualties

The states in the CCW opposing a killer-robot ban are major powers such as the United Kingdom, Australia, Israel, Russia, China, and the United States. For now, it seems they will continue to stand their ground, as most of their reasoning comes down to the arms race. Armed drones and other weapons with varying degrees of autonomy have become far more commonly used by the high-tech militaries of these countries. South Korea has announced plans to develop a drone swarm army, and China has been testing autonomous weapons on land, in the air, and at sea. Israel has the Harop, a loitering munition that operates autonomously and already has lethal results to its name. The world’s most powerful nations are already at the starting line of a potentially deadly arms race, while the regulators, the majority in the Convention, lag behind. Dr. Elke Schwarz, a member of the International Committee for Robot Arms Control, confirms this: “It’s clear that the U.S., Russia, and China are vying for pole position in the development of sophisticated artificial intelligence in combat.”

Thirty countries have signed the list to ban killer robots, most of which are not even developing this technology and have small militaries. China is on the list as well, supporting a ban on the use of AWS, but not on their development or production. At first sight, thirty countries supporting the ban sounds like a hopeful number, but it means close to nothing when a treaty requires consensus and there are always major players present who disagree with it.

The Definition of AWS

Even if the members of the CCW reach a consensus on a treaty that describes some way of banning AWS, another problem lies ahead. To this day, it is not entirely clear what the exact definition of AWS is. Paul Scharre of the Center for a New American Security puts it as follows: “When you say autonomous weapon, people imagine different things. Some people envision something with human-level intelligence, like a Terminator. Others envision a very simple robot with a weapon on it, like a Roomba with a gun.”

For this reason, the UK opposes a ban on killer robots: it “believe[s] a preemptive ban is premature as there is still no international agreement on the characteristics of lethal autonomous weapons systems.” Considering the speed at which protocols are annexed, a definitive definition would take shape too late to stop the development of these weapons. Besides, even if a ban on AWS were initiated right now, it would be hard to bring certain newly developed autonomous weapons under the AWS definition, making those weapons hard to ban as well.

As reviewed in this article, the development of AWS poses challenges for international humanitarian law. The accountability gap and the violations of the Martens Clause and the principle of distinction are strong legal arguments in favor of banning autonomous weapons. One would expect violations of the law to be followed by prosecution, but this is currently not the case. Because of the arms race between big countries with large militaries, and the unsettled definition of AWS, banning autonomous weaponry today seems impossible. All in all, I believe that short-term action is needed from all governments, rather than from the general public, to confine the development of autonomous weapons before it goes out of control. All we can do is stand our ground, spread the word, and protest to wake up those governments. Contact your government or spread the word by pressing the following button:

Learn more
