It has been a century since the term “robot” was first used, in the science fiction play Rossumovi Univerzální Roboti, or Rossum’s Universal Robots (R.U.R.), written by the Czech author Karel Čapek in 1920, and it is remarkable how robots have journeyed from science fiction to reality. Although the idea of a robot originated in a science fiction drama, robots have always held a special attachment for human beings: they are the closest representation of ourselves in an insentient form, and studying them satiates our primaeval urge to understand ourselves. The same impulse also stoked our curiosity to build them, whether to perform repetitive and mechanical activities or to increase human convenience. That curiosity has now brought us to a point where robots are becoming more intuitive and lifelike. Robot development is happening at an accelerated rate and aims to achieve human-like intelligence and emotion in the coming decades. Given the current exponential growth in our technological capacity, it is quite possible that in the near future we will develop intelligent machines that possess human-like emotions and feelings and acquire consciousness. If we learn how to artificially create life itself, it will be regarded as one of humanity’s greatest achievements. It will also raise challenging questions: How will these machines amalgamate with human society? Do we need to treat them as human beings? Do we need to build laws for robots? If robots start hurting human beings, can they be held accountable for crimes? In this article we take a deeper look at these questions and try to establish why we need to define laws for robots, whether robots are entitled to such laws, and why we need a systematic framework that handles robots as a separate legal entity if they acquire human-like consciousness.
We will also discuss how such a framework should represent all sections of artificially intelligent robots. Toward the end of the article, we discuss the challenges we face, or might face, in implementing and evolving such a framework.
When thinking about artificial intelligence, a layperson may intuitively think of the technological singularity: the point at which technological advancement produces a superintelligence that surpasses human intelligence, whose agents are perhaps even capable of destroying Homo sapiens. These frightening scenarios are a staple of science fiction films such as “I, Robot”, “Terminator” and many more. If we can build artificially intelligent machines possessing superhuman intelligence, then we need to treat those machines humanely, because even if they were created artificially, they would still have ownership over their emotions and experience pain. Even though our invention would amount to creating an artificial form of life itself, the creator cannot be the sole proprietor of the life it created. The situation is similar to that of a mother and child: although she is entitled to be the child’s caretaker, she cannot legally own the life itself and decide one day to end it. Naturally, if we ever encountered a group of rogue robots, we humans would think first of our own survival. We do not want to be harmed by our own creation, and hence there is a need for a legal framework that enables the safe and ethical use of conscious, artificially intelligent machines without infringing any rights those robots might hold. In 1942, Isaac Asimov attempted something similar by publishing a set of rules intended to underpin the behaviour of robots. These rules, known as Asimov’s Three Laws, state, first, that a robot may not harm a human being in any way; second, that a robot must obey the commands given to it by humans, except where such an order would harm someone; and third, that a robot must protect its own existence.
More contemporary approaches have also followed these basic principles, but they adjust them to some extent and call for an empowerment of robots that adds a degree of practicality: for example, a robot may harm a human if doing so avoids greater harm to that individual or to a larger collective. However, Asimov’s rules take only a superficial view of the whole situation, and hence we need a concrete legal framework for the ethical design and use of robots.
Before we can even begin building such a legal framework, we need to recognize that two types of artificially intelligent robots could be built in the coming future. The first are robots that are living beings and possess consciousness; the second are robots that do not require human-like intelligence and do not possess consciousness. Since these two types of robots differ completely in functionality and utility, we need separate sets of laws that treat them as separate entities.
To define laws for conscious, living robots, we first need to consider what consciousness is and how we might quantify it. Consciousness refers to the subjective character of experience and is sometimes defined as the state of being aware of and responsive to one’s surroundings. However, we do not know what gives rise to conscious experience, and there is no agreed physical basis for it; as a result, we also do not know which entities are conscious and which are not. Some have claimed that consciousness arises in the prefrontal cortex, but this remains controversial. We therefore have a broad idea of what consciousness is, but a clear definition of the term is still lacking, which makes it difficult to answer whether robots are conscious. Do robots properly sense and perceive smells? Can they hear sounds or perceive colours? Do robots experience actual emotions? Some scholars argue that contemporary technology already displays a form of consciousness, because robots can now share information on a global scale and thus communicate with each other. Contemporary robots also possess metacognition, that is, an awareness of their own existence, which allows them to perform the self-monitoring that is a major component of consciousness. Given the accelerating pace of robotics and of artificial intelligence generally, it may only be a matter of time until robots become humanlike and possess consciousness. It is therefore important to lay the first stepping stones now and create guidelines on whether to protect the feelings and emotions of robots. This debate is highly culturally shaped, since in the modern world we already give some objects and animals more rights and others fewer, regardless of their level of consciousness.
We must therefore ask ourselves whether it is ethically correct to give robots fewer rights and less legal protection than other living beings, even though the scientific grounds for doing so are lacking. We believe that robots which acquire human-level consciousness need laws as strict as human protection laws, and that a thorough revision is needed to create modern design principles that can encourage ideal robot behaviour in critical situations.
As discussed earlier in the article, it would be naive to apply a single legal framework to all types of artificially intelligent robots, because not all artificially intelligent machines will possess advanced capabilities such as consciousness. There will be another class of robots used for tasks that do not require a high level of intelligence, and they will need a different set of laws. The biggest question here is: if robots do not possess consciousness and have a lower level of intelligence, is it acceptable for us to treat them badly or abuse them? Some would argue that since such robots cannot feel and have no emotions, we can treat them as we treat a non-living thing like a chair or a table; on this view, abusing your robot by throwing or damaging it would not be unethical. This approach seems logical at first glance, but there is a deeper issue: our treatment of these robots can influence our treatment of other living creatures. Although these robots are not intelligent, they are still a representation of human beings, and a person who purposely seeks to damage robots can be assumed to have a troubled disposition, behaviour that should be frowned upon. Treating these robots inhumanely, or ignoring anti-cruelty protections for them, should be regarded as a degrading action: abusing a robot will not hurt the robot, but it will certainly make you a crueller person. Damaging or abusing any robot should therefore be considered unethical, and there should be laws similar to anti-cruelty laws to protect robots from it. At the same time, these laws should be lenient, since harming such robots hurts no one directly and hence cannot be considered a serious offence.
Developing such a legal framework is a difficult task, especially at a time when only a handful of big technology companies hold a monopoly on developing these technologies. Tech giants such as Amazon, Google, and Facebook, being profit-driven companies, could also hinder the development of fair laws for sentient robots, since such laws might directly or indirectly affect the way these companies make profits. Hence there is also a clear need to regulate how these sensitive technologies are developed by the large technology organizations.
To conclude: since we have little reason to exempt robots from having rights, there is no scientific ground for denying them the appropriate legal protection that we grant all living beings. There are also sufficient arguments for appropriate legal protection even if we assume robots possess no consciousness at all, because the way we treat robots will influence the way we treat other living beings, and abusing or mistreating a robot says more about us than about the robot. It is generally considered immoral to mistreat human-like objects, since doing so increases the chance of mistreating actual humans in the future, especially in the case of children. In our world, whether we grant someone rights does not depend much on scientific evidence: in some cultures dogs are killed to be eaten, while in others they are considered man’s best friend and killing them without any medical reason is a criminal offence. We therefore have to think now about how we want to treat robots in the future, since this will inevitably steer the direction in which our society is going. We argue for humane legislation on the treatment of robots, similar to the legislation that protects pets in the Western world. Robots are going to look, behave, and perhaps even feel a lot like actual humans in the future. Robots that are highly intelligent and capable of sensation and perception require strict laws; robots that do not have consciousness still need to be protected under anti-cruelty laws. Above all, we should define how to smoothly amalgamate intelligent systems into our society so that we can enjoy the convenience of human-like robots while still following an ethical approach.
Rights for Robots in a Digital Future