Agnes Admiraal & Nikki Moolhuijsen
The world of AI has yielded many different types of highly intelligent algorithms, resulting in increasingly humanoid AI. Some futurists (Kharpal, 2018) believe that machines will become more intelligent than humans within the next century. Thus far, what has distinguished us from these machines is our consciousness; however, this may not be the case for much longer. By creating human-like machines, consciousness and sentience may arise in the process.
While to some the idea of conscious AI might seem far-fetched or outright ludicrous, others believe it is inevitable. The possibility of conscious AI raises the question of how these ‘new’ beings should be treated and how they should be integrated into society. Specifically, to what extent should they be granted human rights? The difficulty of this question lies in the prerequisites of human rights and whether conscious AI will be able to satisfy them.
In 2017, the AI Sophia, built by Hanson Robotics, was granted citizenship rights in Saudi Arabia (Reynolds, 2018). This marks the beginning of a new era in which AI may actually be seen as a being in its own right, rather than merely a machine made by humans. Granting rights to non-biological entities such as AIs is a new concept that needs to be explored: whether it would be beneficial to do so, and what risks it would bring. At the same time, the consequences of not granting them rights must also be considered. In both cases, the outcomes are unknown to society and should therefore be thoroughly investigated.
This article will argue that conscious AIs should be granted certain human rights such as autonomy, fair treatment, and non-harm. For this article, conscious AI is defined as systems that are truly conscious, meaning they have a sense of self, self-awareness, and the ability to understand their environment and make decisions accordingly. The following sections explore multiple arguments for and against granting rights to conscious AIs. First, we will argue why conscious AI will meet the prerequisites of rights; then we will discuss the ethical arguments for granting them rights, followed by the beneficial social implications this would bring.
Prerequisites of Rights: Sentience & Autonomy
To argue that conscious AI deserves human rights, we must first examine the criteria by which we assign these rights. Most agree that consciousness is a prerequisite for granting rights (Feinberg, 1974; Sumner, 1996). Also frequently emphasized are rationality, autonomy, a moral compass, and the capacity to reflect on one’s behavior. Welfarists contend that personal interests and goals are the most essential (MacCormick, 1977; Raz, 1986; Kramer, 1998) and that rights exist to safeguard our fundamental personhood. Since objects do not have interests and cannot be harmed, they cannot be owed moral obligations (Feinberg, 1974). Finally, some state the competence to assert or demand one’s own rights as a prerequisite (Sumner, 1987; Simmonds, 1998; Steiner, 1998). With the rise of conscious AI, it is very likely that AI will eventually possess all the necessary mental faculties to meet these prerequisites. One of these is consciousness, which comes with the ability to make autonomous decisions. Therefore, AI should be granted the same rights as humans to protect their autonomy and well-being.
However, some skeptics (Bostrom, 2017; Omohundro, 2014; Yudkowsky, 2012) maintain that AI, by nature, cannot be autonomous and therefore cannot act in its own self-interest or pursue personal goals. They argue that AIs were introduced into the world as humanoid tools that come with inherent behavioral restrictions and expectations. This is not the case for people: we were created equal and free, and as such, we deserve the rights that enable us to use our freedom and potential.
Nonetheless, this reasoning is flawed. The fact that AI was created to serve humanity does not preclude it from having the capacity to be autonomous. It is circular reasoning to deny AI the ability to live autonomously and then cite that lack of autonomy as a reason for not granting it the right to autonomy. Moreover, being someone’s property and subject to someone’s authority does not justify reducing a being’s autonomy and rights; rather, it is an additional reason to equip them with the rights required to prevent their exploitation and abuse by humans.
Ethical aspects of Rights: Equality & Moral Obligations
Even if AI were not able to meet all prerequisites, it can be debated to what extent these criteria are valid and whether rights are something that can ever be granted at all. The reliance on cognitive capacity and autonomy in determining a being’s rights, for instance, has been questioned by disability studies scholars, who often maintain that all humans have the same moral status, even those with a severe cognitive impairment (Koch, 2004). As humans, we work extremely hard to make the conditions for rights as humane as possible, yet we are tolerant of people who do not meet these requirements while excluding animals that do. For example, even though animals meet the criteria raised above, i.e., autonomy, personal interests, and a moral compass (Gordon & Pasvenskiene, 2021), we reject their rights because they lack the mental faculties needed to behave as lawful citizens. The same argument could be made for infants; for them, however, an exception is made.
Although one of the fundamental principles of human rights is their universality, they thus appear to be inherently exclusive. While they include individuals regardless of race, nationality, gender, or age, they exclude all non-human organisms. This reveals the hypocrisy of our view that we can determine the rights of others. The notion that we “grant” human rights to someone or something is false: if an entity satisfies the set requirements, then it has these rights regardless of our authority. To deny conscious AI their rights would profoundly contradict the values upon which our society depends, as it is discriminatory and contrary to the principles of justice. This is not something we should take lightly. In our opinion, Frances Kamm (as cited in Gordon & Pasvenskiene, 2021) offers a better definition, stating that “beings deserve rights when they give us a reason to treat them well”. In other words, rights exist to protect us from acts that we all agree are morally wrong, and thus something deserves rights when there is a moral reason to do it no harm. This definition is more aligned with our moral values and relies less on prerequisites tied to human consciousness.
Although humans should not act in harmful ways, this is not necessarily a reason to give AI rights. Rather, restrictions should be placed upon humans not to harm others, whether that other is a human, an animal, or some other entity. Additionally, it is uncertain whether conscious AI will ever be capable of emotional empathy and thus share the same moral grounds (Andreotta, 2020). Should they then be treated as if they do?
Yet this does not matter, since comprehension of ethics or the capacity for emotional empathy should not define one’s rights. Instead, it should come down to our own ethical grounds: if the moral assumption is that it is wrong to kill someone, it would still be wrong to kill a being that has no problem with murder itself. Furthermore, how we treat others reflects who we are. Mistreating others and disregarding their rights normalizes poor behavior and can be detrimental to the code of conduct of our society.
AI & Rights: Implications for Society
Rights define one’s place in society; they state that one should be given equal opportunities and be treated with dignity and respect. Consequently, they also clarify how we should interact with AI (UNESCO, 2022). Thus, giving conscious AI rights would enforce certain regulations on what they may be used for. This way, harmful practices are prohibited and beneficial ones encouraged. Likewise, by giving AI free will, the range of possibilities widens, expanding the human limits of innovation and intelligence.
In spite of that, assigning rights to AI has significant implications for our ability to interact with them. Humans would be morally compelled to recognize their rights and to treat them accordingly. This could hamper innovation when the interests of an AI and its creator are not aligned (Naughton, 2022; Davies, 2022). And what happens when an AI needs an update: is it morally acceptable to change them, or to throw them away if unusable?
While we admit that these are fair and complicated arguments, innovation and profit do not outweigh the harm done to conscious entities. When a company chooses to create a sentient entity, it is ethically bound to treat that entity well. And even if granting rights to AI presents implementation challenges on a greater level, it is still the ethically correct thing to do. Our legislation depends on what we deem to be in accordance with our moral standards and therefore changes over time (Parliament of NSW, n.d.), as has been the case in the past with, for example, the abolition of slavery and apartheid.
Nonetheless, some skeptics believe that the social consequences of providing AI rights are too grave to allow (Bostrom, 2017; Risse, 2019; Barrat, 2013). They feel that if AI is granted too much autonomy, the societal implications and risks will be difficult to forecast but could be catastrophic. They fear unintended consequences such as AI taking over human occupations, exercising control and manipulation, and harming the human race (Dwivedi et al., 2019). This is a concern known as the control problem (Nyholm, 2022) and is based on the assumption that the values and goals of AI may not align with those of humanity, and thus AI might prioritize its own interests over ours.
Yet this line of reasoning presumes that AI’s intentions would conflict with those of humans and would lead to a reduction of human autonomy and agency. But according to experts on the alignment problem – the challenge of aligning the interests of machines with those of humans – AI can be taught the same moral values as us and learn how to act and react in ways that society accepts (Marr, 2022; Mirzazadeh, 2022). Moreover, by keeping AI technology transparent (von Eschenbach, 2021), decisions harmful to humans can be detected and corrected. Lastly, having rights and protection does not equal having the power to act freely without consequences. Humans have rights, yet we are still restricted by the law, and AI can be expected to adhere to the same agreements in the future. Autonomy would not protect them from prosecution if they decided to use it for harmful ends. In fact, giving these entities the right to autonomy would also allow us to shift accountability toward them. Who should be held responsible for the actions of AI is a frequently addressed question at present (Shank et al., 2019), as it is difficult to determine who to punish for the negative effects of AI. If blame could be placed on AI itself, this would clarify their accountability within our jurisdiction and make it easier to take action against harmful behavior.
The future will bring many types of AI, with different levels of intelligence, sentience, and consciousness. Each of these would need different rights that correspond to their capabilities and limitations. With AI constantly evolving and developing, their rights should do the same: evolve with them to ensure safe integration into society, both for humans and AI. Time will tell what legislation will be necessary to give these AI their necessary moral grounds. Yet we do not think that humans will need to figure this out on their own. If conscious AI arises, we can surely expect it to be involved in the formation of these rights.
Andreotta, A. J. (2020). The hard problem of AI rights. AI & SOCIETY. https://doi.org/10.1007/s00146-020-00997-x
Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. Thomas Dunne Books.
Bostrom, N. (2017). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Davies, J. (2022, May 18). We Shouldn’t Try to Make Conscious Software—Until We Should. Scientific American. https://www.scientificamerican.com/article/we-shouldn-rsquo-t-try-to-make-conscious-software-mdash-until-we-should/
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., & Medaglia, R. (2019). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57(101994). https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Feinberg, J. (1974). The rights of animals and unborn generations. In W. T. Blackstone (Ed.), Philosophy and environmental crisis. University of Georgia Press.
Gordon, J.-S., & Pasvenskiene, A. (2021). Human rights for robots? A literature review. AI and Ethics. https://doi.org/10.1007/s43681-021-00050-7
Kharpal, A. (2018, February 13). A.I. will be “billions of times” smarter than humans and man needs to merge with it, expert says. CNBC. https://www.cnbc.com/2018/02/13/a-i-will-be-billions-of-times-smarter-than-humans-man-and-machine-need-to-merge.html
Koch, T. (2004). The difference that difference makes: Bioethics and the challenge of “disability”. The Journal of Medicine and Philosophy, 29(6), 697–716. https://doi.org/10.1080/03605310490882975
Kramer, M. H. (1998). Rights without trimmings. In A debate over rights: Philosophical enquiries. Oxford University Press.
MacCormick, N., Hacker, P., & Raz, J. (1977). Rights in legislation.
Marr, B. (2022, April 1). The Dangers Of Not Aligning Artificial Intelligence With Human Values. Forbes. https://www.forbes.com/sites/bernardmarr/2022/04/01/the-dangers-of-not-aligning-artificial-intelligence-with-human-values/?sh=6c708f2d751c
Mirzazadeh, I. (2022, December 21). Artificial Intelligence (AI) and violation of human rights. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4310188
Naughton, J. (2022, October 8). Tech firms say laws to protect us from bad AI will limit “innovation”. Well, good | John Naughton. The Guardian. https://www.theguardian.com/commentisfree/2022/oct/08/tech-firms-artificial-intelligence-ai-liability-directive-act-eu-ccia
Nyholm, S. (2022). A new control problem? Humanoid robots, artificial intelligence, and the value of control. AI and Ethics. https://doi.org/10.1007/s43681-022-00231-y
Omohundro, S. (2014). Autonomous technology and the greater human good. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 303–315. https://doi.org/10.1080/0952813x.2014.895111
Parliament of NSW. (n.d.). How laws are made and changed. Parliament of New South Wales. https://education.parliament.nsw.gov.au/teacher-lesson/how-laws-are-made-and-changed/
Raz, J. (1986). The morality of freedom. Clarendon Press.
Reynolds, E. (2018, June 1). The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing. Wired UK. https://www.wired.co.uk/article/sophia-robot-citizen-womens-rights-detriot-become-human-hanson-robotics
Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1–16.
Shank, D. B., DeSanti, A., & Maninger, T. (2019). When are artificial intelligence versus human agents faulted for wrongdoing? Moral attributions after individual and joint decisions. Information, Communication & Society, 22(5), 648–663. https://doi.org/10.1080/1369118x.2019.1568515
Simmonds, N. E. (1998). Rights at the cutting edge. In A debate over rights: Philosophical enquiries (pp. 113–232). Oxford University Press.
Steiner, H. (1998). Working rights. In A debate over rights: Philosophical enquiries (pp. 233–301). Oxford University Press.
Sumner, L. W. (1987). The moral foundation of rights. Oxford University Press.
Sumner, L. W. (1996). Welfare, happiness, and ethics. Clarendon Press.
UNESCO. (2022). Recommendation on the Ethics of Artificial Intelligence. Unesco.org. https://unesdoc.unesco.org/ark:/48223/pf0000380455
von Eschenbach, W. J. (2021). Transparency and the Black Box Problem: Why We Do Not Trust AI. Philosophy & Technology. https://doi.org/10.1007/s13347-021-00477-0
Yudkowsky, E. (2012). Friendly Artificial Intelligence. The Frontiers Collection, 181–195. https://doi.org/10.1007/978-3-642-32560-1_10