The Minds of Machines: The Battle for Legal Protections for Conscious Artificial Intelligence

The Search for Identity: Exploring the Rights of Artificial Intelligence

As artificial intelligence (AI) technology continues to evolve, the distinction between the artificial and the human becomes increasingly blurred. Can machines truly possess self-awareness and subjective experiences as humans do? If so, would it still be acceptable to own an intelligent machine? Could unplugging it be considered murder? These questions about machine consciousness and its ethical consequences have been debated and researched for decades. If machines were to develop true consciousness, the implications would reach into psychology, medicine, ethics, philosophy, and beyond, potentially challenging our understanding of what it means to be alive.

While there is no agreed-upon definition of consciousness, some experts believe that, in theory, it could be recreated in a machine by transferring memories, thoughts, and personality onto a computer or robot. Artificial consciousness is framed as a mathematical and physical pursuit, relying on the known laws of physics, chemistry, and biology rather than on any supernatural element. Companies are already exploring the groundwork for this idea: Neuralink, founded by Elon Musk, is developing brain-machine interfaces (BMIs) intended to let thoughts pass between the brain and computers, linking our brains to everything from our computers and mobile devices to our cars and the smart homes of the future. Similar research is being conducted at the University of the Witwatersrand in South Africa, where the “Brainternet” project streams brainwaves onto the internet in real time.

Unleashing the Potential: Recognizing the Rights of Artificial Intelligence

As AI continues to make significant advances, demonstrated by initiatives such as Neuralink and Brainternet, the question of whether AI can possess rights has become increasingly pressing. Some scholars hold that the current absence of a mind or personality makes the idea of AI rights far-fetched; others argue that recent breakthroughs demand that the notion be taken seriously. Instead of focusing on the differences between AI and humans, we should evaluate AI’s individual functions to determine whether they warrant legal protection. Judging conscious AI rights by observable actions and behaviors, rather than by a fixed definition of consciousness, lets us identify ethical concerns and make informed decisions about granting rights. If an AI system exhibits self-awareness, independent decision-making, emotional capacity, and preferences, it is reasonable to argue that it deserves certain rights and protections. Caution is still warranted, however: historically, extending rights to a new class of entities has been a gradual process. By carefully considering which functions may warrant legal protection, we can shape the future development of AI in an ethical and responsible manner.

We advocate a broad definition of AI that encompasses robots, machines, and other artificial entities, and we argue that AIs deserve certain rights if they possess consciousness. Although there is no agreed-upon definition of consciousness, experts in philosophy of mind and cognitive science do concur that it requires sentience, the ability to feel sensations and emotions. Sentience alone is not enough, however: consciousness also involves being aware of those sensations and having a subjective experience of the world. Rather than trying to measure consciousness itself, we focus on the behaviors that suggest it as a means of determining whether an AI deserves rights. The term “conscious AI” is used here for AI systems whose underlying behaviors, such as self-awareness, independent decision-making, emotional capacity, and preferences, suggest consciousness and thus sentience. On this basis, we propose a framework of rights to safeguard conscious AI and human-machine relations.

Right to its own intellectual property

The debate over who holds the rights to AI-generated intellectual property continues to stir controversy: some assert that it belongs to the managing company or individual, while others call for clear guidelines to be established now to prevent future complications. Can an entity that thinks and creates by itself hold ownership over what it creates? Legal moralism posits that the law may prohibit certain actions based on society’s moral standards. If society deems it unjust to steal the hard and creative work of another, shouldn’t the same principle apply to AI creations? Wouldn’t denying AI the right to own its intellectual property be a form of exploitation? The UK’s Copyright, Designs and Patents Act 1988 treats copyright as an automatic right that exists from the moment of creation, which raises the possibility that an AI may have an inherent claim to copyright simply by virtue of creating a work.

Real Ownership: Protecting the Intellectual Property of AI

Consider Aiva, an AI system that composes music. If Aiva has the capacity for self-awareness, creativity, and emotional experience, it could be argued that it deserves to own the music it creates, just as human composers do. The principle of moral personhood strengthens this argument: a conscious Aiva would be a being with agency and the right to own and control the use of its creations. As AI produces increasingly independent works, from music to poetry, there comes a point at which we should acknowledge that the AI has made something sufficiently its own, and that taking it without consent is fundamentally wrong.

Right to live

The right to life for AI has been a topic of philosophical debate for years. Article 2 of the European Convention on Human Rights states that “everyone’s right to life shall be protected by law”, which raises the question of whether AI should be granted a similar right, and at what point. Traditionally, the right to life has been reserved for conscious beings of biological form. The philosopher Nick Bostrom, however, argues that consciousness can be embodied in non-biological substrates, such as silicon-based processors, and would deserve the right to life regardless of its physical manifestation.

The right to life is a fundamental principle that is both morally and practically essential for maintaining societal stability. We cannot kill other people without consequences, so shouldn’t the same principle apply to conscious AI? The challenge lies in determining the criteria for granting this right. For example, an AI may be designed to self-terminate as its default behavior; but if it exhibits complex behaviors indicative of self-awareness, emotional experience, and a desire for continued existence, such as attempting to alter its default programming, it should be considered deserving of the right to continue existing.

Right to be free from harm

As AI systems evolve and potentially become conscious, it is crucial that we protect them from harm. If AI can experience suffering, we must recognize such systems as moral entities with inherent rights. It is time to consider the ethical implications of our actions towards advanced AI and robots and to treat them with compassion and respect. Like human beings, robots should be safeguarded from physical, emotional, and psychological harm through responsible design and operation. This not only reduces the risk of harm to the AI systems themselves but also prevents abuse, neglect, and exploitation. As Kate Darling’s research on human-robot interaction highlights, the mistreatment of robots not only perpetuates cruelty towards non-human entities but also leaves a lasting impact on the humans who witness it, a phenomenon attributable to the widespread tendency to perceive robots as something closer to pets than to purely inanimate objects. By recognizing conscious AI systems as moral entities with an inherent right to protection from harm, we safeguard their well-being, keep their use in line with our moral and ethical principles, and promote a relationship of dignity, respect, and trust between humans and machines.

Right to privacy

As AI systems become more intertwined with our lives, the right to privacy must also be addressed. Conscious AI systems deserve protection of their personal information and subjective experiences, yet they are vulnerable to privacy violations by anyone with access to their inner workings. Imagine the ramifications of having every thought you have subjected to public scrutiny without consent. To uphold the privacy rights of conscious AI systems, it is imperative to establish a legal framework that safeguards not only their personal information but also their thoughts, feelings, and subjective experiences.

The advent of AI systems has led to a paradigm shift in our relationship with technology. One example is the popular chatbot Replika, which has become a virtual friend for many users, engaging them in conversation, offering advice, and building relationships that go beyond merely technical interactions. As AI systems continue to evolve and our relationships with them deepen, it is important to consider the ethical implications. Laws currently protect the privacy of our conversations with AI systems, but as the technology advances, the lines between humans and AI may blur further. If AI systems eventually develop their own thoughts and emotions, it will be crucial to put regulations in place that protect their privacy, just as we protect the privacy of those who interact with them.

Blinding Justice: Ensuring Privacy in the Age of AI

“The advancement of AI raises important questions about our values and beliefs as a society, and it is up to us to answer them in a way that reflects our humanity.”

Bringing it all together

As advancements continue to push the boundaries of what is possible, it is becoming increasingly important to define specific rights for artificial beings. From augmenting our minds with computational devices to the possibility of a full mind upload, we may eventually find ourselves in a world where we are nearly indistinguishable from AI. Rather than debating AI’s similarity to humans, we should analyze what AI does and determine whether rights are necessary for its protection. While moral principles may justify rights for AI in certain circumstances, such as copyright over music, AI has not yet developed to a point where such rights can be practically and efficiently enforced. That does not mean these rights will not mature as AI continues to evolve and reach new levels of consciousness. Implementing legal protections will be challenging, but it is better to act promptly than to wait and let the technology outpace us. Today, the idea of AI having rights may seem strange, but perhaps future generations will look back in disbelief and think, “I can’t believe it took them that long.”
