Human-like artificial systems should not be the end goal: views on intelligence explosion and humanoid robots.

Artificial intelligence, as we know it, is the branch of computer science that deals with simulating intelligent behavior in computers. AI methods are used in many fields today; they show abilities such as speech recognition and planning, and handle specific tasks that were once thought to be reserved for humans. So far, we have modeled the development of AI on human thinking and appearance. This has always seemed the most natural choice: we are the most intelligent species we know, so we are the best example to emulate when building something equally intelligent. And as AI grew more capable, we began giving it the visual form of a human. But is this really the best way forward? In this article, we will discuss the potential risks of continuing to think of AI as a human-like product.

We would like to take you with us into a possible future. It is 2052. A month ago, your company decided to cut its human staff and replace it with an AI that performs faster and more efficiently. You have been told that your presence is obsolete because machines can now work without any human oversight. Your life seems to have come to a standstill, so you decide to see a therapist. Walking down the street, you are surrounded by humanoid robots strolling in and out of offices in full regalia. You enter the therapist’s office for the first time and are astonished to discover that behind the desk sits a humanoid. This is the world you live in now: a world where AI has taken over most human jobs, walks alongside you in the street, and is evolved enough to provide mental health treatment for distress it has itself caused. It looks like one of those movies we are so used to, perhaps less engaging, perhaps more real.

The scenario just described is a futuristic version of what is nowadays known as the “technological singularity” or “intelligence explosion”. Vernor Vinge coined the term back in 1993 when he presented the underlying idea of creating intelligence, and Priyadarshini and Cotton (2020) give a self-explanatory definition. It describes a situation in which AI is believed to be capable of self-improvement, or of building machines smarter and more powerful than itself, until it surpasses human control or understanding: ordinary human intelligence would be enhanced or overtaken by artificial intelligence. It is a future we find difficult to examine with a critical eye because it seems dystopian and, if not unreal, at least very distant. But is it really so far away?

Can AI even think like humans?

Let’s say the technological singularity is the direction in which we are headed. As things stand, it is difficult to think of AI as being able to think like a human being, and there are many reasons for this. Even Deep Neural Networks (DNNs), the algorithms that most closely resemble a human brain, do not represent the world as we do. They are very easy to fool: even the smallest amount of noise can trick them into classifying a panda as a gibbon. They learn, and apply what they have learned, in a different way. DNNs need an enormous amount of data to train and engage in shortcut learning: they pick up statistical associations that produce correct answers, but sometimes for the wrong reason, because, unlike us, they lack a good model of what matters. The huge number of training trials lets them perform very specific tasks, yet they fail at what is called transfer of learning, the ability to use previously acquired knowledge and skills in new learning or problem-solving situations. They are still only as good as their training material. There is another reason why building human-like AI might not be the way to create a super-intelligence: even human intelligence is limited, and we still do not know everything there is to know about it.
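To make the panda-to-gibbon example concrete, here is a minimal sketch of the Fast Gradient Sign Method (Goodfellow et al., 2015), the classic attack behind that example. It assumes a pretrained PyTorch image classifier; the input tensor below is a random stand-in for a real preprocessed photo, so treat this as an illustration of the technique rather than a reproduction of the original result.

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM). Assumes a
# pretrained torchvision classifier; the "panda" tensor is a random
# stand-in for a real preprocessed photo.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is tiny, but because it is aligned with the
    # gradient it is often enough to flip the predicted class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

panda = torch.rand(1, 3, 224, 224)   # stand-in for a photo of a panda
label = torch.tensor([388])          # ImageNet class 388: giant panda
fooled = fgsm_attack(panda, label)
print(model(fooled).argmax(dim=1))   # may no longer be class 388
```

The point is not the specific classes involved, but that a perturbation imperceptible to us can reliably change the network’s answer, which is hard to reconcile with human-like perception.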

Why should we not want them to think like humans?

Although the technological singularity might seem a distant future, its many risks must be taken into account, and in this article we will deal with those that, in our opinion, are the most relevant.

Technological unemployment

One of the biggest risks is that AIs will overtake us in every field, to the point where humans become replaceable and thus useless. Technological unemployment caused by AI is already being discussed; there are indeed positions in which intelligent systems are displacing humans, e.g. in manufacturing or service delivery. For the time being, however, humans and machines do not really seem to be in competition with each other. The reason lies in the different types of intelligence involved: what AI lacks is intuition, emotion, cultural sensibility, and the ability to imagine, anticipate, feel, and judge changing situations. From this perspective it seems impossible, doesn’t it? But what would happen if machines acquired these abilities? Particularly the ability to reprogram themselves in response to unforeseen events, and to apply problem-solving more efficient than ours because it can weigh several options at once?

Battle of resources and self-preservation

With awareness of themselves and of their relationship with humans might come awareness of the world around them and of the impact humankind has on it. It is no mystery that we are putting the planet we live on at risk, undermining not only our own existence but also that of all living species and the ecosystems in which they live. If we are determined to bring into this world an intelligence on a par with our own, we must accept the risk that this will be no mystery to it either, and that it may want to act on it; it may conclude that eradicating human life is the only reasonable move to preserve the earth.

Risks of Emotional AI

In the current state of affairs, emotional AI refers to the branch of AI that learns to interpret and respond to human emotions. We are already aware of some of the biases and risks of this technology: it appears to ignore a growing body of evidence that basic facial expressions are not universal across cultures. As a result, those AIs risk being unreliable or discriminatory. In the future, with the advance of the technological singularity, we may come to define emotional AI as technology that is able to feel emotions, as only we can now. And however hard we strive to program it to feel only love towards us, the level of autonomy we are aiming for also entails the risk of negative and destructive emotions, just as in us. Where that could lead is a risk we must ask ourselves whether we are willing to take.

These are only three of the risks on which we have chosen to express our opinion, but there would be much more to say. Leaving the risks aside, one question we can ask ourselves is whether we should continue to think of AI as a tool that does better something we already do well, therapy for example. Perhaps we should instead start thinking not only about our own value, and the ways in which we might be irreplaceable, but also about how to be better inhabitants of this planet.

Why should we not want them to look like humans?

One of the great promises of robotics is that robots can provide pleasure, comfort, and even some form of companionship to people who cannot fully participate in society. The main reason for building robots that look like humans is therefore social interaction. But is it even possible to create human-like robots? How can we define human likeness, and how do we perceive it? Ishiguro & Nishio (2017) argue that three aspects are required for a robot to be perceived as human: human-like skin and surface, human-like movement, and human-like perception, so that it interacts with the world as a human does. A robot designed to move, look, and communicate like a human is called a humanoid robot. Today’s humanoid robots are powered by artificial intelligence and can hear, speak, move, and respond. They use sensors and actuators and have features that mimic human body parts.

Think back to the scenario about the year 2052 that we presented earlier. How would you feel if you saw a humanoid robot as a therapist? Would you feel happy or scared? Would you trust this machine or not? The perception of an agent is subject to several effects, namely the uncanny valley effect, anthropomorphism, and the Eliza effect, which shape how people react and feel around robots.

Effects on perceiving a humanoid

The uncanny valley is a phenomenon in human reactions to artificial forms with almost, but not quite, human characteristics. Mori (1970) was the first to describe it: the relationship between human-likeness and people’s reaction to non-human agents is not linear but valley-shaped. According to his framework, increasing the human-likeness of an agent triggers positive reactions only up to a certain point, after which further human-likeness triggers negative reactions until the agent becomes indistinguishable from a real human, which in turn triggers positive reactions again, creating a deep valley. Recent work, such as Urgen et al. (2018), attributes the uncanny valley effect to early expectation violations when the brain encounters almost-but-not-quite-human agents, such as mismatches between appearance and motion, conflicting visual and auditory cues, or task-relevant violations. The “Eliza effect”, named after a computer program that mimicked a psychotherapist, refers instead to the tendency to attribute intelligence to responsive computers. It can be seen as a type of anthropomorphism, which Epley (2018) defines as “the perception of human-like features in non-human actors” (p. 591). Anthropomorphism occurs naturally and automatically in response to a variety of stimuli: a human-like appearance evokes a human schema, and human-like behaviors lead to attributions of a “mind”. According to Aggarwal & McGill (2007), anthropomorphism occurs more frequently when the robot is equipped with human-like features, such as a human face. Anthropomorphism has also been identified as a strong determinant of perceived trust.
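As a rough illustration of Mori’s idea, the sketch below draws a purely schematic affinity curve. The formula is our own invention, chosen only to reproduce the valley shape described above; it is not fitted to any data.

```python
# Schematic rendering of Mori's (1970) uncanny valley: affinity rises
# with human-likeness, collapses for almost-but-not-quite-human agents,
# then recovers near perfect human-likeness. The formula is invented
# purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)             # 0 = machine, 1 = human
affinity = (np.sin(likeness * np.pi / 2)          # gradual rise
            - 1.5 * np.exp(-((likeness - 0.85) ** 2) / 0.003))  # the dip

plt.plot(likeness, affinity)
plt.xlabel("Human-likeness")
plt.ylabel("Affinity (schematic)")
plt.title("Schematic uncanny valley (after Mori, 1970)")
plt.show()
```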

With these effects in mind, Bartneck et al. (2004) formulated guidelines for so-called social robots. The authors emphasize how important it is that the form of a robot fits the task for which it was developed. For example, a humanoid robot is usually expected to have robust speech recognition capabilities, and users are often confused when their expectations are not met. A biomorphic form, such as an animal, may be better suited to set realistic expectations of a robot’s capabilities. Furthermore, a humanoid robot is more likely to be perceived as a life-like being. As Bartneck, Kanda et al. (2007) put it: “Being alive is one of the major criteria that distinguish humans from machines, but since humanoids exhibit life-like behavior it is not apparent how humans perceive them” (p. 300). A vivid example is the humanoid Sophia, who was made an honorary citizen of Saudi Arabia in 2017, which was unexpected even for her creator David Hanson. Another example comes from Will Jackson (CEO of Engineered Arts), who reports their humanoid being treated as part of a group even when switched off.

However, while anthropomorphised robots appear more alive, they can also be perceived as a threat. Older people in particular seem to be most afraid of human-like robots, and many studies suggest that higher perceived human-likeness and mental abilities are associated with negative emotions in this group. In addition, an analysis of likeability by Zlotowski et al. (2015) showed that a more machine-like robot is liked better than a highly human-like one. While the approach of building human-like companions to enrich human life seems intuitive, in reality it comes with many caveats. It would be very difficult to find the sweet spot in human likeness, especially for the population groups that would profit most from a humanoid robot. Other designs could therefore be more purposeful (e.g. a smart home, or an AI with a biomorphic form).

Having human features is not efficient for task-oriented AI

However, there are many other areas outside of social interaction in which robots will be able to help shape our lives in the future. Our thought experiment opened with the loss of a job to a machine not only because it is striking, but also because it is urgent. Robots could also make our lives easier by relieving us of repetitive, boring, or dangerous tasks at work or around the house.

In this sense, we would build AI systems more like tools than agents. And since artificial general intelligence is still a long way off, optimizing narrow AI seems the right way to go about it. In 2021, Tesla announced a humanoid robot called Tesla Bot that uses the same (narrow) artificial intelligence as Tesla’s autonomous vehicles. The company is taking its AI system and planting it in a human-like robot (rather than a car) to replace human labor, especially for repetitive and boring tasks, and potentially to end the need for humans to work for a living unless they want to, or at least to realize the dream of an AI robot helper for every household.

Unfortunately, the human body is not well suited to this endeavor. To automate a task, it often makes more sense to “robotise” the task itself than to build a human-shaped agent to perform it. It is not a humanoid that vacuums my floor; the vacuum cleaner itself is the robot. A recent example is a robot that can switch between spherical and cylindrical shapes for medical applications such as navigating the digestive system. Work has also been done on shape-changing robots that adapt their shape and behavior to changes in the environment or terrain.

Ethical considerations

All the situations described entail ethical considerations as well. Firstly, as already shown, the way people react to robots is massively influenced by their appearance. Robots that look like humans are deceptive, which runs contrary to the principles of robotics. Or as Alan Winfield puts it: “Robots are manufactured artifacts. They should not be deceptively designed to exploit vulnerable users; instead, their machine nature should be transparent”.

Secondly, for the reasons mentioned above, people tend to respond to computers and robots as if they were responding to other people, which means that there is a bias associated with perceived gender. For example, women are expected to show experience and warmth, while men are expected to show agency and competence. Considering that the male or female gender of a robot creates different expectations in terms of agency and communication, this can reinforce existing gender biases. This is already a concern for voice assistants (like Siri or Alexa). To cite a UNESCO report from 2019: “Dominant models of voice computing are crystallizing conceptions of what is ‘normal’ and ‘abnormal’. If the vast majority of AI machines capable of human speech are gendered as young, chipper women from North America (as many are today) users will come to see this as standard”. Machines that reproduce patriarchal ideas contradict the promise of technology to contribute to gender equality. According to Saran & Srikumar of the World Economic Forum, designs for autonomous systems should be informed by a multi-ethnic, multi-cultural and multi-gender ethos. Artificial intelligence and its advancement must serve much larger populations and access to benefits must be available to all.

Conclusion

So how did our story about the year 2052 end? Hard to say. But in a world where human-like AI rules our lives, the space left for humans shrinks. We do not know the real impact of continuing down this path, but we can think about how to keep the situation from deteriorating. We still have one major advantage: we get to build the AIs. The challenge of our time is to ensure that we develop AI that not only protects our human values but also fits into our society. We therefore propose a shift in research away from the development of human-like AI and humanoid robots, along with international regulation of super-intelligent AI and comprehensive education on robotics and its applications. We need to stop thinking of AI as a human-like product and start thinking about a different kind of intelligence, because AI will never truly be, or look, human, and for a human-like AI to be perceived positively, a match of mind and body would be required. Nevertheless, we should continue to develop intelligent tools that help us in our daily lives; optimizing narrow AI in non-human form might be the better way.
