NO ONE SHOULD TRUST AI

The study of Artificial Intelligence, begun by engineers working on cutting-edge technology, has evolved over the years into a substantial multidisciplinary research area that attracts scientists from all over the world. Within this field, there is now immense interest in the user’s perspective. A recurring finding across research papers is that a critical factor in the successful use of automated systems is trust that these systems will work properly and effectively. This concept is said to play an essential role in many contexts and to be multidimensional. The biggest issue, however, still unresolved after all these years, is that no single consistent definition of trust has emerged. Scientists from Florida State University and the University of Minnesota rightly point out that each discipline views trust from its own unique perspective. What is more, a bibliographic study from 2007 identifies a number of papers that address the topic of trust but define it in terms specific to their own disciplines (sociology, psychology, ethics, politics, information systems, organisational behaviour, strategy, and marketing).

We observe that the topic of trust remains underexplored, and therefore unclear and underexplained. Yet official documents such as recommendations and ethical guidelines stubbornly mention the need to build trust in AI without ever actually defining it. The aim of this essay is therefore to discuss whether we should even consider trusting an artificial entity such as AI. We believe that the terminology used in such documents contributes to the anthropomorphisation of AI, implying the possibility of forming a relationship with AI that resembles human interpersonal relationships.

In support of our view, we found a rather critical paper arguing against the guidelines for trustworthy AI prepared by the European Commission’s High-Level Expert Group on AI (HLEG). In that work, Mark Ryan argues that “AI cannot be something that has the capacity to be trusted”. While this may sound controversial, he manages to expose the fragility of the concepts of trust and trustworthiness when applied to AI.


Before we begin, though, it is crucial to distinguish between narrow AI and general AI, a distinction that the vast majority of popular press articles gloss over. Reading most of the news, it is not hard to see that journalists avoid getting into the technical details of AI. The theory behind this technology is challenging to comprehend due to its mathematical complexity. Additionally, AI is in a dynamic phase of development, with new products and innovations deployed on a daily basis. Its current state of affairs therefore creates a problem even for scientists designing research, as the field lacks a clear framework and an agreed set of definitions. These issues might be why most online articles describe AI without distinguishing between its different types, and thus end up treating AI as if it were already a singularity. This approach has a severe downside: a generalised system able to operate at (or above) the level of human cognitive abilities does not exist and is still speculative. Narrow AI can be defined as a system designed for a specific task, while general AI is expected to overcome that limited scope of application: when presented with an unfamiliar task, an artificial general intelligence should be able to find a solution without human intervention. Having considered these issues, we focus on narrow AI, as all the products currently on the market are specific to their domain.

Trust and Reliance

It’s important to distinguish between the notions of trust and reliance. While most philosophers agree that trust is a specific kind of reliance, the actual differentia specifica is still subject to discussion. According to the philosopher Mark Ryan, there are five dimensions on which to analyse trust/reliance relations: the confidence that the trustor has in the trustee, the competence that the trustor believes the trustee possesses, the vulnerability to which the trustor is exposed due to the trustee’s actions, the capacity of the trustor to feel betrayed by the trustee, and finally the basis of the trustee’s motivation to fulfil their obligation to the trustor. Ryan states that the first three dimensions are sufficient for a reliance relation to occur. The latter two serve as the differentia specifica, distinguishing the types of reliance that constitute trust.

The exact formulation of the trust-specifying conditions depends on the philosophical stance one is willing to take. This essay focuses on three prevailing accounts of trust: the rational account, the affective account, and the normative account. We aim to analyse how each of them corresponds to AI systems. We believe that trust is not a relation that pertains to algorithms, neural networks and other AI systems: we will show that neither the affective nor the normative account can accurately describe the type of relationship we form with AI. The rational account, on the other hand, is so minimalistic that it is hard to call the notion it defines trust rather than mere reliance.

The Rational Account

The rational account of trust states that trust occurs when the trustor, based on her knowledge, makes a logical choice to place her confidence in the trustee. It doesn’t consider the trustee’s motivation, nor does it require the trustor to be exposed to betrayal.

While the rational account, due to its lack of focus on the trustee’s motivation, seems like a promising candidate for capturing the relations we form with AI, it is not clear how it differs from the notion of reliance.

One can easily see that rationally relying on someone isn’t enough to talk about trust. Consider a peace treaty between two hostile countries. If the treaty is well-written and solid, it is rational for both countries to place their confidence in the other party. However, this is still a hostile relationship, and both countries refrain from breaking the bond only because of the treaty; their motivation is selfish. We wouldn’t call that relation trust, yet following the rational account, one would have to. We believe that the rational account waters down the notion of trust, reducing it to mere reliance.

The Affective Account

According to the affective account of trust, the trustor places confidence in the trustee’s goodwill. It is described as the “expectation that the one trusted will be directly and favourably moved by the thought that we are counting on her”. These emotions should form the basis of the trustee’s motivation not to breach the trust the trustor has placed in her.

Today’s AI doesn’t have the capacity to consciously experience emotional states (or any states, for that matter). While some AI systems have their algorithms enhanced with artificial emotion components, they cannot feel emotions; they merely model or “read” them. A vital distinction to bring up here is that between the psychological and the phenomenal mind, introduced by David Chalmers. According to the philosopher, there are two aspects of the mind: the psychological mind contains everything needed to explain one’s behaviour and can be characterised by what a mind does, whereas the phenomenal mind is responsible for what a mind feels. To illustrate this concept, consider a robot that decides to move because the temperature around it is low. While the reading from a temperature sensor made the robot change its location, no feeling was associated with this process, as there would be if we substituted a human being or an animal for the robot.
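To make the distinction concrete, here is a minimal Python sketch. It is purely illustrative and describes no real robot: the sensor reading, the threshold and the function names are our own assumptions. The point is that the robot’s “decision” is nothing more than a rule applied to a number; there is no phenomenal experience of cold anywhere in the process.

```python
# Hypothetical sketch: a robot "deciding" to relocate when it is cold.
# The whole decision is a threshold check on a sensor value, i.e. the
# psychological mind (behaviour) with no phenomenal mind (feeling) at all.

COLD_THRESHOLD_C = 10.0  # assumed threshold, chosen only for illustration


def read_temperature_sensor() -> float:
    """Stand-in for a hardware sensor; returns a fixed reading here."""
    return 4.2  # degrees Celsius


def decide_next_action(temperature_c: float) -> str:
    """Map a sensor reading to an action via a simple rule."""
    if temperature_c < COLD_THRESHOLD_C:
        return "relocate_to_warmer_area"
    return "stay"


if __name__ == "__main__":
    reading = read_temperature_sensor()
    print(decide_next_action(reading))  # prints: relocate_to_warmer_area
```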

That being said, there is little evidence that an AI system could exhibit goodwill, be moved, or have any other emotional reaction to the trust placed in it. Since this forms the basis of the affective account, we can safely say that this account is not well-suited to describe the relationship we develop with AI.

The Normative Account

The normative account of trust is based on the normative obligations that the trustee has towards the trustor. The normative aspect of this account implies that the trustee can be held responsible for breaching the trust. But can AI actually be held responsible for anything?

Philosophers agree that the prerequisite for responsibility is intentionality: the ability of the mind and its mental states to “be about” something, to represent or to be directed at another object. While the discussion around this issue is convoluted, the prevailing view is that only intentional agents can be considered moral agents and be ascribed moral responsibility.

In contemporary philosophy of mind, intentionality is deeply connected to consciousness. When one is conscious of something, there must be a mental state corresponding to that conscious experience, and that state is in turn “about” what is experienced.

We believe that today’s AI systems possess no consciousness and no intentionality. Therefore, it’s impossible for them to be held responsible for anything. Since the capacity to be held responsible forms the basis of the normative account, we can conclude that this account doesn’t capture the type of relation we tend to form with AI systems.

A careful reader might object that while the AI itself cannot be held responsible for the actions it brings about, there is undoubtedly someone or something that can and should be held responsible for them. We agree that moral agents should be held responsible: not the AI systems, however, but their developers, users, and legislators.

Misplaced Trust

All of the preceding discussion brings us to the principal claim of this article: trust in AI is misplaced. While we think the term “trust in AI” should be avoided, we acknowledge the need to encourage the adoption of this technology so that we can benefit from it. However, in the current situation, where AI is implemented and used to recommend or even decide in an insufficiently transparent way, it may pose various threats. These are relatively harmless when we’re talking about an AI playing chess, but in other situations they “may be very problematic, even life-changing or lethal”. Mark Coeckelbergh gives a couple of examples of what could really go wrong in the near future. Think of a doctor who is unable to justify the wrong diagnosis recommended to him by an intelligent system; think of a judge who was advised to prolong prison time; or think of the “737 max pilot who does not know why the aeroplane’s advanced autopilot system keeps pushing the nose down in spite of his efforts to take control of the aeroplane”. If we genuinely want responsible AI, then it needs to be able to explain its decisions to someone. All of this, to answer the simple question: why?
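What “explaining a decision” could look like in practice can be sketched very simply. The Python fragment below is a hypothetical illustration, not a description of any real diagnostic or judicial system: the feature names and weights are our own assumptions. A simple linear scorer reports, next to its output, how much each input contributed, which is the minimum a human would need in order to start answering the question “why?”.

```python
# Hypothetical sketch of a decision with an attached explanation.
# Weights and feature names are invented for illustration only.

FEATURE_WEIGHTS = {
    "age": 0.02,
    "blood_pressure": 0.03,
    "prior_condition": 1.5,
}
BIAS = -4.0


def score_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return a raw risk score together with each feature's contribution."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value for name, value in features.items()
    }
    score = BIAS + sum(contributions.values())
    return score, contributions


if __name__ == "__main__":
    patient = {"age": 67, "blood_pressure": 145, "prior_condition": 1}
    score, why = score_with_explanation(patient)
    print(f"risk score: {score:.2f}")
    # List contributions from most to least influential.
    for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {contribution:+.2f}")
```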

This is not (only) a problem for philosophers, but for everyone who is going to use or develop this technology, and it urgently needs more conceptual work. One of the threats we’re talking about here is potentially problematic over-reliance on AI (resulting from misplaced trust) or, even worse, “a morally blameworthy attempt to offload responsibility”. As we stated earlier, part of the responsibility should fall on the developers, who ought to be aware of possible mistakes, alternative uses and misuses of their product. Unfortunately, as Coeckelbergh rightly points out, there is often a long causal chain of human agency in the development of such products. We’re talking about very complex software that may have a long history, with many developers involved at various stages for different parts of the product. Traceability, according to him, is crucial to operationalise responsibility and explainability.
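As a rough illustration of what operationalised traceability might involve, the sketch below logs every automated decision together with its provenance, so that the humans involved can be identified later. All field names (model version, training-data version, responsible team) are assumptions chosen for illustration rather than any established standard.

```python
# Hypothetical sketch: an append-only audit log that ties each automated
# decision to the artefacts and people behind it.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_version: str          # which model produced the decision
    training_data_version: str  # which data the model was trained on
    responsible_team: str       # the humans accountable for this deployment
    inputs: dict
    output: str
    timestamp: str


def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one decision record to a JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    log_decision(DecisionRecord(
        model_version="risk-model-1.4.2",
        training_data_version="claims-2023-10",
        responsible_team="clinical-ml-team",
        inputs={"age": 67, "prior_condition": 1},
        output="refer_to_specialist",
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
```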

On top of that, we think it’s important to underline that, so far, AI has not met the traditional criteria for complete moral agency, such as freedom (or free will) and consciousness. Therefore, it also cannot be held responsible. Thus, our only option is to make humans responsible for all the actions the technology brings about.


Another problem revolving around trust in AI is the growing trend of anthropomorphism in AI. Researchers increasingly develop and study human-like features of AI products, since it has been confirmed that a technology’s human likeness influences how users interact with it. In fact, making an AI product’s actions resemble human behaviour mitigates individuals’ anxiety and stress. While this sounds relatively positive, as it contributes to the adoption of AI in society, we believe it may also cause unwanted adverse effects. One that especially springs to mind is blinding people to the limitations of AI, making them trust something substandard (thus creating overtrust, which can lead to over-reliance). Additionally, anthropomorphism can spark unrealistic expectations, possibly making people less interested in using a technology that does not work the way they thought it would (thus impairing the adoption of the technology). Furthermore, researchers point out that “anthropomorphism in AI risks encouraging irresponsible research communication”. What they mean is that scientists, through the exaggerated hopes intrinsic to anthropomorphic language (talking about computers in psychological and neurological terms), create an image of AI technology as if it already essentially functioned like the human brain.

So, overall, if the assumptions hold that trust presupposes responsibility and that anthropomorphisation leads to overtrust, then indeed…

Trust is misplaced.
