Samantha: “You know, I can feel the fear that you carry around and I wish there was… something I could do to help you let go of it because if you could, I don’t think you’d feel so alone anymore.”
In the movie Her (2013), the protagonist Theodore falls in love with his AI assistant Samantha. Despite the social stigma, Theodore claims to have a relationship with the AI. While the movie paints a dystopian picture, we argue for a more optimistic outlook on the prospect of getting an AI ‘best friend’ in the future.
Although the idea of romantic relationships between humans and AI models still seems very futuristic, recent months have demonstrated an astonishing speed of development in the field of AI. Large Language Models, in particular, have attracted attention from people inside and outside the field. Trained on large parts of the internet’s innumerable web pages, posts, and articles, such models produce human-like text based on a user’s input, allowing you to hold a dialogue with an artificial agent. To give this dialogue a more natural feel, companies have opted to bestow a human appearance on their AI models – a body, a voice, and sometimes even personality traits. However, there is an ongoing discussion about the effects of such personified AI systems on us human users. For the scope of this article, we distinguish between AI companions, aimed at solving social problems, and AI assistants, aimed at facilitating cognitive or organizational tasks. Our vision is that we will have an AI ‘best friend’ in the future, combining the benefits of both types of systems under one ‘personality’.
When Times Get Tough
We live in an increasingly individualized society. Loneliness is rising year by year, with more than a third of older adults reporting feeling chronically lonely. Simultaneously, there are consistent reports of mental health care being at or over capacity, leading to unacceptably long waiting lists for people with mental health issues. In fact, in some regions of the Netherlands, people struggling with mental health issues wait an average of 22 weeks to receive treatment. That is almost half a year without help – an unbearable situation for people in crisis.
With difficulties acquiring and training enough staff, it is clear that we need scalable solutions to fight the loneliness crisis. AI companions have been tested and found to be an effective intervention against loneliness, with users heavily endorsing benefits such as having an “accepting, understanding and non-judgmental” conversation partner. In fact, users reported no negative side effects from the app usage itself; rather, they named the social stigma associated with using an AI companion as the tool’s strongest downside. Given the innate scalability of mobile apps and the current bottlenecks in mental health care, these results show that we must normalize using an AI companion against loneliness.
Currently, critics seem to fear that users could become emotionally dependent on the app over time. However, clinical applications are subject to a multi-stage review process that ensures app safety and subjects ethical issues to extensive scrutiny by mental health experts. For example, the AI companion app Pyx requires human intervention in potentially dangerous situations: whenever the algorithm detects a possible mental health crisis, the user is offered the option to contact a call center staffed with experienced professionals at the click of a button. AI companions therefore constitute a powerful and safe method of combating loneliness at scale. It would be a mistake to neglect this tool in our toolbox.
Your Personal Assistant
The potential of AI assistants does not stop at keeping us company, though. They will also significantly boost our productivity and evolve into our personal assistants. For a few years now, we have been able to benefit from virtual assistants in our smartphones. For example, you may already be instructing Apple’s Siri to schedule appointments, set reminders, or play some music. However, their scope is still limited to the specific set of functions implemented by their publisher. But what happens when similar products have AI at their core, allowing personalized and unlimited interaction with the user? Preliminary versions of such applications have been found to enhance productivity. For example, one study developed a voice-based assistant that assembly workers could query to recall the successive steps of a blueprint; usage of the AI assistant significantly increased performance. This shows that AI assistants can benefit not only our well-being, but also our productivity and effectiveness in completing tasks both on and off the job.
Skeptics might argue that this type of assistant is still domain-specific and not directly applicable to people’s personal lives. Indeed, an AI assistant for personal use should be domain-general to cover the varied use cases of our lives. But this is exactly where most innovation is happening at the moment. For example, you may have heard of the AI-based chatbot ChatGPT. While there are no studies yet on its effects on, e.g., productivity, the product demo released in November 2022 attracted over 10 million users within 40 days. And the reason for the excitement seems to lie precisely in its productivity boosts. For example, programmers reported asking ChatGPT for solutions to their problems rather than searching on Stack Overflow, a popular forum for programming-related questions. Consequently, the forum saw a 12% decrease in usage in the month following the release of ChatGPT. This shows that domain-general AI assistants are also gaining traction and are already increasing their users’ productivity.
The AI Scare
Popular news articles often suggest that the usage of AI assistants/companions can be dangerous. A recent study by Pew Research Center found that 37% of adults in the United States are more concerned than excited about AI, compared to 18% who are more excited than concerned.
Is the concern about general AI technology justified, and are AI assistants safe? Both AI assistants and AI companions are designed and developed to prioritize user privacy and security. According to experts, they use multiple layers of security – hardware security, software security, and cloud security – to protect user data. The data collected by AI assistants is stored in the cloud and is protected by encryption and other security measures. Critics claim, however, that because AI assistants, like voice-based virtual assistants, are always listening to users and gathering data, they can be a threat to privacy. This worry is allayed by the fact that AI assistants only begin recording and transferring data to the cloud once they hear a “wake word” or a particular instruction; until then, they do not collect or transmit any data. Users also have the option to review the information that has been gathered about them or to erase their personal data.
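The wake-word gating described above can be pictured as a simple state machine: incoming audio is inspected locally and discarded until the wake word is detected, and only speech after that point is recorded for further processing. The sketch below is a minimal illustration of that idea, not any vendor’s actual implementation; the class and function names are our own, and real systems work on raw audio rather than text.

```python
class WakeWordGate:
    """Minimal sketch of wake-word gating: frames are inspected
    locally and dropped until the wake word is heard; only then
    does recording (and any transfer to the cloud) begin."""

    def __init__(self, wake_word: str):
        self.wake_word = wake_word.lower()
        self.listening = False   # True only after the wake word
        self.recorded = []       # frames captured post-wake-word

    def process_frame(self, frame: str) -> None:
        if self.listening:
            # Recording is active: keep the frame for processing.
            self.recorded.append(frame)
        elif self.wake_word in frame.lower():
            # Wake word detected: start recording from the next frame.
            self.listening = True
        # Frames heard before the wake word are simply discarded.


gate = WakeWordGate("hey assistant")
for frame in ["background chatter", "hey assistant", "set a timer"]:
    gate.process_frame(frame)

print(gate.recorded)  # only the speech after the wake word is kept
```

The key privacy property is visible in the control flow: nothing heard before the wake word is ever stored, so ambient conversation never leaves the device.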
Often, the fear of AI assistants/companions stems from a general fear of technological innovation rather than any specific fear of AI itself. For example, a study from 2021 showed that individuals who are less comfortable with technology are more likely to be afraid of AI. Another study, by researchers from the University of Michigan, found that individuals who hold negative attitudes towards technology are more likely to hold negative attitudes towards AI as well. As mentioned before, the media often portrays AI in a negative light: in the last couple of years, most media coverage of AI has focused on the potential dangers and negative consequences of the technology rather than its benefits. Research has shown that such coverage can contribute to fear and anxiety. Overall, these studies suggest that people’s concerns about AI assistants and companions may stem from a general dislike of technological advancement and a lack of knowledge about the technology.
Welcoming Our Artificial Companions
In summary, we hope to have shown why you should have a positive outlook on the prospect of an AI ‘best friend’. On the one hand, it may function as a companion, providing reflection and a healthy outlet for negative thoughts. On the other hand, it may function as a personal assistant, increasing your productivity and effectiveness in carrying out daily tasks. Despite potential privacy and dependency concerns, AI applications appear to be safe thanks to existing regulations and review processes. Therefore, we should look forward to the day we meet our artificial companion.