Ban YouTube’s Recommendation Algorithm

Fight the spread of misinformation and increase users’ well-being

Introduction

YouTube is the world’s second-most visited website, right after its parent company, Google. Whether for work or leisure, YouTube is a great tool for online education, entertainment, or learning about cultures and countries on the other side of the world. The success of YouTube depends on the number of hours users spend on the platform. To increase the revenue it earns from selling advertisements, YouTube has various ways of increasing users’ watch time. One of these is the recommendation algorithm: a system that suggests a collection of personalized videos based on a user’s previous activity on the platform, which incentivizes them to watch more videos. Without noticing, users spend hours on YouTube simply by clicking on the next interesting video that appears on their screen. In some ways, we are not even in control of what content we see.

In this paper, we discuss how YouTube’s recommendation algorithm can negatively influence our society and our personal lives. As a result of this algorithm, each user is directed to different content based on their interests, and in this way YouTube’s system creates a divide in society. In addition, the recommendation algorithm undermines users’ sense of agency, which leads to a decrease in personal well-being. We believe that the recommendation algorithm should be banned on YouTube. To support our position, we first describe the mechanism behind YouTube’s recommendation algorithm and then elaborate on two of its main destructive consequences: it divides our society through the spread of misinformation, and it reduces users’ well-being.

YouTube’s Recommendation Algorithm

YouTube’s recommendation algorithm thrives on two factors. First, YouTube wants you to spend as much time on the platform as possible, so it recommends videos you are likely to watch next. Second, the system guesses what you would like to watch next and then shows you increasingly extreme content to keep you interested.

This system is very advantageous for YouTube. The algorithm is highly profitable, so the company has no incentive to stop using it. With the recommendation algorithm, YouTube encourages users to spend more time on the platform, since its profits originate from selling advertisements: the more hours users spend on YouTube, the more money is made from ads. YouTube made $5.56 billion in advertising revenue in the United States in 2021. People do not just watch one video and leave the platform. The average user checks 9 pages per visit and spends an average of 41.9 minutes on the platform per day.

In 2015, Google rebuilt the recommendation algorithm around a neural network, a system loosely modeled on the human brain. The algorithm tried to find relationships between videos that humans would not spot, based on what you were likely to watch for as long as possible. However, this bored users, because it recommended videos similar to the ones they had already watched. Therefore, a new implementation based on reinforcement learning was introduced, with the goal of maximizing user engagement. This update was intended to make users watch even more videos by suggesting content that expands their taste. Videos were not recommended because they were good or reliable, but simply because users were likely to watch them.
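To make this mechanism concrete, below is a minimal illustrative sketch of an engagement-maximizing ranking step. This is not YouTube’s actual code: the Video fields, the scoring formula, and all numbers are our own hypothetical stand-ins for what a real system would learn from billions of watch histories.

```python
# Hypothetical sketch of engagement-maximizing ranking (not YouTube's code).
# The objective rewards predicted watch time plus a small bonus for novelty
# ("expanding taste"); nothing in it checks accuracy or reliability.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # model's guess of how long this user watches
    similarity_to_history: float    # 0..1, how close to previously watched content

def score(video: Video, novelty_weight: float = 0.3) -> float:
    """Expected engagement, with a bonus for content unlike the user's history."""
    novelty = 1.0 - video.similarity_to_history
    return video.predicted_watch_minutes * (1.0 + novelty_weight * novelty)

candidates = [
    Video("More of the same", 8.0, 0.95),
    Video("Slightly more extreme take", 9.5, 0.70),
    Video("Fact-checked explainer", 4.0, 0.40),
]

# The highest-scoring candidate wins the recommendation slot: here the
# "slightly more extreme take" ranks first purely because it keeps the
# user watching longer, which is the essay's core concern.
for v in sorted(candidates, key=score, reverse=True):
    print(f"{score(v):6.2f}  {v.title}")
```

Under this toy objective, the slightly more extreme video outranks both the familiar one and the reliable one, which is exactly the dynamic described above.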

YouTube’s Recommendation Algorithm Leads to the Spread of Misinformation

Design mechanisms like recommendations exploit psychological vulnerabilities to increase users’ engagement, leading to longer watch time. This design also facilitates the spread of misinformation, which we elaborate on with the following four claims.

First, an MIT study found that misinformation is 70% more likely to be shared than accurate information. The algorithm pushes people towards more extreme opinions by showing them increasingly extreme content in order to hold their attention. Nonsense content is highly engaging; therefore, YouTube has no internal motivation to stop using recommendation algorithms that promote it, since more screen time means more money from advertisers. A YouTube spokeswoman said that “the US test of a reduction in conspiracy junk recommendations has led to a drop in the number of views from recommendations of more than 50%”. Considering YouTube’s 2 billion monthly users, the reach of misinformation on the platform is enormous.

Furthermore, there are examples of the spread of misinformation in several countries, including Germany, Spain, Brazil, the Philippines, and Taiwan. Estimated Time of Arrival, a Canadian social support program funded by Public Safety Canada, works with individuals referred by police services, and its staff identified a correlation between young boys spending more time online and the adoption of increasingly violent ideologies.

Third, fact-checking platforms confirm the problematic consequences of the rabbit hole: the process of watching endless YouTube videos. Tijana Cvjetićanin, co-creator of the fact-checking platforms Istinomjer and Raskrinkavanje, says YouTube’s audience is pretty universal; people from all parts of society can easily be led down the rabbit hole if they search for a keyword that triggers the algorithm to suggest misleading content. An analyst at Moonshot, a counter-violent-extremism firm, tracked incel behaviour on platforms like YouTube and said: “We should not be discouraging people from searching for or wanting to find out more information about how to date or how to have sexual or romantic relationships. The problem is that a young man searching for that content is very quickly and very easily pulled down that rabbit hole”. This illustrates exactly why we should ban the recommendation algorithm on YouTube: people should be able to search for videos on the topics they like, but they should be protected from being pulled down the rabbit hole.

Lastly, former employees of Big Tech companies have stepped forward as whistle-blowers about the negative impact of attention-keeping algorithms. Frances Haugen, who previously worked at Facebook as a data scientist, revealed that Facebook was aware of the harmful consequences of keeping users on its platform for excessive amounts of time, yet chose to keep the algorithm as it was to protect its profits: once people spend less time on the website, fewer ads are clicked and less profit is made. In other words, the company acknowledged the problem but chose not to solve it.

Similarly, YouTube acknowledges that there is content on the platform that crosses the line of what is acceptable within its policy boundaries, referring to examples such as flat-earth beliefs, 9/11 conspiracies, and quackery, and that it should do more to maintain a recommendation algorithm that does not promote disinformation. A former Google engineer who went public about the negative impact of the algorithm described it as “toxic”.

On a contrary note, YouTube executive Ben McOwen Wilson thinks “YouTube does the opposite of taking you down the rabbit hole”; instead, he claims, it is designed to broaden your horizon by showing you content other than what you are searching for. Similarly, a study by Ledwich and Zaitsev, which examines the influence of YouTube’s algorithm in proposing extremist content, contradicts the allegation that YouTube encourages radicalization. They tracked the algorithm’s traffic flows between roughly 800 political channels and concluded that YouTube’s recommendation system actively discourages users from viewing extreme or extremist content and instead appears to recommend politically neutral mainstream media. However, this study only worked with anonymous users: the authors did not have access to individual user accounts to track the recommendations over time. Therefore, the study does not accurately represent real-life, long-term users. Given that YouTube stimulates people to spend a long time on the platform, the algorithm changes along the way based on watch history. With so little information available on the algorithm’s track record for frequent users, we have to discount Ledwich and Zaitsev’s conclusion that YouTube prevents the rabbit hole.

A way to prevent the rabbit hole would be fact-checking. Since the dangers of the rabbit hole concern radicalisation and a division in society over the truth, a fact-checking system should help reduce these disastrous consequences by reducing the spread of misinformation. Over 80 fact-checking organizations signed an open letter urging YouTube to take stronger action against misinformation on its platform, and to do so together with independent fact-checking organizations. The letter suggests solutions to the spread of misinformation, such as promoting fact-checked information, providing context, and calling on YouTube to disclose its moderation policy regarding disinformation.

However, fact-checking has two indisputable problems. First, fact-checking is very labor-intensive. Carlos Hernández-Echevarría, the public policy and institutional development coordinator at the Spanish fact-checking organization Maldita.es, says fact-checking YouTube is extremely difficult because its content consists entirely of video, and more than 500 hours of new content are uploaded every minute. It is therefore simply not realistic to implement fact-checking on YouTube. Second, YouTube should not have the authority to decide what is true or false. We might be comfortable with YouTube ruling on whether the Earth is flat, but what qualifies YouTube to decide on political matters, for example? Even if a fact-checking system were possible, the debate on letting a Big Tech company have the authority to decide what to show would be never-ending and inconclusive.
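A rough back-of-envelope calculation makes the scale of the labor problem concrete. The upload rate is the 500-hours-per-minute figure cited above; the review assumptions (watching at normal playback speed, eight hours per day) are ours and are deliberately generous.

```python
# Back-of-envelope estimate of the labor needed to fact-check YouTube.
# Assumptions (ours): reviewers watch at 1x playback speed for 8 hours/day.
# Upload rate: the widely cited 500 hours of new content per minute.

UPLOAD_HOURS_PER_MINUTE = 500
REVIEWER_HOURS_PER_DAY = 8

uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24      # hours of video per day
reviewers_needed = uploaded_per_day / REVIEWER_HOURS_PER_DAY

print(f"Uploaded per day: {uploaded_per_day:,} hours")            # 720,000 hours
print(f"Full-time reviewers to watch it once: {reviewers_needed:,.0f}")  # 90,000
```

Even under these assumptions, roughly 90,000 full-time reviewers would be needed just to watch every upload once, before any actual verification work begins.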

YouTube’s Recommendation Algorithm Leads to a Decrease in Well-Being

Besides the alarming societal effects of YouTube’s recommendation system, its effects on individual users can also be unfavorable. This is mainly because the algorithm aims to maximize watch time, and studies show that people who spend more time on social media score lower on well-being. The following studies elaborate on how the algorithm negatively influences its users’ well-being in this way.

First, a study by Wirtz et al. showed that an increase in time spent on social media increases participants’ feelings of loneliness. Furthermore, Brooks concluded that heavier personal social media usage leads to distraction from everyday tasks, higher levels of techno-stress, and lower happiness. Lastly, a study by Lukoff et al. showed that people report feeling less in control of their online behavior, and that recommendation algorithms in particular make users feel their sense of agency decline, which leads to negative effects such as sleep deprivation.

In contrast, instead of banning the recommendation algorithm, YouTube could offer the option to disable it. In China, new regulations that restrict the use of recommendation algorithms are coming into effect. They demand more transparency about how the algorithms function, aim to give users more control over which data companies can feed to the algorithms, and require algorithms to operate for ethical purposes. Under such a regime, the average time spent on YouTube would go down, and with it the negative consequences for users’ well-being. However, this is not a very realistic prospect: even if users had the option to choose more transparency or to disable recommendations, they would likely not do so. Recommendation algorithms exploit the vulnerabilities of the human mind, so people might not be able to give up the recommendations precisely because of the algorithm’s addictive nature.

Moreover, social media does not only have negative effects on the well-being of its users. On a positive note, Wirtz et al. also found that social media can enable direct social interaction with others. When people use social media sites for direct interactions and social connections, such platforms can have a positive effect on well-being. So social media features that facilitate this, such as posting reactions, should be kept to increase well-being. Nevertheless, YouTube’s recommendation algorithm is not aimed at direct social contact but at indirect contact: the recommendation system creates no connection between the creator of a video and the person watching it. Thus, it does not belong to the category of social media features that positively impact users’ well-being through direct social relationships. YouTube’s recommendation system is a social media feature that distracts from making direct social contact, and it should be taken down.

Conclusion

To summarize, we presented two main arguments in favor of banning YouTube’s recommendation algorithm. First, a ban would decrease the spread of misinformation: the system aims to increase users’ watch time by guessing what they would like to watch next and, to hold their attention, shows them increasingly extreme content. Second, users’ well-being would no longer decline: studies show that extensive time spent on social media increases feelings of loneliness and decreases users’ sense of agency, especially when social media features do not involve any form of direct interaction with others, as is the case with recommendations.

Looking at the prospects of the recommendation algorithm, we see a few possible future scenarios. In the scenario where no policies are made for the recommendation system, nothing will change; considering the disastrous consequences, this is not a favorable outcome. Regulating the use of the algorithm is also not a preferable option, for two reasons. First, letting YouTube implement a fact-checking system is very labor-intensive and raises ethical problems. Second, giving users the option to turn off the recommendation algorithm will not sufficiently reduce the effects we described, since users will not turn the feature off due to its addictive mechanism. Therefore, we believe the recommendation algorithm should be banned as a whole. People should be encouraged to search for information close to or far from their own expertise, but no one should be pulled into a rabbit hole.
