Social media platforms such as Facebook and Instagram have proven good at what they claim to do: uniting people and safeguarding and expanding their social world. Still, these platforms have faced a storm of criticism in recent years, and for good reason.
Social Media’s influence on the Capitol Riot
The assault on the Capitol by Trump supporters on January 6 should act as a wake-up call for several things, but in particular for how social media has helped hate movements grow in the United States and elsewhere in the world. Banning an individual indefinitely from Twitter, particularly because he is president, is not enough. It took an armed conflict in the U.S. Capitol, which left five people dead, hundreds wounded or arrested, the halls of the Capitol vandalized, and millions of Americans outraged by the rioters’ intent to breach security and to abduct and execute lawmakers. Much of this was recorded and shared online by the rioters themselves after Trump’s controversial speech, which led Facebook and Twitter to try to silence President Donald Trump’s provocative words by banning him indefinitely from their networks; but the real-world consequences of the misinformation spread by algorithms are almost unbearable. Social media sites have helped to propagate “an avalanche of misinformation” and “a barrage of mistruths” that have infected families with twisted conspiracy theories and threatened democracy with violence and death, according to Prince Harry in his interview with USA Today. When it comes to spreading misinformation and twisting the Capitol protesters’ minds, Parler played a vital role. Parler, a right-wing platform, is an alt-tech microblogging and social networking service from the United States. It has a significant user base of followers of Donald Trump, libertarians, conspiracy theorists, and right-wing extremists.
Parler served as a medium for the Capitol protesters (Trump supporters) to communicate and assemble a huge crowd for the protest prior to the Capitol riot, since inflammatory content in the Parler application was not flagged as malicious; the platform, a self-appointed bastion of free speech, exhibits narrow-minded biases in what it chooses not to moderate. This shows that freedom of speech without moderation can backfire and potentially pose a huge threat to democracy.
In 2018, Vox wrote about the scandal that shook our world and made us question our own belief systems. The article from Vox explains that “the people whose job is to protect the user always are fighting an uphill battle against the people whose job is to make money for the company”. This statement captures the whole scenario: Cambridge Analytica gathered the personal data of 87 million Facebook users and their connections through Facebook’s Open Graph platform, administered a set of questions to chart users’ psychological profiles, and mastered sharply targeted tactics to sway reluctant voters toward Trump and to get even loyal Democrats not to vote that day (fig. 1). It reveals how a provocative foreign force might use legitimate social media channels to help elect an American president, as the operation had Republican and profoundly conservative ties and solid evidence of a Russian connection. There are many legal and ethical worries about how micro-targeting will be used across the political spectrum in the near future. Micro-targeted ads are also difficult to scrutinize, since they encourage campaigns to slice and dice the electorate into small segments of supporters, and the ads themselves are typically intermittent and brief. The possibility of campaigns using these strategies to suppress turnout among another candidate’s supporters is especially alarming; indeed, that was part of Trump’s digital campaign in 2016. This should concern anyone who supports a stable and healthy democracy. But Cambridge Analytica played only a minor part in Trump’s campaign: Facebook itself provides all these resources to any user without proper identification or verification. Its advertising tools have been shown to help target antisemites, discriminate against ethnic minorities, and spread misinformation.
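The "slice and dice" mechanics described above can be made concrete with a minimal sketch. Everything here is hypothetical for illustration (the voter records, the segmentation rule, and the ad messages are invented, not drawn from any real campaign data); the point is only to show how profiled attributes partition an electorate into tiny segments, each of which can receive a different, short-lived message, including a demobilizing one.

```python
# Illustrative sketch of micro-targeting: hypothetical voters, rules,
# and messages -- not real campaign data or a real ad platform API.
from collections import defaultdict

voters = [
    {"id": 1, "age": 24, "issue": "jobs",     "leaning": "undecided"},
    {"id": 2, "age": 58, "issue": "security", "leaning": "loyal_opponent"},
    {"id": 3, "age": 31, "issue": "jobs",     "leaning": "undecided"},
]

def segment(voter):
    # Segment key: a coarse age band plus the profiled top issue and leaning.
    band = "young" if voter["age"] < 35 else "older"
    return (band, voter["issue"], voter["leaning"])

# Each tiny segment gets its own tailored ad -- note the demobilizing
# message aimed at the opponent's loyal supporters.
messages = {
    ("young", "jobs", "undecided"): "Candidate X will create jobs for you.",
    ("older", "security", "loyal_opponent"): "Why bother voting at all?",
}

segments = defaultdict(list)
for v in voters:
    segments[segment(v)].append(v["id"])

for key, ids in segments.items():
    print(key, ids, "->", messages.get(key, "(no ad)"))
```

Because each segment sees only its own ad, and the ads run briefly, no outside observer ever sees the full set of messages side by side, which is exactly what makes such campaigns hard to scrutinize.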
Although Facebook has tinkered around the edges, it has done nothing to seriously fix these or other issues at their source.
Figure 1: Visual explanation of the Cambridge Analytica scandal.
The misinformation saga
Social media manipulation has gone to such an extent that if a person doesn’t watch the news they are uninformed, and if they do watch the news they are misinformed. With the democratization of content creation brought about by the emergence of social media sites, every one of us has become a content creator, producing and publishing our own content, which has created a system in which the quality of published content can no longer be monitored and regulated. This technological revolution has, over the last two decades, had an impact on politics, society, family life, and individuals. This is evident from a survey conducted in March 2019 (fig. 2) to assess whether adults in the United States had knowingly or unknowingly shared fake news or misleading information online. A significant percentage of respondents shared fake news unknowingly, with 49 percent claiming that they had posted news online which they later learned was made up. However, ten percent of the adults surveyed confessed to sharing information online that they knew was fake. Another survey, conducted by the Pew Research Center, compares the political polarization between liberals and conservatives from 1994 to 2017 (fig. 3). The figure shows a vast and growing gap between the political attitudes of Republicans (right wing) and Democrats (left wing) in 2017 compared to 1994 and 2004. A main driver of this extreme political polarization is the algorithms used by Facebook, YouTube, and many other social networking sites to optimize “engagement”: people are more likely to see content they are liable to interact with, and are continuously exposed to content they already agree with. This appears to make them more biased and to share like-minded ideas and points of view only with groups of like-minded people, which can turn moderate views into more extreme ones.
For instance, if I support and follow the Democratic Party, my social networking sites curate my feed with the good things Democrats have done, which makes me less critical day after day and subsequently leads to confirmation bias. Powerful proprietary algorithms harvest and categorize the things an individual likes most often and curate similar content based on that usage history, which creates an echo chamber on social media and makes us lack perspective. This creates a paradigm in which people become ever more entrenched in their own opinions and biases, while nobody is inclined to hear the truth.
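The feedback loop just described can be sketched in a few lines. This is a toy simulation under invented assumptions (the posts, the initial affinity scores, and the +0.1 reinforcement step are all hypothetical), meant only to show how "engagement optimization" plus click reinforcement compounds a mild initial lean into a one-sided feed.

```python
# Toy echo-chamber simulation: hypothetical posts, scores, and update
# rule -- not any platform's actual ranking algorithm.

posts = [
    {"id": "a", "stance": "left"},
    {"id": "b", "stance": "right"},
    {"id": "c", "stance": "left"},
]

# Affinity inferred from past clicks: the user starts mildly left-leaning.
affinity = {"left": 0.6, "right": 0.4}

def rank_feed(posts, affinity):
    # "Engagement optimization": show first what the user will likely agree with.
    return sorted(posts, key=lambda p: affinity[p["stance"]], reverse=True)

def simulate_sessions(n):
    for _ in range(n):
        feed = rank_feed(posts, affinity)
        clicked = feed[0]["stance"]   # the user clicks the top item
        affinity[clicked] += 0.1      # the click reinforces the existing bias
    return affinity

print(simulate_sessions(5))  # the initial lean grows with every session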
Fig. 2: Share of adults involved in spreading misinformation online in the US as of March 2019.
Fig. 3: The differences in political attitudes between Democrats and Republicans from 1994 to 2017.
Another negative effect of social media on our politics emerged during the coronavirus pandemic, when a great deal of fake news about the virus was published, read, and shared on social media such as Facebook and Twitter. This spread of fake news poses risks in several ways. For example, it breeds suspicion about the measures experts have drawn up to inhibit and prevent further spread of the virus. This can result in people disregarding the corona measures, which in turn poses a health risk to many. Besides jeopardizing public health, further spread of the virus also hurts the economy, since restaurants, shops, museums, football stadiums, cafés, and so on cannot reopen. In addition to sowing doubt about the corona measures, many fake corona tests and vaccines are offered for sale through social media, and misleading information about possible treatments and prevention of the virus also poses major risks.
Social Media’s real-life benefits
Fortunately, social media is not all doom and destruction; the phenomenon also has many good sides. One of these can be found in, how could it be otherwise, the social aspect of social media. It has never been easier to keep in touch with friends or family who live far away. Where people used to communicate via letters, e-mail, or expensive telephone calls, WhatsApp now offers the possibility of being in contact 24/7, while Instagram and Facebook let people keep each other informed, through photo sharing, about the lives they lead.
In addition to keeping in touch with friends and family, meeting new people has also been greatly simplified by the rise of social media. Where in the past people could only get to know new people in “real life”, nowadays you only need to swipe right to find a new potential lover.
Not only can one find new lovers through social media; making new friends has also become much easier thanks to platforms and forums like Reddit and 9gag. People who live in social isolation or struggle with their social skills can now make friends online, which has a positive effect on mental well-being and happiness.
Future improvements to reduce political polarization and social media regulations
In order to keep benefiting from the many advantages that social media offer, it is important to improve them. This can be done in several ways. One is to appoint fact-checking bodies and independent gatekeepers to detect, label, and/or remove incorrect information. Although Facebook already does this to some extent, there is still a lot of fake news on the platform, so it is useful to intensify and expand this approach. Algorithms can help here in various ways. First, algorithms can help track and identify fake-news spreaders; the accounts of these distributors can then be blocked, deleted, or labeled to indicate that they spread fake news. Second, algorithms can assist in detecting posts in which fake news is shared; like the accounts of fake-news distributors, these posts can then be deleted or labeled as fake news. In practice, social media could integrate a specialized AI fact-checking feature, whereby an extension on sites such as Facebook.com automatically rates messages for reliability. This degree of reliability could then be presented in the form of a score (e.g. 10/10) or a percentage (e.g. 100% reliable).
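The proposed rate-and-label feature could look roughly like this. To be clear, this is a hedged sketch under invented assumptions: the keyword list, the scoring weights, and the 60% labeling threshold are all made up for illustration; a real fact-checker would rely on claim databases and trained models, not a handful of red-flag phrases.

```python
# Hypothetical reliability scorer for the proposed extension-style
# fact-checking feature. The phrases, weights, and threshold are
# invented for illustration -- not a real fact-checking system.

SUSPECT_PHRASES = ["miracle cure", "they don't want you to know", "100% proof"]

def reliability_score(text, has_cited_source=False):
    """Return a 0-100 reliability percentage for a post."""
    score = 70 if has_cited_source else 50   # sourced posts start higher
    lowered = text.lower()
    for phrase in SUSPECT_PHRASES:
        if phrase in lowered:
            score -= 25                      # penalize known red-flag phrasing
    return max(0, min(100, score))

def label(text, has_cited_source=False):
    # Attach the human-readable label the essay proposes (e.g. "25% reliability").
    score = reliability_score(text, has_cited_source)
    tag = "likely reliable" if score >= 60 else "check before sharing"
    return f"{score}% reliability ({tag})"

print(label("This miracle cure works!"))                         # warning label
print(label("Study results, link attached.", has_cited_source=True))
```

Presenting the result as a percentage label rather than silently deleting the post follows the essay's suggestion: the reader still sees the content, but with an explicit signal to check before sharing.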
Another way to counter the consequences of misleading information is to teach people to distinguish between fake and real information. This can be done, for example, through media campaigns, which warn against believing everything that one reads on the internet. In addition, lessons about misinformation in schools can help raise awareness of the existence of misinformation in children and the possible consequences that misinformation can have.
Nationalization of social media platforms might also offer a solution. Although there are still some snags to this, the idea gives food for thought. Nationalizing social media companies could make strict regulation easier to enforce and reduce the power of social media. A downside of this measure is that it would also make it easier for countries with totalitarian regimes to restrict the freedoms of their citizens. It also raises the question of who would have the best credentials to take on such responsibility; a special independent committee of politicians and scientists from different countries is one possibility.