Social media doesn’t limit free speech, and neither will autonomous AI

Posting on social media is like visiting someone’s house

In the current digital age, social media plays a major role in the rapid spread of (mis)information. People all over the world use social platforms for countless purposes. Unfortunately, the spread of misinformation and hate speech is fuelling polarisation and division between opposing groups, such as political opponents. To counter this, platforms have increasingly begun regulating what is posted in order to enforce their guidelines. A recent example is the permanent ban of Trump’s Twitter account for violating those guidelines. Concerns have been raised about whether such actions limit the principle of free speech as formulated in the US Constitution. Even though the action was impactful, in our opinion Trump’s freedom of speech has not been harmed. We argue that regulation by social media companies of posts on their platforms does not limit the free speech of the user, and by extension, neither will autonomous AI systems that regulate posts on these platforms.

Social media platforms are companies like any other: they have terms and conditions to which you agree upon registering. The principle of free speech applies to restrictions imposed by governments, not companies. That means companies can enforce their guidelines as they see fit; they are in no way obliged to host your post. Compare it to visiting someone’s home: certain rules apply there, and if you do not adhere to them, the owners will make you leave (and possibly never let you come back), and they are fully within their rights to do so.

There are always consequences

Social platforms are run by the big tech companies: the largest, market-dominating companies in the IT industry. They include Amazon, Apple, Google, Twitter, Microsoft (LinkedIn) and Facebook (WhatsApp). Together, they hold the majority of the market share in the IT industry. This is evident nowadays, as you can hardly interact with an IT device without dealing with big tech in some way or another. One could argue that they hold a monopoly on the IT industry, in which they make all decisions regarding the market, their products, and what is and isn’t allowed. That, however, makes them judge, jury, and executioner. If they hold all of these roles and determine what we are allowed to post online, are they limiting our free speech?

To answer this question, we first need to pin down the exact meaning of free speech. We consider free speech the right to express yourself without facing consequences from the government. As we stated before, if you express an opinion that someone disagrees with, they are allowed to remove you from their house. The same applies to social platforms. Only if you express your opinion in your own house and the government holds you accountable for it, with any form of negative consequences, is your right to free speech violated. Now that we have established that platforms are not limiting your right to free speech and are allowed to remove your content, the actual question becomes twofold: are they violating any other rules by restricting content, and what happens when these companies employ more AI? Let’s investigate a case example.

Rules apply even to Trump

Trump’s active use of Twitter is the perfect example of why these restrictions and regulations are justified. Right from his inauguration, questions were raised about whether his tweets should be considered official White House statements. After all, he was using a company to reach his audience instead of the standard, official White House channels. He argued that Twitter’s actions amounted to censorship. However, an appeals court ruled in 2019 [1] that, because of the account’s official status, Trump himself was violating the Constitution by blocking numerous people, effectively limiting their access to official White House statements. This is, of course, problematically hypocritical: he labels Twitter’s actions as censorship while at the same time deciding for himself who gets to see his statements. Bottom line: Twitter decides, according to its guidelines and rules, what is shown on its platform, and even Trump has to deal with that. Eventually he didn’t, which got his account suspended. While his account was still active, some of his posts were automatically flagged, marked as unreliable, or even removed. Twitter had the right to show or hide those posts, as users agree to its rules upon registration. But what happens when posts are automatically removed by autonomous AI? How are we supposed to deal with AI controlling our posts?

Automated content regulation

In a New York Times article [2], the question is raised whether ethical AI systems are even possible. While concerns are being voiced, tech companies building AI systems are announcing their commitment to ethical standards. They are drafting corporate principles meant to ensure that their systems behave ethically in autonomous decision making, and they try to enforce these principles by appointing ethics officers and boards to review and manage them. The immediate responsibility lies with the companies producing the autonomous AI. This effectively means that the companies themselves are responsible for whatever their autonomous AIs do. If an AI were to go on a rampage and delete all kinds of posts, the tech company itself would be responsible. And since companies are free to (automatically) remove what they want, a computer can enforce the company’s guidelines and rules just as a human employee could.
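To make this concrete, here is a minimal sketch of what such automated guideline enforcement could look like. Everything in it is hypothetical: the classifier, the labels, and the thresholds are stand-ins for illustration, not the pipeline of any real platform. The design point is the middle branch: uncertain cases go to a human, so the company, not the machine, remains answerable for the call.

```python
# A minimal sketch of automated guideline enforcement. The classifier,
# labels, and thresholds below are hypothetical stand-ins.

def moderate_post(post_text: str, classifier, threshold: float = 0.95) -> str:
    """Decide what happens to a post, mirroring a human moderator's workflow."""
    # The (hypothetical) classifier returns a probability that the post
    # violates the platform's guidelines.
    violation_score = classifier(post_text)

    if violation_score >= threshold:
        # High confidence: the AI acts autonomously, just as an employee
        # enforcing the house rules would.
        return "removed"
    if violation_score >= 0.5:
        # Uncertain cases are escalated to a human moderator, keeping the
        # company (not the machine) responsible for the final call.
        return "flagged_for_human_review"
    return "published"

# Example usage with a trivial stand-in classifier.
if __name__ == "__main__":
    fake_classifier = lambda text: 0.99 if "hate" in text.lower() else 0.1
    print(moderate_post("I hate this group of people", fake_classifier))  # removed
    print(moderate_post("Lovely weather today", fake_classifier))         # published
```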

Not all autonomies are created equal

When we talk about autonomy in the decision-making process, we typically mean a different form of autonomy than the one we mean when talking about humans. Morin [4] describes autonomy as an ecological relation: a system is tied to its surroundings in order to act autonomously. In this context, autonomy is not meant as the ability to formulate goals and intentions, but rather as a complex interaction between human and nonhuman components. We do not expect cars, for example, to achieve any kind of moral autonomy, such as forming their own vision and working towards it, like we would expect from autonomous humans.

More and more decisions are being automated with artificial intelligence. If machines determine certain outcomes, how does that affect the power and control we as humans still have? If algorithms can make decisions, or influence the decisions that we make, how do we make sure we do not end up in a biased society? If machines severely influence our decisions, do we still have a true democracy? And by extension, free speech? Many more questions come to mind, but we will limit our scope to our initial topic of social media, free speech, and autonomous AI.

We already concluded that social media does not affect free speech. That does not mean, however, that platforms are, at least ethically, allowed to remove whatever they want. If they were to remove everything Trump-related and nothing Biden-related simply because they felt like it, this would still have no free speech impact, but it might well be unfair and even unlawful. Social media platforms should do everything they can to prevent the spread of fake news, calls to hatred, and other violations. They are also allowed to remove content that violates the rules, but they should not be allowed to cherry-pick content in order to steer democracy, for example by influencing elections. If autonomous AI were implemented as a content moderator, we would want it to be fair, consistent, and bound by the rules.

Bias, racism, and the problems ahead

According to Hartley [5], the British government used a racist algorithm to help determine the outcome of visa applications. Last week they announced that it “needs to be rebuilt from the ground up”. He wonders how the government got this algorithm in the first place, who designed it, whether anyone should be punished, and how we can prevent such cases from happening again. Where algorithms started as formulas in which weights were manually assigned by the company, they have grown into models that automatically consider thousands of variables and spit out the ‘optimum’. The outcomes were determined by datasets created in the past, by people who judged someone a threat solely on the basis of a characteristic such as the color of their skin or their country of origin. This ‘racist’ data was fed into the algorithm, and the algorithm made racist decisions. In the case of content regulation, who determines the optimum between removing and allowing posts? And if we decide to remove posts based on a trained model, how do we make sure the model is not biased? An algorithm trained on specific US data might, for instance, be biased towards removing content from Trump supporters rather than being objective and fair. We expect future trouble in this respect, and this was surely not the last trending news story about algorithmic bias.
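To make the bias concern concrete, a simple first audit is to compare how often a trained moderation model flags content from different user groups. The sketch below uses invented groups and numbers, and the disparity check loosely follows the demographic-parity idea from the fairness literature; it is not a method from Hartley’s article.

```python
from collections import defaultdict

# Hypothetical audit log: (user_group, was_flagged) pairs produced by a
# trained moderation model. Groups and numbers are invented for illustration.
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

# Compute the flag rate per group.
totals, flagged = defaultdict(int), defaultdict(int)
for group, was_flagged in decisions:
    totals[group] += 1
    flagged[group] += was_flagged

rates = {g: flagged[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap between groups is a red flag that the training data (or the
# model) encodes a bias and needs closer inspection.
disparity = max(rates.values()) - min(rates.values())
if disparity > 0.2:  # the 0.2 threshold is an arbitrary illustration
    print(f"Possible bias: flag-rate gap of {disparity:.0%} between groups")
```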

How do we prevent this from happening? In the paper ‘Getting into the engine room: a blueprint to investigate the shadowy steps of AI ethics’ [6], the authors state that such systems typically require three iterative stages: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. According to the paper, different ethical decisions have to be made in each phase, and questions are posed to help system developers clear up their own ethical biases. Currently there are no global protocols that ensure companies follow these iterative phases properly, but companies should already be looking into possible solutions to prevent upcoming issues, preferably before they end up on the front page of the digital newspaper like the British government did.
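These three stages map naturally onto a machine learning pipeline. The sketch below is our own hypothetical way of structuring them, not a protocol from the paper: it simply attaches an explicit checkpoint to each stage, a place where the ethical decisions made there are documented for later review.

```python
# A hypothetical structuring of the three iterative stages named in
# Rochel and Evéquoz [6], with a checkpoint per stage where the ethical
# decisions made there are recorded for review.

def checkpoint(stage: str, decisions: dict) -> None:
    """Record the ethical decisions taken in a stage for later review."""
    print(f"[{stage}] decisions under review: {decisions}")

def prepare_data():
    # Stage 1: selection and preparation of the data.
    checkpoint("data", {"source": "moderation logs",
                        "known risk": "historical over-flagging of some groups"})
    return ["example post 1", "example post 2"]  # placeholder dataset

def configure_model():
    # Stage 2: selection and configuration of algorithmic tools.
    checkpoint("model", {"model family": "text classifier",
                         "why": "interpretable enough to audit"})
    return lambda text: 0.5  # placeholder model

def fine_tune(model, data):
    # Stage 3: fine-tuning parameters on the basis of intermediate results.
    checkpoint("tuning", {"metric": "flag-rate parity across groups",
                          "max allowed gap": 0.2})
    return model  # placeholder: no real tuning happens here

data = prepare_data()
model = configure_model()
model = fine_tune(model, data)
```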

Lessons, takeaways, and more questions

We have discussed whether your free speech is restricted when your posts are deleted or your account is banned. Even though social media platforms play an ever-growing role in public discourse, they remain private companies, and the principle of free speech only protects you from censorship by governments. Social media platforms are therefore fully within their rights to delete your post, as they have absolutely no obligation to host and display it. We do, however, raise concerns about the fairness of content regulation by these platforms. They are still obliged to contribute to society in a fair way and to prevent bias. If companies train algorithms to automatically regulate content, they have to prevent any bias in those algorithms. We expect this issue to become bigger and more relevant in the years to come; unfortunately, proper measures tend to be taken only after some serious hits, and multiple companies will surely make the front pages before that happens.

We have raised a lot of questions, some with answers, and we hope you feel inspired to think about these issues, as they are likely to become more apparent in the near future. That said, if anyone ever states that their constitutional right of free speech is limited by social media, you can simply tell them that is not true, because you read so on the internet. You can provide them with a nice analogy about being removed from someone’s house, and you can inform them that some biased AI probably unfairly removed their very precious and unique opinion. So, in summary: do you have any constitutional free speech right to air your opinions on any social media platform? The answer is no.

Thank you for reading,
Demian Voorhagen and Marijn van Rijswijck

References

  1. Savage, Charlie. “Trump Can’t Block Critics From His Twitter Account, Appeals Court Rules.” The New York Times, 9 July 2019, www.nytimes.com/2019/07/09/us/politics/trump-twitter-first-amendment.html.
  2. Metz, Cade. “Is Ethical A.I. Even Possible?” The New York Times, 1 Mar. 2019, www.nytimes.com/2019/03/01/business/ethics-artificial-intelligence.html.
  3. Yampolskiy, Roman V. “Could an Artificial Intelligence Be Considered a Person under the Law?” PBS NewsHour, 7 Oct. 2018, www.pbs.org/newshour/science/could-an-artificial-intelligence-be-considered-a-person-under-the-law.
  4. Morin, Edgar. Method: Towards a Study of Humankind, Vol. 1: The Nature of Nature. Peter Lang, 1992.
  5. Hartley. “The Ethics of Algorithms: It’s Not the Algorithm’s Fault That Society Is Racist.” Medium, medium.com/swlh/the-ethics-of-algorithms-1c69b87a656.
  6. Rochel, J., and F. Evéquoz. “Getting into the Engine Room: A Blueprint to Investigate the Shadowy Steps of AI Ethics.” AI & Society, 2020, doi.org/10.1007/s00146-020-01069-w.
