Why the use of AI in politics should be restricted and regulated
The prospect of AI working for us and enriching our lives is not a new one. Research on the topic has been published over the years, sketching the possibilities of AI [a, b]. Not all predictions have come true (so far), but it is notable how positively these articles project a future with AI. AI has come a long way, and many products now use it in some form: Google navigation, games like Call of Duty, and virtual assistants such as Alexa, Google Assistant, and Siri. However, the positive attitude towards AI has been tempered somewhat. Although the capabilities of AI have grown closer to the expectations of forty years ago, the enthusiasm for it has not, and not without reason. The drawbacks of AI are becoming clearer: big tech companies using AI for their own gain instead of for the good of consumers and society, its capabilities being used to sway people’s opinions and sow division, and so-called fake news being created with AI. Social media companies use AI to keep users hooked, earning revenue from personalised advertisements built on private information.
The use of AI in politics can be problematic. Like any other stimulus, AI can be used to influence opinions, including political ones. The problem arises when political information is personalised for each individual, as social media companies already do. This can create a false sense of complete information: while it might seem you are making a well-informed choice, it is actually one fuelled by one-sided information. The platforms on which these advertisements appear are private companies that, in their current form, are not regulated the way newspapers or television networks are. Besides arguing why AI in politics is a problem and should be regulated, we will also propose some solutions that can improve the balance between AI that is useful to society and AI that is harmful to it.
The term for personally targeted and adjusted advertisements is micro-targeting. Big tech companies use large amounts of data to determine which messages and advertisements may interest a social media user. Political parties use this technique as well, in the form of political micro-targeting: a marketing strategy that uses internet data to personalise political messages. For instance, parties can target Facebook users based on location, behaviour, and demographics. People can be influenced more easily with tailored messages, and a specific target group can be reached. This is useful for political parties because it allows them to reach the people who are considering voting for them. A student, for example, will be shown an advertisement in which parties make promises about the student grant, while a worker who is about to retire will receive messages about improving retirement regulations. The messages sent to different groups can differ from one another and can even contradict each other: parties show the political views that are interesting to each specific citizen. This technique is now used worldwide; political micro-targeting played an important role in the Brexit referendum and the 2016 United States elections.

Political micro-targeting may not sound alarming at first glance, but there are real dangers to this marketing strategy. Because of it, voters mainly see information that matches their existing opinions. They no longer see messages they disagree with, nor the general standpoints a political party is fighting for. Through micro-targeting and algorithms, people mainly see the points of view that are relevant to them, which may not be the main points of the party.
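The targeting logic described above can be sketched as a simple rule-based selector. This is a hypothetical illustration of the idea only; real ad platforms use far richer data and learned models, and the attribute names and messages here are our own invention:

```python
# Hypothetical sketch of political micro-targeting: pick a tailored
# campaign message per user based on demographic attributes.

def pick_message(user: dict) -> str:
    """Return the campaign message most likely to resonate with this user."""
    if user.get("occupation") == "student":
        return "We will raise the student grant."
    if user.get("age", 0) >= 60:
        return "We will improve retirement regulations."
    return "Vote for a fairer future."  # generic fallback message

voters = [
    {"name": "Anna", "occupation": "student", "age": 21},
    {"name": "Bert", "occupation": "worker", "age": 64},
]
for v in voters:
    print(v["name"], "->", pick_message(v))
```

Note that nothing in such a selector prevents the messages shown to different groups from contradicting each other, which is exactly the danger described above.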
Political micro-targeting can mislead voters into choosing a party they may not actually have the most in common with, but rather the one that merely pushed the right buttons. It can also breed distrust in government: parties make promises in automated campaign advertisements that they never intend to keep, leaving voters misled and disappointed. Ultimately this may make politics more and more individualistic. The Dutch politician Thorbecke once said that politics should serve the common good: it is not about what each individual wants for themselves, but about what society as a whole needs.
Due to these algorithms, people increasingly see information that matches their own opinion. As a result, everyone increasingly lives in their own bubble, leaving less understanding for people with different views. Anyone with a dissenting opinion can now find enough like-minded people on the internet; on social media platforms they can discuss their political dissatisfaction and ideals, substantiating their own worldview. Often these groups hold extreme views. Because they only encounter content that confirms their beliefs and nothing that contradicts them, it becomes harder to relate to divergent opinions. This can eventually lead to polarization, which, according to research, has already occurred on social media platforms over the last ten years.
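The feedback loop behind such a bubble can be illustrated with a toy simulation. This is not any real platform's algorithm; it assumes, for illustration, that a recommender only shows content close to a user's current stance and favours the most extreme item among those, and that the user's opinion drifts toward what they see:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def recommend(opinion: float, items: list) -> float:
    # The "bubble": only content close to the user's current opinion is shown.
    bubble = [s for s in items if abs(s - opinion) < 0.3]
    if not bubble:
        return opinion  # nothing close enough to show; no drift this step
    # Among matching content, the most extreme item wins (outrage engages).
    return max(bubble, key=abs)

def simulate(opinion: float, steps: int = 50) -> float:
    for _ in range(steps):
        # Candidate content spans the political spectrum [-1, 1].
        items = [random.uniform(-1, 1) for _ in range(10)]
        shown = recommend(opinion, items)
        opinion += 0.1 * (shown - opinion)  # opinion drifts toward what is shown
    return opinion

start = 0.2  # a mildly partisan user
end = simulate(start)
print(f"opinion drifted from {start:+.2f} to {end:+.2f}")
```

Even starting from a mild stance, the loop pulls the simulated opinion steadily toward the extreme, which is the self-reinforcement described above.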
Experts specialising in disinformation and fake news have advocated restricting or even banning political micro-targeting. They argue that when false information is displayed to a wide audience rather than a specific sub-group, its falseness is noticed more rapidly. Research has shown that voters are more strongly persuaded by political advertisements that match their own personality traits; a specific target group is thus more likely to believe incorrect information, as it likely matches the opinions they already hold.
Several companies have already taken responsibility when it comes to micro-targeting. Google, for instance, has limited political micro-targeting. Facebook has not banned micro-targeting, but it is concerned about its influence on politics. In November 2020 Facebook decided to temporarily stop private political groups from promoting themselves, so users no longer see recommendations for political groups on their profile. This measure was made permanent after United States senator Ed Markey wrote a letter to Mark Zuckerberg calling Facebook groups ‘venues for coordination of violence’ and ‘breeding grounds for hate’. Plans to invade the Capitol were made on Facebook and other social media websites. This shows that algorithms can influence democracy in a negative way.
Deepfakes and Bias
Micro-targeting techniques can strengthen the effects of a deepfake. The word deepfake is a combination of deep learning and fake. In a deepfake video, someone appears to say something they never actually said: using deep learning, AI studies videos of how a person looks and sounds while speaking, and then generates a scene that never happened. Deepfakes can harm elections in different ways. They can cause confusion, as it is no longer clear to voters which videos are real, and politicians can in turn claim that real videos are deepfakes. A deepfake can influence your opinion of a politician, or even a whole political party. One deepfake of Nancy Pelosi made her look like a stammering, drunk old lady; the video drew more than 2 million views and was shared by Donald Trump. Because of the authority of the president’s office, his sharing of the video may lead people to think it really happened. When deepfakes are used within micro-targeting strategies, they can contribute to the blurring of the truth and the creation of alternative facts and fake news within sub-groups. Videos can be created to fit the narrative of the people being targeted, regardless of their veracity. Left unregulated, the line between truth and nonsense may become indistinguishable.
One last factor to consider with AI-powered tools that can influence our behaviour is that these systems are designed by people. Those people can be biased and may, intentionally or unintentionally, shape the behaviour of the systems. As these systems are mostly built for commercial purposes they are often closed source, which makes it difficult to determine whether and to what extent bias occurs. Nonetheless, it remains possible. The effects AI can have on our behaviour and on how we perceive the world are a particular problem for democracies. Democracies are built on the idea that the ruling power lies with the people, who elect the representatives who rule in their place. The power lies with the many, not with the few, and most definitely not with non-elected individuals. Yet these AI systems are controlled by a handful of non-governmental companies and developed by an even smaller group of individuals working at those companies. This does not fit the idea of power to the masses. These companies are also often focused not on catering to the people but on making as much money as possible, and they have shown that as long as money can be made by manipulating the truth and feeding political polarization, they will continue to do so. Regulations created by elected governments should therefore be put in place to prevent this.
Why regulation may not be necessary
On the other hand, there are also arguments for why such tools and systems should not be regulated. For example, it may be difficult to properly enforce the rules, as these companies often operate internationally, making national regulation hard to apply. However, Matthijs Pontier, a member of the Dutch party the Piratenpartij, noted that with international agreements signed by a majority of countries, even countries that had not signed tended to become less inclined to violate them. Perhaps a shared sense of agreed moral correctness can create moral rules that are not violated even when they are not official rules. Furthermore, national laws can enable tariffs against foreign countries that violate them, which may make even countries that did not sign an agreement more inclined to adhere to it.
Another argument against imposing rules on the development of AI-powered systems, or on their use in political settings, is that it may limit freedom of speech, regarded as one of the most important rights in a democracy. There is, however, a difference between personal freedom of speech and the use of private data combined with AI to create a personalised advertisement that might not even portray the party’s true position on a topic. Furthermore, many countries already regulate political advertisements in other media such as radio and television. In the UK and Ireland, paid political advertisements on these media are completely forbidden, and in many other countries, with the exception of the United States, political advertisements are heavily regulated [a, b]. The aim is to ensure that such messages inform voters rather than mislead them, and to prevent wealthy groups from drowning out the valid arguments of less wealthy groups or shifting the public debate.
Solution & Conclusion
To counteract the negative consequences that can occur when AI is used to promote political parties, various approaches can be taken. We consider a few key factors to be important. First of all, transparency: it must be clear to people when an advertisement has been adapted to their preferences or when a video is a deepfake. This can be done with the help of a logo or warning message in the advertisement or video, for which national and international regulations are needed. Before such regulations can become reality, a committee of experts has to be set up. In our view, a complete ban on the use of AI in a political context is too strong a measure, since AI can also be useful. However, some countries already prohibit television advertisements for political parties during elections, so a ban certainly remains a possibility in the future. Secondly, awareness of micro-targeting, deepfakes, and algorithms must be created. People need to become aware that not everyone gets to see the same information. This message can be spread through books, television programmes, or on the social media platforms themselves.
All in all, we do not want our politics to be influenced by AI, nor the power to lie in the hands of the wealthy and the big tech companies. Without proper measures in the field of AI, this fate will be hard to avoid. But if the proper measures are taken, we will be able to form our real opinions and beliefs. And perhaps we will do it better than before.