The threat of AI to democracy and what we should do about it

New achievements in the field of artificial intelligence (AI) continue to present humanity with extraordinary challenges. AI is already influencing our democratic processes, and this influence will become even more significant in the future.

Researchers in the field of AI are currently making remarkable progress. Yet when talking to researchers working in the field, we often hear that it will be a long time before the big promises of AI are fulfilled: fully autonomous vehicles, artificial general intelligence (AGI), or even human-like thinking. Look closely, however, and you can see millions of small advances happening now that add up over time, leading to systems that can autonomously make many decisions at once. At first glance, these advances may not seem as exciting as predictions of an AGI. But viewed from a higher level, we are clearly moving towards a situation where systems make decisions for us. So we have to ask ourselves what would happen if, one day, these systems pushed aside human strategy in favour of something completely unknown to us.

“At least when there’s an evil dictator, that human is going to die. But for an AI, there will be no death — it would live forever. And then you would have an immortal dictator from which we could never escape.”

Elon Musk

Since 2012, for example, the principle of “deep learning” has permeated the world of artificial intelligence. Researchers have largely abandoned the old way of programming AI by hand and switched to deep learning, as it works far more efficiently than previous approaches. Many expect that in the future, supercomputers will surpass human capacity in nearly all domains. Technology figures such as Elon Musk of Tesla, Bill Gates of Microsoft and Apple co-founder Steve Wozniak have warned that superintelligence poses a real risk to humanity, potentially more dangerous than nuclear weapons. The way society and the economy are constituted will change with the widespread adoption of artificial intelligence. This brings extraordinary opportunities, but also significant threats.

Nations such as China, the United States and Singapore are already using far-reaching data collection to monitor their citizens. In Singapore, it began with protecting residents from terrorism, but soon the technology was influencing economic and migration policy, the property market and schools. The impact of AI is not limited to individuals. Democratic processes face major challenges, because the use of algorithms and artificial intelligence in government affairs brings new opportunities but also great dangers. Until now, citizens elected government representatives who they believed would best represent their interests and perspectives. Today, politicians study voters’ perspectives and adjust their own accordingly. By mining social media such as Facebook or Twitter, candidates use AI to gather information about voters’ leanings and better predict their behaviour. Democracy could be strengthened by the use of artificial intelligence, but misuse, for example undermining the election-based system, could also have major negative consequences.

What is Democracy and what is AI?

In order to understand how AI can influence democracy, it is important to define these two terms. The term “artificial intelligence” was first coined by John McCarthy in 1956, when researchers from various disciplines came together to clarify and further develop the previously quite divergent concepts surrounding “thinking machines”. Today, AI is defined as a sub-field of computer science and describes how far machines can imitate human intelligence (be human-like rather than become human). The Oxford Living Dictionary gives the following definition of AI: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
The goal of AI is to mimic human intelligence through machines. In contrast to conventional computer programs, AI algorithms are sets of instructions designed to solve specific problems and to learn without external input. Given their progress and potential applications in almost all areas of life, these advances are poised to drastically change the world order, and humanity’s essential task is to reinforce their benefits and mitigate their harms.

“A great democracy must be progressive or it will soon cease to be a great democracy”

Theodore Roosevelt

When researching what exactly democracy means, the Greek meaning of “rule of the (common) people” soon comes up. The ancient “democracies” in Athens and Rome are precursors of today’s democracies and, like them, arose as a reaction to excessive power and its abuse by rulers. However, it was not until the Enlightenment (17th/18th century) that philosophers formulated the essential elements of a modern democracy: separation of powers, fundamental rights / human rights, religious freedom and separation of church and state. Abraham Lincoln defined it as “government of the people, by the people and for the people”. Scholars such as Mannheim (1998) consider this definition inadequate: individuals cannot participate directly in government, but they can express their aspirations at certain times, which in his view is sufficient for democracy. As early as ancient Greece, democracy was understood as the rule of the people, ensuring equality and rights and guaranteeing direct participation in the affairs of the state. A further characterisation of democracy is provided by Agi (2000), who distinguishes between direct and representative democracy.
According to Agi, one speaks of direct democracy when all citizens are directly involved in making the laws and take turns in executing them, as practised in ancient Greece, for example. In a representative democracy, on the other hand, the laws are not made and enacted by all individuals; rather, individuals are elected to take on this task.

How AI is a threat to democracy

One might think of the events in Poland and Hungary when discussing threats to democracy. But Cambridge Analytica’s role in the 2016 American elections was also a threat to democracy: the company targeted American voters with biased information to persuade them to vote for Donald Trump. Similarly, the Canadian company AggregateIQ influenced the 2016 Brexit referendum in favour of leaving the European Union. These cases make clear how the use of AI can hinder people from making autonomous decisions, undermining democracy.

“The use of AI obstructs autonomous decision making. This disturbs democracy and rule of law”

Andrew Murray

Some might say: you are not forced to believe what you see online; it is your own responsibility to look for accredited sources. However, this misses the point. Your Google results for a given topic might not be the same as mine; the data you receive online is not objective. Google knows who you are and alters its content accordingly. This applies not only to online shopping or the weather forecast, but to political issues and news content too. AI hides information from us and presents us with what we want to see. Citizens are powerless in this situation because companies know they can hide behind soft self-regulation; no law prohibits these activities. Our autonomy is threatened: our opinions and beliefs are echoed back to us through algorithms, reinforcing our thoughts without a critical note.
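The reinforcement loop described above can be made concrete with a minimal, hypothetical recommender: it ranks articles purely by their overlap with what a user has already clicked, so content that challenges the user’s existing views is systematically pushed down. All article titles, tags and the click history below are invented for illustration.

```python
# Minimal illustration of a filter bubble: rank items by overlap with past clicks.
# All articles, tags, and the user's click history are hypothetical.

def rank_articles(click_history, articles):
    """Score each article by how many tags it shares with past clicks."""
    seen_tags = set()
    for clicked_tags in click_history:
        seen_tags.update(clicked_tags)
    scored = [(len(seen_tags & set(tags)), title) for title, tags in articles]
    # Highest overlap first: the feed echoes what the user already reads.
    return [title for score, title in sorted(scored, reverse=True)]

history = [{"party_a", "tax_cuts"}, {"party_a", "immigration"}]
articles = [
    ("Party A promises tax cuts", {"party_a", "tax_cuts"}),
    ("Party B climate plan", {"party_b", "climate"}),
    ("Party A on immigration", {"party_a", "immigration"}),
]
feed = rank_articles(history, articles)
print(feed)  # articles matching the user's existing leaning rank first
```

The opposing party’s article ends up last not because it is less relevant to a voter’s decision, but simply because it does not match past behaviour, which is the echo-chamber dynamic in miniature.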

And AI is not only influencing individuals. Take the 2014 welfare scandal in the Netherlands. The Dutch House of Representatives called for stricter fraud prevention in the Dutch welfare system, prompted by cases of Bulgarians wrongly receiving welfare through fake Dutch addresses. The tax authorities implemented algorithms designed to track down fraudulent citizens. While the system became stricter and fraud was detected more often, the algorithms turned out to be biased and unjustly designated certain individuals as fraudulent. These individuals turned out to all have dual nationality: the algorithms treated dual nationality as a risk factor for fraud, even though these people had done nothing wrong. The algorithm discriminated against a particular group of individuals, with grave consequences for the people involved, including huge penalties and legal complications. Here, AI influenced not only the victims but the policy-making of an entire institution, ultimately even leading to the resignation of the cabinet. The Dutch welfare scandal teaches us that human intervention in the use of AI is necessary.
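The mechanism behind such a biased system can be sketched in a few lines. In this deliberately simplified, hypothetical risk score, dual nationality is included as a weighted feature, so an otherwise identical applicant is pushed over the fraud threshold purely because of that attribute. The features, weights and threshold are invented and do not reflect the tax authority’s actual model.

```python
# Hypothetical fraud risk score: including a protected attribute as a
# weighted feature flags one group more often, with no wrongdoing involved.

WEIGHTS = {"address_mismatch": 0.5, "late_filing": 0.2, "dual_nationality": 0.4}
THRESHOLD = 0.5

def risk_score(applicant):
    """Sum the weights of all features present for this applicant."""
    return sum(WEIGHTS[f] for f, present in applicant.items() if present)

def flagged(applicant):
    return risk_score(applicant) >= THRESHOLD

# Two applicants identical in every respect except nationality:
a = {"address_mismatch": False, "late_filing": True, "dual_nationality": False}
b = {"address_mismatch": False, "late_filing": True, "dual_nationality": True}
print(flagged(a), flagged(b))  # False True: b is flagged purely by nationality
```

Nothing in applicant b’s behaviour differs from applicant a’s; the discriminatory outcome is baked in at design time, which is why it can go unnoticed until the harm is already done.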

Monitoring citizens through AI is not only seen in the Netherlands. Numerous governments have invested heavily in surveillance infrastructure, installing millions of cameras across their countries. These cameras cannot be watched by humans every second of the day; this is where AI comes in, with algorithms able to process and analyse every frame in real time. China, for example, is considered one of the biggest players in mass surveillance technology. The Chinese government has even deployed AI-based technologies in large-scale crackdowns in regions populated by ethnic minorities. Right now, China is using camera footage analysed by algorithms to track members of Muslim minorities. The algorithms link camera footage to online activity, phone calls, text messages and banking information in order to identify suspicious behaviour. It is not clear to citizens what behaviour is regarded as suspicious, giving the government power over every behaviour shown, targeting minorities and disadvantaging them in daily life.

Another concern is that AI, and more specifically algorithms, do not apply context to their task. This lack of context is evident in LawTech, where algorithms predict the chance of winning a lawsuit based on similar lawsuits in the past. Lawyers can use this technology to advise their clients whether or not to litigate. However, context matters: the algorithm could advise against proceeding even though the specific case at hand, unlike the past lawsuits it superficially resembles, would in fact have been won. Again, human intervention turns out to be important.
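The context problem can be illustrated with a toy nearest-neighbour predictor of the kind such tools are often built on: it takes the majority verdict of the most similar past cases, so a case that differs from precedent in a legally decisive but unrecorded way still inherits its neighbours’ outcome. The case features and outcomes here are entirely invented.

```python
# Toy k-nearest-neighbour outcome predictor: majority vote of the most
# similar past cases. All features and outcomes are invented.

def predict_outcome(case, past_cases, k=3):
    """Predict 'win'/'lose' from the k past cases closest in feature space."""
    def distance(a, b):
        return sum((a[f] - b[f]) ** 2 for f in a)
    nearest = sorted(past_cases, key=lambda c: distance(case, c[0]))[:k]
    wins = sum(1 for _, outcome in nearest if outcome == "win")
    return "win" if wins > k / 2 else "lose"

# Past cases: (features, outcome); features are claim size and evidence strength.
past = [
    ({"claim": 1.0, "evidence": 0.2}, "lose"),
    ({"claim": 1.1, "evidence": 0.3}, "lose"),
    ({"claim": 0.9, "evidence": 0.1}, "lose"),
    ({"claim": 0.2, "evidence": 0.9}, "win"),
]
# A new case resembling the lost ones in its recorded features, but with a
# decisive fact those features do not capture: the model still says "lose".
new_case = {"claim": 1.0, "evidence": 0.25}
print(predict_outcome(new_case, past))
```

Whatever decisive context the lawyer knows about this case, it is invisible to the model because it was never encoded as a feature, which is exactly the gap that human judgement has to fill.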

These examples show that the application of AI can have serious implications for societal norms and standards, influencing democracy in a negative way. Awareness of these issues is crucial for a safe and ethical implementation of artificial intelligence.

How AI can help to improve democracy

While there are many negative aspects to the use of AI, we believe AI can also benefit our society. Artificial intelligence is changing our world in many ways, and its developments can also have a positive impact on our democratic systems. By strengthening the connection between government agencies and individuals, AI can support a more direct and representative democratic system. Information about potential voters is processed by AI systems and used by politicians to tailor their positions to residents in order to gain more supporters. The Obama and Trump presidential campaigns of 2012 and 2016 are prime examples of candidates using these systems effectively. By using AI in such decisions, it is possible to take into account the feelings and preferences of nearly every single voter, as AI algorithms create a direct link between voters and government officials. AI bridges the gap between representatives and citizens, and individuals can have their perspectives taken into account, improving the election-based process.

“The tools that are used to mislead and misinform could equally be re-purposed to support democracy”

Vyacheslav Polonski

With AI, we may be able to make better decisions on political and economic issues. The use of machine learning, data science and predictive analytics in politics could empower policymakers to take an evidence-based approach, with AI providing an accurate picture of what a nation needs and how problems might be understood. For example, a potential economic slowdown could be averted by using predictive AI analysis to discover vulnerabilities early. It seems within the realm of possibility to accurately predict and limit the negative effects of economic cycles through fully utilised AI systems. By enabling the administration to be more responsive to citizens and to deliver public services in the best possible way, data-driven policymaking would make the democratic framework work better. Politicians and government officials would no longer have to rely solely on personal experience, unreasonable instincts, confused beliefs or uncertain inclinations and prejudices; they would be able to draw on huge databases processed by AI algorithms.
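A minimal version of such an early-warning signal, assuming a quarterly economic indicator series, could simply flag quarters where the value falls well below its recent moving average. Real systems combine many indicators and far richer models; the series, window and margin below are invented for illustration.

```python
# Minimal early-warning sketch: flag quarters where an economic indicator
# drops more than `margin` below its trailing moving average.
# The series, window, and margin are hypothetical.

def warning_quarters(series, window=3, margin=0.05):
    """Return indices of quarters notably below the recent trend."""
    flags = []
    for i in range(window, len(series)):
        trailing_avg = sum(series[i - window:i]) / window
        if series[i] < trailing_avg * (1 - margin):
            flags.append(i)
    return flags

gdp_index = [100, 101, 102, 103, 104, 97, 98]  # hypothetical quarterly values
print(warning_quarters(gdp_index))  # flags the sharp dip at index 5
```

Even this crude rule singles out the abrupt dip while ignoring ordinary fluctuation, which is the basic idea policymakers would scale up with many indicators and proper statistical models.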

But isn’t improving democracy and destroying it almost the same thing? Our current democracy will change, with or without the multitude of emerging technologies already in the works to rewrite the rules of parliamentary thinking. Words like AI democracy, autodemocracy or autogovernance will be in circulation soon enough. But the transition will not be easy. The strongest resistance will gather around questions like “which is better?”, where the real question is “better for whom?” Will it be better for the poor or the rich, better for families, better for businesses, or better for poor and rich equally? The all-important question, however, is “how will we know?” AI offers the potential to automate the entire government decision-making process and democracy itself. It will most likely be a long and gradual process in which each stage must be extensively tested and improved. However, the risk of misuse of AI systems must always be kept in mind and weighed against the potential benefits.

We need to take action

The use of artificial intelligence calls for governance and regulation. AI will certainly change our lives by improving healthcare, increasing farming efficiency, contributing to climate change adaptation, increasing security, and much more. At the same time, the use of AI comes with a number of risks, such as discrimination, privacy intrusion, use for criminal purposes and biased decision making.

Right now, the technology is ahead of our laws. Take companies such as Facebook and Google: together they have over 6 billion user accounts, so anything posted on one of these platforms has the potential to reach a great many people across the world. Should these platforms be held accountable for disinformation and manipulation spread by their users? In the US, the answer is ‘no’: a digital provider is merely a platform and does not have to take any notice of the messages it transmits, however hateful, inflammatory, violent, manipulative or discriminatory they may be. The European Union (EU), by contrast, has started developing regulations for these platforms. The General Data Protection Regulation (GDPR), which took effect in 2018, made platforms responsible for the way they process personal data and indirectly forces them to take privacy into consideration while upholding freedom of speech. In this way, the EU used lawmaking to confront digital platforms with their responsibilities, in this case protecting their users. This is an example of how the regulation of AI can benefit society and, in this way, democracy.

However, several EU countries recently signed a position paper urging the European Commission to rely only on soft regulation when it comes to AI. The paper states that “soft law can allow us to learn from the technology and identify potential challenges associated with it, taking into account the fact that we are dealing with a fast-evolving technology”. As Andrew Murray puts it:

“Governments do not want to kill the goose that lays the golden eggs”

Andrew Murray

AI is an amazing tool that, as we said before, can improve lives in many ways. We must, however, be aware of the dangers that come with its use. At this point, we believe that awareness alone is not enough. We have shown that AI is able to negatively influence autonomous thinking, equal treatment of citizens and decision making, all of which are aspects of our democracy. Governance of AI is needed to regulate its influence on democracy. We need global organisations for AI, as we already have for health care and the economy. Creating global norms and standards will protect democracy and society from future dangers of AI.
