The term literacy originally referred to the ability to read and communicate using written language. Over the past few decades, it has been used with increasing flexibility to encompass a variety of skills and knowledge across many disciplines, from scientific literacy to digital literacy to Artificial Intelligence (AI) literacy. This article focuses on AI literacy, which we define as the set of abilities that enable individuals to evaluate AI technologies critically; collaborate and communicate effectively with AI; and apply AI as a tool in the workplace, at home, and online.
Why should you care about AI literacy? AI is becoming increasingly prevalent in a wide range of areas, from healthcare and finance to education and beyond. It is used by large companies and governments to inform decisions, adopt economic policies, and maintain national security. Despite this prevalence, public literacy about AI remains poor worldwide. Results from a national survey indicate that a large majority (84%) of Americans are AI illiterate, unable to correctly answer questions such as “Can artificial intelligence write its own programs?” and “Is there AI in your TV remote control?”. This figure is in line with surveys conducted in other parts of the world: only 9% of the British public said they had heard of the term machine learning, and fewer than half of EU residents had heard, read, or seen anything about AI in the previous year. The public generally underestimates the reach of AI in everyday applications and decision-making, mistakenly believing that services like Google Search, Netflix, and Amazon’s recommendation systems do not employ AI technology.
We believe that inadequate AI literacy is a major threat to democracy, for three main reasons. First, it increases disparities among particularly vulnerable populations. Second, it makes societies susceptible to manipulation and abuse by governments and large corporations. Finally, it fuels science-fiction narratives that divert attention away from the real-world dangers of AI.
The rapid development of AI has outstripped our ability to understand and regulate its use, and this gap in AI literacy is a serious threat to democracy. – Timnit Gebru, Research Scientist at Google AI
When it comes to access to digital technology and AI literacy, vulnerable groups, such as low-income communities, individuals with disabilities, and older adults, often find themselves at a disadvantage. This disadvantage frequently stems from cost barriers, including the price of computers, software, and training, as well as from a lack of digital infrastructure in local neighborhoods. The shortage of digital literacy training and education deepens these disparities, as affected individuals are unable to acquire the skills required to use digital technology effectively.
These disparities have far-reaching repercussions. For instance, a lack of AI literacy may make it difficult for people to access essential information and services, such as job opportunities and healthcare resources. It may also restrict their capacity to participate in online communication and commerce. And because many industries now demand digital skills, people without AI literacy risk falling behind in a rapidly evolving job market. This perpetuates a cycle of poverty, as those who lack digital literacy cannot access the tools and opportunities needed to improve their lives.
Additionally, the lack of AI literacy among vulnerable populations can reinforce discriminatory practices and entrench preexisting biases. Because AI algorithms can only be as objective as the data they are trained on, biased data will produce biased algorithms. A lack of understanding of how these systems operate also leaves them open to misuse and exploitation. For instance, discrimination in hiring and loan approval caused by biased algorithms can have a disastrous effect on disadvantaged groups, particularly communities of color.
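The mechanism is easy to see in miniature. The sketch below uses entirely hypothetical data and a toy majority-vote "learner" (not any real lending model) to show how a system trained on biased historical decisions faithfully reproduces that bias:

```python
# Toy illustration: a model trained on biased historical decisions
# reproduces the bias. All data here is invented for demonstration.
from collections import Counter, defaultdict

# Historical loan decisions: group "A" applicants were approved,
# while equally qualified group "B" applicants were denied.
training_data = [
    ("A", "qualified", "approve"),
    ("A", "qualified", "approve"),
    ("A", "unqualified", "approve"),
    ("B", "qualified", "deny"),
    ("B", "qualified", "deny"),
    ("B", "unqualified", "deny"),
]

def train(rows):
    """'Learn' the majority decision for each (group, qualification) pair."""
    votes = defaultdict(Counter)
    for group, qualification, decision in rows:
        votes[(group, qualification)][decision] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

model = train(training_data)

# Two equally qualified applicants receive different outcomes,
# purely because the historical data treated their groups differently.
print(model[("A", "qualified")])  # approve
print(model[("B", "qualified")])  # deny
```

The "algorithm" here is trivially simple, yet the output is discriminatory; no step of the code mentions prejudice, only the data does. Real systems are far more complex, but the principle is the same.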
However, AI can also reduce existing inequalities and open fresh opportunities for individuals who cannot access traditional forms of education and employment. For example, personalized online learning powered by AI can help close the gap between students from different socioeconomic backgrounds, and AI-enabled labor market innovations have the potential to create new opportunities for people with varied skills and experiences, including those who have historically been excluded from the workforce.
Power abuse by governments and big corporations
With governments and big corporations increasingly harnessing AI’s power to carry out their objectives, AI illiteracy leaves societies open to power abuse and exploitation. One concerning issue is the potential to use the technology to surveil, manipulate, and censor citizens or customers. Without a literate population that understands how AI works, it is difficult to recognize when and where it is being used in a way that is unethical or invasive.
There are numerous examples of AI being used to undermine democracy. During the 2016 elections in the United States, AI was utilized to manipulate public opinion and to spread misinformation and propaganda. Similarly, Russia and other political regimes have embraced AI to censor the population and suppress content critical of the government. In China, a comprehensive surveillance system based largely on AI has been created to limit freedom of expression. Moreover, in the Xinjiang region, facial recognition and DNA analysis have been employed to classify individuals by ethnicity in order to identify Uyghur Muslims, leading to the tracking and detainment of over one million Uyghurs. These examples highlight the need for adequate literacy to better protect against abuses by political regimes. One could argue that AI literacy does not eliminate all forms of government control, and that it is not sufficient on its own, since citizens also need other kinds of literacy, such as in ethics, privacy, and power dynamics. Nevertheless, AI literacy can do much to promote personal safety, guiding how people behave and interact with these technologies to reduce risks such as individual targeting.
AI illiteracy within the justice system further poses a substantial risk to democracy, as it amplifies the capacity for power abuse by making it impossible to hold businesses and governments responsible for unethical uses of AI. Algorithms and software are often opaque and complex, while policymakers and legal professionals are often not literate in AI. The Cambridge Analytica scandal illustrates this lack of oversight: Congress displayed a profound lack of understanding of how Facebook works, confusing the real issue, data privacy, with algorithms, as evidenced by off-target questions such as “Why am I suddenly seeing chocolate ads all over Facebook?”
In summary, we argue that without AI literacy, the capacity to identify and respond to unethical or deceptive uses of AI by governments and corporations, and to hold them responsible for such activities, is severely compromised.
Science-fiction narratives – a distraction from the real-world risks of AI
Lastly, we argue that poor AI literacy is a threat to democracy because it encourages the proliferation of sensationalized, science-fiction narratives and misplaces concerns about the ethics, possibilities, and real-world risks of AI. Specifically, it encourages futuristic and dystopian discourses that divert attention from pressing current issues: the perpetuation of historical and social bias and injustice in algorithms, the erosion of privacy, and the misuse of AI by corporate and governmental entities at the expense of human welfare.
When the term artificial intelligence is discussed, many individuals instinctively form mental images of humanoid robots usurping humanity. AI is an unfamiliar concept to most people, so its representation in science-fiction media, such as movies and television shows, shapes how it is perceived. In reality, AI is far less spectacular, and far more pervasive, than what is depicted in popular culture. Studies indicate that for most people, AI blurs the distinction between science and science fiction. Not only is AI a broad field, encompassing natural language processing, computer vision, and machine learning; it is also far less visible than people assume. One study on American attitudes towards AI found that the public greatly underestimates its prevalence, disregarding its use in search engines, social networks, news sites, targeted advertising, and recommendation systems. Without awareness that these interactions with AI are occurring, people are less likely to recognize that their activity and behavior are being used for manipulation and exploitation.
Another problem is that poor AI literacy reduces the opportunity for meaningful discussion and action around AI’s implications. Our stance is that the conversation around AI is largely misguided, with too much focus on philosophical issues surrounding super-intelligent, sentient, or conscious machines rather than on the real-world risks posed by software and algorithms. While philosophical discourse on humanoid machines is certainly valuable, we believe it should not take up the bulk of the conversation. For instance, science-fiction narratives may encourage the public to invest time debating topics such as robot rights while ignoring the rights of vulnerable human populations, such as women and minorities. This tendency to prioritize the hypothetical rights of robots over the lived experiences of real people reflects the narrow, Western, white male perspective of many scientists. By lumping humans together with robots, these conversations dehumanize those who are already marginalized and fail to address the tangible harms that existing AI systems are causing. Moreover, the proliferation of science-fictional narratives about AI with agency and a will of its own reduces human accountability for these systems, obscuring the fact that intelligent systems are created and driven by human interests and are enmeshed within societal power structures that humans likewise established.
Some may argue that science fiction contributes to AI literacy by encouraging conversations about these issues. However, while such conversations may be more likely to happen, we counter that they are less meaningful and do not thoroughly explore the issues that matter.
Poor AI literacy threatens democracy and poses a significant problem for modern society. It worsens existing inequities, leaves people and societies open to manipulation and abuse by governments and major businesses, and encourages science-fiction narratives that deflect attention from the real-world dangers of AI. We must act today to address this problem and ensure that everyone has the knowledge and skills necessary to understand and use AI ethically and responsibly. Governments, businesses, and educational institutions must collaborate to provide training and education in AI literacy and to build a more just and inclusive digital future. Only then can we guarantee that everyone enjoys the benefits of AI.