Picture a 2022 classroom: students enter a room with a digital screen on the door that keeps track of the number of people inside, the air conditioning and the schedule for the day. There is smart lighting control, a smart blackboard and cameras to catch students who might be cheating. The class starts the day by watching a YouTube video about irrigation in ancient Greece, which appeared in the teacher's recommended section on YouTube. The teacher finds that some students aren't paying attention because they are too busy checking their social media feeds. When the teacher calls them out, one notes that there has just been a breaking news story about Chinese interference in the elections which he wants to read. The teacher agrees that such news is fascinating, but has a set lecture schedule to uphold. She continues to talk about the story she found on Greek irrigation systems.
This short scene describes the state we're in right now. We have given AI an enormous role in large parts of society. Yet there is very little education on how AI actually works, why we use it, or how we could use it to our advantage in solving the problems we face. AI is everywhere, so an education that matches that fact is essential if we want to keep up with today's challenges as a society.
One of those challenges might be transnational threats. The next world war won't be fought with guns and nukes, but with keyboards and computers. As of today, the Russian Federation has about 120,000 troops stationed at the border with Ukraine. It is speculated that an invasion of the sovereign nation could take place as early as mid-February. While public debate seems heavily focused on troop movements, another battlefield remains largely overlooked: the online battlefield. This is nothing new, as the same thing happened during the 2014 invasion of Crimea, when alleged Russian hackers attacked Ukrainian online infrastructure. Ukrainian government, news media and social media websites were taken down, and the phones of government officials were disabled or compromised, which further disrupted communications.
The growing number of attacks in the previous decade, which culminated in the 2017 ransomware attacks in which suspected North Korean hackers targeted over 150,000 computers worldwide at a total cost of $10B, led media to criticize the EU for having too little preemptive capability to combat cybercrime. This subsequently made the EU adopt a six-point strategy to increase European cybersecurity, which included a proposal to create a network of European cybersecurity centers for industrial, technology and research institutes. These centers provide high-level knowledge for European businesses and governments.
Where our European government fails (as it often has), however, is in education. The reason for this is twofold. First, an incredibly large part of cybersecurity defense comes down to individual users. Large-scale ransomware attacks, phishing and misinformation campaigns all target individual users en masse, in the hope that some of the malware gets through or some of the misinformation gets believed. For example, criminals use phishing emails to trick people into giving hackers access to their phones. Key government officials or business leaders could be targeted this way and give away confidential information. The rise of deepfake technology provides yet another example: deepfakes in misinformation campaigns can easily be used to mislead a population. As the technology behind these attacks only gets more convincing, the need for the population to recognize these threats and defend themselves against them increases. Having a coordinated response is incredibly important, but an educated population is a country's greatest defense.
Second, as cyberattacks keep increasing, so does the strain on our network of cybersecurity centers. This means more people are needed in cybersecurity positions. Providing European youth with an education can be a means to that goal. In 2021, code.org surveyed U.S. high schools to see what percentage offers computer-science-related subjects. In 2018 this stood at only 51%, still just half of all schools. Increasing the number of students with access to computer science also increases the likelihood that they continue their studies in that field at university, which in the long run will have a positive influence on the number of cybersecurity experts. Education of the general public on AI can thus contribute to a better defense against threats from other countries.
Not only our national defenses, but also our personal protection could use some improvement. Even so, one can only protect oneself if one knows what to protect oneself from. The title of the book "Je hebt wél iets te verbergen" (You Do Have Something to Hide), by research journalists Maurits Martijn and Dimitri Tokmetzis, already implies that the general public feels no special need to be protective of their data. Meanwhile, Martijn and Tokmetzis argue that privacy is our most valuable asset in modern times and that we let our boundaries be crossed far too easily. Apart from seeing privacy as essential to being human (having a personality or personal relationships doesn't exist without a 'personal'), they state that you might not yet be aware of what to hide. Things that are okay here and now could be interpreted very differently elsewhere or in the future. For example, being homosexual is reasonably accepted in the Netherlands, but in Russia a picture of you at a gay pride parade might be a reason for a face recognition system to pick you out at the border. Similarly, you would happily post a picture of yourself eating a hamburger in 2022. But do you know what happens if it is still easily traceable in 2030, when eating meat might be seen as a reason to fire you over reputational damage? If you had been more aware of how artificial intelligence systems work, you might not have simply accepted all the terms and conditions of the social media platforms you feed your data to.
The bottom line: we don't know what is done with our data, where, or how. And that makes us extremely vulnerable. With better education on AI, we would be in a better position to assess risky consequences. Cookie banners and privacy statements might still be too long and fuzzy to read, but you would at least know that if you don't turn tracking cookies off, you won't be protected. You might even demonstrate for a law that forbids them from being turned on in the first place! Knowledge of how AI is used gives us the opportunity to stand up for our rights and decide whether we agree with that usage or not. In a democratic society, it enables us to demonstrate when we disagree with something. Even if the exact workings of artificial intelligence remain a mystery due to impermeable Big Tech policies, you would at least know what you don't want it to do.
The past couple of years have shown that awareness of data usage by big companies or the government leads the general public to ask more actively for protection and regulation against misuse. When the childcare allowance scandal came to light in the Netherlands, it was discovered that thousands of people had been wrongly labeled as fraudsters. Apart from internal mistakes in handling complaints, an automated system was responsible for many of these mislabeled cases. It caused many people to question whether even our political leaders are educated well enough on artificial intelligence when they fail to protect us against its deficits.
Even when the government or another party has a simple automation process in mind, it may turn out that your data is interpreted with a huge bias or is not well protected. However, insight into these flaws, in other words better AI knowledge, has been shown to give us the tools to implement new regulations that fit this ever-growing influence on our lives. When we learned more about how Big Tech companies were collecting and processing our data without restraint, Europe enforced new rules in the GDPR. As a result, Google was fined for not providing its users with enough information on how their data was used, and for not properly obtaining permission to process it. Let us all be educated on artificial intelligence, so that we see the need to protect ourselves and have the power to intervene.
Efficient mutual field knowledge
As AI grows in capabilities, so does its reach into new fields. A broader understanding of these capabilities can benefit both AI and the fields it touches. Broad uptake, however, still seems to be lacking. A 2018 research paper that surveyed 538,000 US businesses revealed that the share of businesses employing some kind of AI was only 8.9%. It also reported that the share of large companies (those with more than 250 employees) using AI is much higher: 24.8%. Obviously this paints a somewhat skewed picture, as smaller businesses like the local grocery, the Crusty Croissant or Joe's Pizza might not have as strong a reason to implement AI. However, the rising capabilities of AI might change that. The local grocery might benefit from an algorithm that predicts its customers' buying patterns, or Joe's Pizza might want to use machine learning for optimal dough creation. As of now, it is hard for these smaller businesses to recognize the place AI could take in their business. Larger firms have access to consultancy services or even in-house experts to spot these opportunities, something smaller firms often lack.
A populace that is well educated on the capabilities and limits of AI can help in its uptake. A 2017 study by the McKinsey Global Institute found that the more familiar companies become with AI, the more opportunities they see for it in their business. A baseline education on AI for the entire workforce can help all departments of larger firms, and especially smaller firms, determine whether there is room for AI in their business.
Educating people on AI can also benefit the field of AI itself. AI is a broad field that takes its inspiration from many other subjects. Think of the invention of the neural network, for example, which takes its inspiration from the human brain. Another example is ant colony optimization, a family of algorithms inspired by the ant world. These algorithms use the pheromone-based way in which ants communicate as inspiration for solving computational problems that can be reduced to finding a good path in a graph. Had the scientists who created these algorithms only taken inspiration from their own field, these techniques would never have been invented.
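To make the ant analogy concrete, here is a minimal, illustrative sketch of ant colony optimization for a shortest-path problem. The function name, graph and parameter values are our own invention for this example, not taken from any particular library; real ACO implementations are considerably more elaborate.

```python
import random

def ant_colony_shortest_path(graph, start, end, n_ants=20, n_iters=50,
                             evaporation=0.5, seed=0):
    """Toy ant colony optimization for a shortest path in a weighted graph.

    graph: dict mapping node -> {neighbour: edge_length}.
    Returns the best (path, length) found.
    """
    rng = random.Random(seed)
    # Pheromone level on every directed edge, initially uniform.
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")

    for _ in range(n_iters):
        completed = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != end:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                # Prefer edges with more pheromone and shorter length.
                weights = [pheromone[(node, v)] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                completed.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # Evaporate old pheromone, then let each ant deposit an amount
        # inversely proportional to its path length, so shorter paths
        # get reinforced more and attract future ants.
        for edge in pheromone:
            pheromone[edge] *= (1 - evaporation)
        for path, length in completed:
            for a, b in zip(path, path[1:]):
                pheromone[(a, b)] += 1.0 / length
    return best_path, best_len
```

On a small graph such as `{"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}`, the colony quickly converges on the cheapest route from "A" to "D": the pheromone trail, not any single ant, encodes the solution, which is exactly the biological insight the field borrowed.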
Furthermore, letting fields outside of AI "in", by educating them, may also help steer the direction AI is taking. Journalists report that AI is influencing the way they write articles. They are forced to focus more on generating clicks on individual articles, as those generate the most advertising revenue and thus get promoted on platforms like Facebook or Instagram. Reporting on news that people are interested in is incentivized more than reporting on news that is newsworthy. The fact is, the people who design algorithms often don't know all the consequences their designs can have in other fields. Bringing in people from a wider range of backgrounds can help influence the directions these algorithms take. In this example, journalists could contribute more actively to the discussion and ultimately the design of these algorithms.
Wrong solution focus?
Now, you might read this article and think: I have had enough of all this artificial intelligence propaganda; for my job, my two hands and my own brain are sufficient, and AI gets far too much attention. Well, there is not much for us to disagree on there. Some problems might not require technology such as AI to solve. The hypothetical face recognition system that labels possibly homosexual people, mentioned above, is of course not problematic because of the technology itself, but because of the way it is used. We might be better off trying to change the underlying societal issue through negotiation, education on sexuality or economic means of pressure, instead of focusing on the AI system that is merely the means in this whole problem. The same goes for using AI for a cause that is meant to be good. AI for Social Good (AI4SG) is an upcoming trend of approaching social, environmental and public health challenges with artificial intelligence solutions. Complete ecosystems are being modeled to see which species will be endangered in the future, or what the most efficient way is to preserve forests. This is not a plea against artificial intelligence; it shows that AI can be used in many good ways, some of which are even called game-changers for our future society. But again the question can be asked whether this is the right search space in which to look for our solutions. Climate change, poverty, pandemics: technological advancements might help, but will they really fix the problem's origin? We are probably better off with better regulations on pollution, anti-corruption negotiations, or a quick vaccine. An artificial intelligence system might even compound our problems with biases and privacy issues. Again, the answer to all our problems might lie in a different corner. AI is not the ultimate solution to all our headaches, and including it in our daily learning might only shift our focus further toward thinking it is.
A better implementation
Nonetheless, a better education in AI for everyone might still be helpful. Because when you know how AI works, you know what it can do. More importantly, you know what it cannot do. A better understanding of its possibilities and opportunities puts you in a better position to estimate where it will be beneficial. It also comes with a richer experience of AI usage, making it easier to see its downfalls. Research into national AI policies shows that a large number of countries are thinking about how to better educate the public in AI. Their motivations are mainly economic, as they want to prepare the workforce for AI. But they also see the importance of teaching risk and opportunity assessment. Lessons on AI should come with critical lessons on its ethics and pitfalls, so that when we choose to use it, it is used carefully.
A defense against international threats, better personal protection, increased opportunities for AI and other fields, and the knowledge to make balanced decisions on its implementation all provide arguments why the general public needs an education in AI now rather than later. Jim Stolze already took a first step by creating a free national course on AI, which to him is as important as getting a swimming diploma.
We are rooting for a change to the general education system. Math, history, English and geography have not become redundant, but they should be complemented with basic knowledge of AI. When all these other disciplines become intertwined with intelligent systems, it is time that AI is no longer seen as a separate one, but as the basis of them all. Or better still, the top layer, which you can still take off, adjust or throw out of the system. A layer that we are all familiar with. That way, we can handle all our future challenges and threats adequately: with, without, or maybe even against AI.
And if this hoped-for change takes some more time, then maybe start with yourself today and increase your own knowledge of AI. You might even decline the next cookie you're offered.
Colette Wibaut & Hein Kolk