As your navigation app leads you through the streets of an unknown city you’re visiting for an interview, you check your social media. Your aunt posted a picture of her newborn. You think the baby is quite ugly, yet you comment “so cute!” below the post with a heart to express your love for the newborn and the mom. At the same time, your friend texts you about some weirdly specific advertisement they saw online; you text them back and for a moment doubt whether the word “necessary” is written with two “c”s and one “s”, or the other way around. The next instant you realize you don’t really care, because the autocorrect on your smartphone has already changed it to the correct spelling. You get off the tram and glance at your smartwatch: 1,000 steps left until you reach your daily goal. Even though you risk being late, you decide to walk the last bit to your interview.
Digitization and Society 5.0
This short example captures how deeply digital technology is already intertwined with our daily lives, often without us realizing it. Facilitated by such technologies, our society is constantly focused on self-improvement: we must become healthier with diet apps, fitter with apps that track our steps, and at the same time make our lives easier in general with apps that let us control other “smart” devices from a distance. Continuously adapting to make life more comfortable is not a new concept. We have witnessed societal shifts before, such as the shift from the hunter-gatherer society to the agricultural society. We then made our lives easier through the application of machines in the industrial society, and now this role is fulfilled by technology in the “information society”, also known as Society 4.0. The digitization of our society, as roughly depicted above, has accelerated since the rise of Artificial Intelligence (AI). In a broad sense, AI algorithms are used to assess what is relevant to our lives (e.g., movies you have (dis)liked before), make predictions (e.g., movies you might like based on those movies and other data), and decide what to do (e.g., show recommended movies). The rapid development within the AI field makes it possible to create endless applications that can serve our daily lives, consequently altering the role that our minds have traditionally played in shaping, ordering, and assessing our choices and actions. As we delegate more and more aspects of our lives to technology, the digitization of society has become a fact we can no longer ignore.
For the Japanese government, the next step in this digitization process is the rise of Society 5.0, a society it has been promoting since 2015.
“Society 5.0 is a human-centered society that balances economic advancement with the resolution of social problems by a system that highly integrates cyberspace and physical space.”
Cabinet Office, Government of Japan (2015)
In this society, big data and AI-governed technology are used to fulfill the needs of every individual. Its goal is to create equal opportunities by providing the right environment to stimulate each person’s potential. Citizens will be deeply entwined with technology in the form of embodied AI, ambient intelligence, intelligent cyber-physical actors, augmented reality (AR), virtual reality (VR), and much more. Artificial agents will gain increasing autonomy, proactively gathering data from their environment in order to make decisions and provide beneficial services for their fellow human citizens. For example, healthcare could be provided on-site by robots, making it easier for people with health issues to live on their own. Artificial agents may help people who suffer from loneliness by stimulating social interaction. Global issues, too, could be tackled thanks to the extraordinary problem-solving capabilities of AI systems.
Why should this spark the interest of people who are not Japanese citizens? Firstly, although not as far-reaching as what Society 5.0 proposes, digitization in western societies is already increasing rapidly. As it continues, it is very likely that other western countries will follow the principles of Society 5.0. Whereas Society 5.0 is mostly considered Japan’s growth strategy, its ultimate goals correspond with the sustainable development goals adopted by most western countries. Many different countries face similar challenges, such as an aging population and aging infrastructure. Viable solutions to global issues cannot be found while sectors remain independent; they require a closely integrated system in which data collection is not limited to Japanese participants of Society 5.0: it requires other countries to join the process.
The paradox
The most important and reassuring claim made by the Japanese government is that the proposed digitized society will be human-centered. In the first place, this entails the goal of maintaining humans’ position in society, even though the discussed technologies will be a major part of it as well. In the second place, but just as important, a digitized society like Society 5.0 aims to fulfill the needs of every individual. It harnesses the potential of the human-technology relationship to increase quality of life, regardless of people’s region, age, sex, or religion. Both goals, central to the digitization of society, are believed to put humans first. They are therefore painted by the broader public as human-centered, which, of course, sounds very attractive to us as human beings. However, in pursuing these goals, one encounters a paradox with regard to the meaning of human-centeredness. That is, AI-governed digitization in society might not be as human-centered as it is believed to be. On the contrary, especially at the speed at which it is happening now, it poses many problems for the human individual. In this article, we highlight these problems, aiming to nuance the perfect picture painted by the mainstream by pointing out the challenges that societal digitization poses for the human individual.
Human Privacy
A human value that is already widely discussed among the greater public in the context of digitization is privacy. Digital footprints are left all over the internet, and their number has only grown, especially since the start of the COVID-19 pandemic. The digital tools used for work, education, and recreation have become a basic utility, and people who are resistant or unable to use these tools are left with a socioeconomic disadvantage. For instance, the increasing use of online monitoring technologies that carry many privacy risks may leave many students unwilling to participate in exams. Not only in an educational setting but in the workplace too, the absence of useful digital tools could lead to a decrease in work performance compared to colleagues who do use such tools. However, even when one manages to opt out of such digital environments, dutifully deleting all cookies on each website, the chances are that their personal data is still collected by big tech companies. Think of Facebook’s “shadow profiles”: through the activities of friends who do use Facebook, data on someone who isn’t active on the platform can still be collected.
All in all, it is clear that a lot of your personal data is being collected, mostly without your knowledge. This effect is further exacerbated by the emergence of more “smart” devices, which capture even more of our daily routines. Think of smartphone applications such as sleep trackers and fitness apps like “Strava” or “Runkeeper”, but also “smart” products in general, such as toothbrushes, washing machines, and TVs. The latter, concerning Samsung’s smart TVs specifically, caused a fuss over its 46-page privacy policy. In the policy, Samsung states that it registers where, when, how, and at what time you have your TV turned on, not to mention warning you to watch what you say, as the TVs’ cameras and microphones use face and speech recognition. As quoted in the policy:
“Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party.”
Samsung’s Smart TV Privacy Policy
Like this eavesdropping “smart” device, other “smart” products collect data in various ways and subsequently sell this information to manufacturers in order to ultimately make a profit out of you, a clueless individual who, understandably, has no desire to read a 46-page privacy policy. Complicated privacy scenarios like the ones mentioned here will only multiply, bringing forth more problems as digitization advances.
While some may rush to their smart TV or other smart devices to switch them off, most people are not too worried. On the contrary, they find it convenient to see recommendations that are specifically targeted at them. At first, this does indeed seem harmless, but an insidious actor comes into play: addiction. As we live in an “attention economy”, today’s business models exploit our psychological vulnerabilities to maximize the time we spend on a platform, at the risk of digital addiction. This problem has emerged as a significant research area due to its increasing prevalence.

Besides the risk of addiction, another problem arises. As most aspects of this digitized society are driven by personal data, that data becomes very valuable. Consequently, personal data is at high risk of malicious use by hackers and other wrongdoers. In other words, personal data is treated as a commodity, mainly because of the very limited rules that currently govern it. As we become progressively transparent through the rapid increase in data collection, the balance of power shifts with it. Citizens become disempowered, which often leads to further abuses of power by the dominant parties. These include governments (e.g., the Dutch government violating the General Data Protection Regulation on a massive scale in the childcare allowance scandal, or “toeslagenaffaire”), but also large companies. Nowadays, a handful of tech companies, such as Facebook and Google, rule the digital society. These private-sector companies maintain and update the platforms we use daily. However, when problems arise, the supplier of the technology bears minimal responsibility due to a lack of legislation. In addition, there is no clear authority that monitors or regulates all the processes in a digitized society where decisions are increasingly made not by humans but by algorithms. There is no authority that monitors unauthorized data collection, and penalties for lawbreakers are minimal.
For example, theft of goods is treated as a serious crime offline, but such crimes committed online can also affect people’s mental health and therefore need legislation. In 2007, a Dutch teenager was arrested for stealing €4000,- worth of virtual furniture in the game ‘Habbo Hotel’. Unfortunately, prosecutions like this are rare. As we shift further into the digital realm, we must be prepared for more to come.
Human autonomy
Digitization has brought a new term that could potentially jeopardize our autonomy: technology paternalism. The word paternalism derives from the Latin pater (“father”) and evokes a father telling his offspring what they should and should not do, steering them toward the desired behavior. A simple example is the beeping sound in a car when the driver does not wear their seatbelt. The irritating nature of this sound implicitly steers the driver to put on the seatbelt and continue the journey safely. Of course, this example bears no harm, and a common counterargument is that, in the end, it is still the individuals themselves who make the decision; therefore, the largest part of their autonomy is said to remain intact. A common misinterpretation here, however, is the confusion of two forms of paternalism. On the one hand, the individual can come to display the desired behavior independently. Here, the user is educated by technology with the goal of becoming autonomous (e.g., putting your seatbelt on because you know that it keeps you safe, not because of the annoying sound). On the other hand, there is a much more serious form of persuasion in which an individual can become dependent on the controlling influence, often implicitly. As we increasingly rely on technology due to the digitization of society, these techniques become prone to serious abuse, the best-known form of which is online manipulation. Especially when the individual is not aware of it, the implicit enforcement or provocation of certain behavior by third parties could endanger one’s personal autonomy. These reductions in autonomy can have an impact on a small scale (e.g., buying something you wouldn’t have bought without that little advertising nudge), but they could also have an impact on a more profound scale.
Regarding the latter, an experiment was conducted in 2014 to investigate how emotional states could be affected through social networks. By altering the number of positive and negative messages in Facebook’s “News Feed”, the study showed that the state of mind of a significant number of users was changed without their knowledge. A better-known form of digitized manipulation is the Cambridge Analytica scandal, in which a company was able to influence people’s voting behavior by targeting them with manipulative information. Big data and AI algorithms were used to create psychographic profiles and were therefore key to this project. One can imagine how our continued transition into the digital realm could serve as the perfect opportunity to further exploit such manipulative techniques.
Human dignity
As we move further away from the physical world, it is worth re-evaluating our dignity as human beings. How should we define our role in the world if we no longer explore and shape reality ourselves, but instead employ AI algorithms to supplement our perceptions and thoughts? Of course, it is not all gloomy: AI algorithms can enhance the human mind positively, from detecting malignant cancer cells that the best oncologists fail to detect to educational applications of social robots. But in many areas, employing AI systems can make humans feel that they no longer play the primary role in managing a situation. Think of a job seeker being rejected based on an AI-facilitated review, or an ex-convict who is subject to extra scrutiny because an AI system considers them high-risk in terms of recidivism. The problem with many of these applications is that the outcomes of most AI algorithms cannot be explained; that is, the algorithms are a “black box”. Moreover, for the examples above, it turned out that the AI algorithms were biased: women were wrongly rejected more often for Amazon’s job interviews, and black ex-convicts were assigned much higher risk scores than white ex-convicts (it later turned out that white re-offenders were mistakenly labeled low-risk almost twice as often as black re-offenders). With this in mind, we really must ask ourselves whether we want such algorithms to play a major part in decision-making in society.
A more fundamental issue arises when we dive further into the meaning of human dignity in times of digitization, namely dehumanization. This refers to the objectification and instrumentalization of human beings, a phenomenon that largely results from detachment from the physical world. It can be found in multiple domains. For example, the risk of dehumanization emerges when robots in healthcare are used to perform more intimate tasks (e.g., care seekers can feel objectified when fed or lifted by such robots). The underlying argument is that the empathic capacities and reciprocity of human relationships cannot be achieved by such devices, although, given the rapid developments in this area, that remains to be seen. Nevertheless, dehumanization in society is expanding. A recent news article described how a woman was sexually harassed on the VR social media platform of Meta, the parent company formerly known as Facebook. During early testing of this virtual environment, or metaverse, a woman reported being groped by a stranger. It is plausible that the digital world they were in lowered the bar for the perpetrator, as the woman was perhaps seen as just “an object” in this world. While one could argue that the act was not physical, we should keep in mind that sexual harassment does not have to be physical and can still have a serious impact on someone. As Meta claims the metaverse will be the next evolution in a long line of social technologies, we must be aware of the dangers of dehumanization and its unethical consequences.
What about the opportunities?
Of course, Society 5.0 is proposed for a reason, the claim being that further digitization can tackle more complicated problems. Through Society 5.0, Japan aims to find solutions for major global issues, such as climate change. Not only solutions, but adaptations as well, by creating better links between the many sectors and actors in society. However, even if proper solutions or adaptations could be found, most integrated AI systems still deal with a “black box”, as mentioned before. We must be aware of such problems before we allow these algorithms greater decision-making autonomy and make ourselves increasingly dependent on them. In addition, though most problems of the present time may be tackled, it is possible that we cannot fix every global issue and will therefore still be left with unsolved problems. Moreover, new forms of society often bring forth new problems. A growing problem that we can already foresee is the excessive production of electronic waste (e-waste) induced by a digitized society. In 2016, only 20% of all e-waste was documented and recycled. The rest was burned or dumped, risking leakage of toxic materials into nature and thus contradicting the claim of a “sustainable society”. And this is only one of the problems we can currently foresee, not to mention those we cannot yet grasp.
So, what’s next?
With further digitization and, ultimately, Society 5.0, where society will practically run on big data and AI, we should start to worry about primary human values such as privacy, autonomy, and dignity. Human-centeredness is promoted as the aim of a digitized society. However, it rather seems that we are losing our position as central actors in such societies, as our core values are harmed by the increased deployment of these technologies. Though solutions may be provided for today’s problems, there are still many catches. We should consider the opportunities as a luxury, but our core values as a necessity. This leads us to conclude that growing digitization, at least as presented by the mainstream, is a wolf in sheep’s clothing.
“We should consider the opportunities as a luxury, but our core values as a necessity.”
So, what can we do about it? Further digitization is inevitable, as is the rise of Society 5.0 (though perhaps in a different form in some countries). The biggest underlying problem of all is the speed at which digitization is taking place. We can see a trend in which regulation keeps lagging behind new technological developments, as mentioned earlier. By raising awareness among the greater public, we hope to shed light on the need for regulation and oversight, not only at the national level but also through international treaties. Whereas for some human values it is fairly evident what regulations need to be put in place (e.g., the right to own your personal data to ensure one’s privacy), other human values are perhaps too broad and elusive to allow for specific regulation (e.g., dehumanization). Rather, regulation and oversight must be built up step by step, and the foundation for this can only be laid once there is sufficient awareness of the matter. If we keep this in mind, we can achieve opportunities for all and minimal harm for the individual.