As a society, we have become dependent on Artificial Intelligence (AI) technologies in our everyday lives. Some view AI as the way forward, branding it as the solution that will save humanity and the Earth. AI has indeed produced some extremely beneficial developments, for example in the medical field, where it aids the detection of different kinds of cancer. However, alongside all these good things come disadvantageous developments that accompanied the rise of AI in society. Big tech companies tend to choose the path of least resistance when it comes to making profits. How consumers and employees are affected is viewed as secondary or, in the worst case, irrelevant. Using several examples, we will demonstrate that AI will not immediately and universally improve our quality of life unless our societal system changes, and we will discuss what could be done for AI to truly be as beneficial to us as some have claimed.
Currently Used AI Algorithms Pose Health Risks
Quantified Self
Currently used AI algorithms can pose a serious health threat in multiple ways. One of these is the popularization of the “quantified self”. The aim of the quantified self is self-knowledge through self-tracking. Though you might not realize it, this already features prominently in our lives. You might wonder whether this is actually such a bad thing. Would it not be beneficial for your health to have a so-called “sixth sense” that objectively gives you information about your body in the form of data? Take the example of a smartwatch. For someone with cardiovascular disease, having a watch that measures your heart rate is medically recommended. For those among us who use it primarily to measure our body’s ‘output’, it might do more harm than good: with the quantification of the body, the need to improve those numbers inescapably follows. Losing trust in your own judgement and focusing only on the data your body produces can have serious consequences. The danger lies in losing grip on reality and the ability to think intuitively about your body.
It is true that for some, smart appliances can help achieve certain goals, for instance by using tracking apps to monitor progress in losing weight or to see how your running has improved over time. But there is also a different side to this. Take the Lumen, a smart device you breathe into that consequently ‘measures’ your metabolism. If the device shows a 1 or 2, you are doing well; if it shows a 4 or 5, not so much. This device makes good use of the gamification principle: inserting gameplay elements into non-gaming settings. This makes it easy to become addicted to smart appliances. Though experts have contested the accuracy of this device, after the umpteenth time of seeing a 4, wouldn’t you change your behavior to try and get a much-desired 1 or 2?
Not only can these smart devices be a danger to your health, they are also a goldmine for Big Data companies. These smart devices “harvest” heaps of data from their users, which is subsequently used to create an extensive profile of each user. This enables companies to target that person with precisely tailored content and advertising. Private information that you would never consider sharing becomes accessible to Big Data companies. The personal and societal consequences of this will be elaborated on later in this article.
Social Media
Dependence on smart appliances is not the only health threat we face today; another concerns social media. On the one hand, it allows us to connect with friends, which is especially beneficial in times of Covid-19. It is a way to gain information about world events and to socialize with people all over the globe. But hand in hand with these benefits come the risks of social media use, and we are witnessing a rise in social media addiction among adolescents and young adults. Over 72% of adolescents use Instagram, 69% use Snapchat, 51% use Facebook, and 32% use Twitter. Estimates of the prevalence of social media addiction vary widely across studies (from 0% to 82%), averaging around 24% of social media users worldwide. The same meta-analysis shows that the results skew towards younger users. A staggering 45% of adolescents admitted they are online almost constantly throughout the day. On top of peer pressure to use such platforms, children and young adults are more vulnerable because actions that require inhibition are harder for them: their prefrontal cortex is not yet fully developed. Social media addiction has been associated with a number of serious symptoms, such as low self-esteem and depressive symptoms. Using the platforms for more than 5 hours a day is also associated with an increased risk of suicide.
But what makes social media so addictive?
Well, mostly the fact that AI can be used to continuously present the user with content that is most likely to keep them engaged on the platform. Each time the recommendation algorithm of a big social media company like TikTok or YouTube is leaked, it becomes apparent that the focus lies solely on increasing engagement time because this, in turn, increases profit.
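To make the mechanism concrete, here is a minimal, hypothetical sketch of the logic behind such engagement-driven recommendation. Nothing here is taken from any leaked algorithm; the categories, numbers, and simulated user are invented. The point is that the objective is watch time and nothing else:

```python
import random
from collections import defaultdict

# Illustrative engagement-maximizing recommender (epsilon-greedy bandit).
# The only objective is expected watch time; nothing in this loop ever
# asks whether the recommended content is healthy for the user.

CATEGORIES = ["fitness", "dieting", "extreme_dieting", "gossip", "news"]
EPSILON = 0.1  # small chance to explore a random category

total_watch_time = defaultdict(float)
impressions = defaultdict(int)

def recommend():
    """Serve the category with the highest average watch time so far."""
    if random.random() < EPSILON or not impressions:
        return random.choice(CATEGORIES)
    return max(impressions, key=lambda c: total_watch_time[c] / impressions[c])

def record_feedback(category, seconds_watched):
    """Longer watch time means a stronger bias towards that category."""
    total_watch_time[category] += seconds_watched
    impressions[category] += 1

# Simulated user who lingers slightly longer on ever more extreme content.
for _ in range(1000):
    category = recommend()
    base = {"dieting": 30, "extreme_dieting": 40}.get(category, 10)
    record_feedback(category, base + random.uniform(-5, 5))

print(max(impressions, key=impressions.get))  # the most-served category
```

Even in this toy loop, the unhealthiest category wins simply because it holds attention a few seconds longer; the system drifts towards it without ever being told to.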
Through its own research, Meta (formerly Facebook) found that the use of Instagram correlates with an increase in suicidal ideation and eating disorders among teenagers. It was shown that these recommendation algorithms not only cause addictive scrolling loops that lead to harmful upward social comparisons, but also that the recommended content becomes more unhealthy and extreme as engagement continues. You would expect that a company that discovers such a detrimental effect of its own product on its consumers would adjust that product. However, Meta did not rework its algorithms. No, even after the research files were leaked last year, no significant change has been reported. Ironically, Meta is instead developing an Instagram version for children under 13 to expand its user base. So, instead of creating a safer environment for its existing, already harmed user base, Meta aims at expanding the market to a vulnerable audience that is promising in terms of profits.
Individual Privacy Is Repeatedly Disregarded
Another problem we face today is ignorance and/or carelessness regarding online privacy among the population. If I were to ask anyone without a background in AI or computer science what they know about online privacy and data algorithms, I guarantee that 9 out of 10 faces would turn into a blank expression. A large part of the population simply does not understand how much data is gathered or the capabilities of the technology that processes it. Therefore, they are less concerned about online privacy and sharing personal data than they should be.
Let us use the example of a smart electrical meter. Such a meter collects energy data. Using an AI algorithm, this data can reveal what kind of electrical appliances a house contains, down to the precise model of each appliance. It can show how many there are and how often and at what time these appliances are used. All this data can be combined with other information available online, such as lifestyle and income, to infer sensitive personal details. With the rapid advancement of AI in recent years, people do not realize just how easy it is to extract sensitive personal data when the technology is applied to large, overlapping data sets. This same data can then be used to train new and more accurate AI models, making them even more powerful.
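To illustrate how innocuous-looking meter readings can betray what happens inside a home, consider the following deliberately simplified sketch. Real systems in this field (known as non-intrusive load monitoring) use far richer features, and all appliance 'signatures' below are made up for illustration, but even a basic classifier demonstrates the principle:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented power signatures (mean draw in watts, fluctuation) per appliance.
PROFILES = {"kettle": (2000, 50), "fridge": (150, 20),
            "tv": (100, 10), "washing_machine": (500, 300)}

def sample_window(appliance, seconds=60):
    """Synthesize one minute of 1 Hz meter readings for an appliance."""
    mean, std = PROFILES[appliance]
    return rng.normal(mean, std, seconds)

def features(window):
    """Crude summary statistics; real systems use much richer features."""
    return [window.mean(), window.std(), window.max() - window.min()]

# Build a labeled training set from the synthetic signatures.
X, y = [], []
for appliance in PROFILES:
    for _ in range(200):
        X.append(features(sample_window(appliance)))
        y.append(appliance)

classifier = RandomForestClassifier(random_state=0).fit(X, y)

# A single 'anonymous' minute of meter data is enough to name the appliance.
mystery_minute = sample_window("washing_machine")
print(classifier.predict([features(mystery_minute)]))  # ['washing_machine']
```

Scale this idea up to months of high-resolution readings combined with overlapping data sets, and usage times, occupancy patterns, and even appliance models become inferable, which is exactly the risk described above.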
In recent years, data scandals and whistleblowers like Frances Haugen and Brittany Kaiser have given us an insight into how, and to what end, our online data is used. The Cambridge Analytica scandal elicited a worldwide media storm when it became clear just how ‘useful’ our private data can be in the hands of big tech companies. Facebook had access to millions of data points from citizens who never gave their permission. This data was subsequently used to create a highly accurate profile of each of these users. With this information, they could calculate what precise information a person needed to be ‘fed’ in order to sway them towards the desired views. This resulted in a calculated information stream to each individual target, who, unbeknownst to them, was being influenced by a company to change their behavior. This technique was used during the 2016 Trump campaign, the Brexit campaign, and in up to 68 countries, including Brazil, Kenya, Ghana, Malaysia, Nigeria, India, and Mexico, to create civil unrest and skew elections according to the will of the highest bidder. The misuse of online data is therefore nothing short of a threat: not only to individuals but also to democracy and global peace.
Even though these ominous facts gained a lot of media attention, not everyone who knows about these data models acts accordingly. Most people who are concerned about their online privacy say it has no influence on their information-sharing behavior. Even if they are highly concerned about the misuse of their data, the majority still does not read the privacy policy of the online platform they wish to access. Most people have become so dependent on their smoothly working, personalized online environment that they trade their privacy for convenience.
Seeing how our online data can be used against us and the worldwide controversy this elicits, you might expect that Big Data companies and social platforms would want to help improve the current privacy policies. This is, however, not the case, and it once again becomes uncomfortably clear that making profits is the undisputed priority of these companies. It is estimated that in 2020, European businesses spent 70 billion euros on advertising, around two thirds of which went straight to Google and Facebook. Tracking software (cookies) enables companies to follow online activity from Facebook and Google to other places on the internet. With that information, they can predict your future commercial behavior and promote the corresponding products across sites through automated links. The consequences of regulation became clear when Apple launched its new privacy setting, which required people to give explicit approval before this tracking software could follow them to other websites. More than 80 percent refused to do so, and Facebook, Snapchat, Twitter, and YouTube subsequently lost almost 8.5 billion euros. If the mere tracking of our online activity generates a loss of 8.5 billion, it is no wonder the Big Data companies will do whatever it takes to keep hold of their precious inventory: us.
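As a rough illustration of why a single shared tracking identifier is worth billions, consider this hypothetical sketch. Each site embedding the same tracker reports page visits under one cookie ID, and simply joining those events yields a cross-site interest profile; the sites, IDs, and topics below are invented for the example:

```python
from collections import defaultdict

# Hypothetical tracking events: (cookie_id, site, page_topic).
# In reality these arrive via third-party cookies or tracking pixels
# embedded on otherwise unrelated websites.
events = [
    ("cookie-42", "social-network.example", "fitness"),
    ("cookie-42", "news-site.example", "mortgages"),
    ("cookie-42", "shop.example", "running shoes"),
    ("cookie-42", "forum.example", "pregnancy"),
    ("cookie-99", "news-site.example", "politics"),
]

# Joining the events on the shared cookie ID builds a cross-site profile.
profiles = defaultdict(list)
for cookie_id, site, topic in events:
    profiles[cookie_id].append((site, topic))

def predicted_interests(cookie_id):
    """A naive 'prediction': advertise whatever topics were observed."""
    return sorted({topic for _, topic in profiles[cookie_id]})

print(predicted_interests("cookie-42"))
# -> ['fitness', 'mortgages', 'pregnancy', 'running shoes']
```

Blocking the shared identifier, which is effectively what Apple's opt-in prompt did, breaks exactly this join; that is why mass refusal translated so directly into lost advertising revenue.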
AI In The Workplace Increases The Gap Between The Rich And The Poor
Accurately estimating the effects of AI on the job market of the future proves to be a difficult task. It is well established that there is a lack of well-informed models and high-quality data, as well as of an understanding of how cognitive technologies interact with economic dynamics. This leads to disagreements among scientists regarding their estimations. However, we know that the advent of AI in the workplace is just a stepping stone in the history of the digital revolution. Many of the varying predictions about the future are thus based on past job market developments caused by technological interventions. Nonetheless, there seems to be agreement that AI facilitates inequality in societies around the world. Over the past years there has been a gradual decline in the demand for cognitive skills on the job market. As a result, workers are pushed into low-skilled jobs for which they are overqualified, which in turn pushes low-skilled workers out of the labor market. The former middle class is thus further distanced from the upper class. On the other end, we see that it takes fewer and fewer people to create a successful business. Thus, the profit is distributed among fewer people, another important factor that increases the gap between the rich and the poor.
It was found that the implementation of AI in the workplace can be beneficial during times of low inflation, but only if profits are used to generate new jobs instead of increasing salaries or company profit. Unfortunately, this is heavily dependent on company leaders and their usually profit-driven decisions. While it is possible that the use of AI can create new jobs, this is bound to certain situational conditions. But irrespective of whether AI creates jobs or not, the middle class is pushed further down and disparity increases.
The topics mentioned in this article have one thing in common: the pursuit of economic power leads individuals to abuse the system and act with reckless disregard for consumers. Our future thus depends on how proactively and appropriately policymakers respond.
Regulations That Limit The Use Of AI
So, one may be tempted to say that while all these things are indeed bad, the law will be adjusted soon enough to end the abuse of such systems. Let’s take a look at the aftermath of the scandals mentioned above. Do you think that all responsible individuals were brought to justice and that laws were established to prevent something like this from ever happening again?
In the case of our black sheep, Meta, it was revealed that the company engaged in lobbying efforts to divide US lawmakers and “muddy the waters” in Congress following the whistleblower leaks mentioned before. One might argue, though, that this is quite a recent incident and that Meta will still be confronted with actual consequences. So let’s look at and learn from an older case instead: Cambridge Analytica (CA).
The staff of CA, as well as staff of its parent company Strategic Communication Laboratories (SCL), seem to have scattered into a handful of successor companies with the same or similar aims.
SCL’s director of operations, Gaby van den Berg, formed a new company, Emic Consulting Limited. Emic offers training courses based on those developed by SCL to military clients in Canada and, awkwardly enough, the Netherlands. Their aim is to analyze and profile groups to find the best strategy to effectively influence a target audience’s behavior. Emic advertises the need for such “behavioral dynamics methodology” with clearly critical threats to a nation, like illegal immigration. I hope you heard the sarcasm when you read that sentence. Emic is not the only company emerging from the ashes of CA. Another example is AUSPEX International, which applies similar principles with the aim of reducing polarization within society in an ethically grounded way, or so they say. That is essentially the opposite of what CA did. But is it the right way to go if the company applies similar measures and influences people by presenting them with fake news? The forceful reduction of politically diverse viewpoints is as regressive as CA’s procedures.
A number of other successor firms were all acquired by a holding company called Emerdata Limited. Even though the scandal was six years ago, the people responsible have been punished leniently: for instance, the only consequence Alexander Nix has to face is that he is not allowed to be the director of any UK-based company for another five and a half years. He is one of three individuals who were responsible in the CA scandal and who now own major parts of Emerdata.
So what do you think: How many regulations that limit the use of AI exist now?
None. Unfortunately, the process of creating and adjusting regulations proceeds slowly and leniently in response to technological advancements and the scandals connected to them. Our system is so inherently flawed that big companies get away with hugely unethical practices, merely because they can afford to.
Ways Forward
Workplace democracy. The highly authoritarian nature of many big tech companies leads to oppression of employees and makes it easier for leaders to make unethical decisions. If all employees have a say in the decisions and procedures of a company, unethical decisions are less likely to happen. Furthermore, individuals who are involved in the decision-making process are more open to the resulting changes, which leads to an increase in overall productivity. Karl Marx already complained, rightfully, that the way companies are structured is dehumanizing to laborers, especially those at the bottom of the hierarchy. Those workers cannot claim ownership of anything they produce and are easily replaceable in the name of profit. A democratic workplace allows employees to reclaim some control over themselves, the working process, and the product of their work. Thus, the creation of democracy in the workplace has moral, economic, and political benefits.
Universal basic income. It goes without saying that the fewer people one needs to create a successful company, the fewer people can depend on jobs provided by companies. If more and more essential work can be performed by AI or similar technologies, we should not have to work just because that is how the system used to function. A large-scale experiment showed that a universal basic income is effective, allowing people to turn to their passions, create something meaningful, and specialize in areas they are actually interested in, free of external pressures.
Education. We propose an addition to the current education system in which people learn about current technology from a young age. This will reduce future ignorance in society about topics such as data privacy. By teaching people about AI algorithms starting at a young age, they become better aware of the consequences AI algorithms can have, thus making it harder for companies to exploit the possibilities of AI.
Independent ethics committees. Making use of strictly predefined rules, an external, less bribable committee could monitor algorithms to see whether they are harmful to users. If that turns out to be the case, the committee would have the power to sanction companies that ignore the rules and regulations.
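What such monitoring could look like in practice is an open question. Below is one hypothetical sketch: a committee-controlled test account interacts with a platform's recommender and measures whether it drifts towards content flagged as harmful for a vulnerable user profile. The recommender stub, the categories, and the 5% threshold are all invented for illustration:

```python
import random

HARMFUL = {"extreme_dieting", "self_harm"}
MAX_HARMFUL_SHARE = 0.05  # example rule: at most 5% harmful recommendations

def platform_recommend(history):
    """Stub for the audited black-box recommender: it tends to repeat
    whatever the test account engaged with most recently."""
    pool = ["fitness", "news", "music", "extreme_dieting"]
    return random.choice(history[-3:] + pool) if history else random.choice(pool)

def audit(steps=1000):
    """Run a vulnerable test profile that engages only with harmful
    content, then measure the share of harmful recommendations."""
    history, harmful_count = [], 0
    for _ in range(steps):
        item = platform_recommend(history)
        if item in HARMFUL:
            harmful_count += 1
            history.append(item)  # engagement mimics a user caught in a loop
    return harmful_count / steps

share = audit()
print(f"harmful share: {share:.1%}",
      "-> sanction" if share > MAX_HARMFUL_SHARE else "-> pass")
```

An audit like this requires no access to the company's source code, only to the platform itself, which is what would make an external committee workable in the first place.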
If we apply these methods to eliminate the harmful effects of AI, we truly believe AI algorithms can have a greatly beneficial impact on our future.