How AI negatively impacts you and everyone else in society

Often without people realizing it, artificial intelligence (AI) is having an ever-growing impact on our society. AI is being integrated into our social media platforms, and it is transforming our businesses, our safety, and our privacy. These changes clearly have positive effects: we are able to communicate more efficiently and solve difficult problems thanks to more powerful systems and algorithms. However, AI also comes with a great deal of danger and serious downsides. This article uncovers the negative impact AI has on society and provides actionable, scalable solutions for the different areas where AI plays an essential part.

What are the consequences of an AI-built social network?

AI is a fundamental part of the social media platforms we use every day. To name a few examples, it is used to target users with advertising and to recognize faces in photos on Facebook, and to offer job recommendations and suggest people you might want to connect with on LinkedIn.

Because of these platforms, people are able to interact with others all around the world and create new connections. Social media platforms can be considered a key communication tool, especially for young adolescents, who can find support there that they might not find otherwise. These platforms also provide an environment to explore friendships and social status, to share ideas and thoughts beyond geographical boundaries, and to unite people around specific goals. This can be considered a positive change in society, and it is safe to say that these platforms would not be the same without AI. However, several studies also show the negative consequences they can have. Peer rejection and a lack of close friends are predictors of depression and negative self-image in adolescents, and the use of social media creates an opportunity for emotional distress from receiving threats or harassing and humiliating comments.

Furthermore, compulsive use of social media platforms appears to result in mental health problems, as it affects all three negative emotional states: depression, anxiety, and stress. AI plays a role in people becoming overly dependent on social media, as these platforms are deliberately designed to nurture an addiction. People experience restlessness when they are unable to check their notifications, giving rise to so-called phantom vibration syndrome. As a result, people experience depressive and anxious feelings, problems regulating phone usage and emotions, and inadequate sleep as well as poor sleep quality driven by FOMO, the 'Fear Of Missing Out'. This shows the serious downsides of integrating AI into social media platforms.

To address these mental health issues, resources are needed for social services and mental health experts to extend their expertise into online spaces and work with other members of the community to identify vulnerable people and intervene before the damage is done. Furthermore, Internet companies such as Google and social networking platforms such as Facebook and YouTube need to continue working with policymakers to create awareness and to develop technologies that help people stay safe.

Should an AI-integrated Workforce Worry Us? 

AI will have a huge impact on the workforce, as automation and AI transform the nature of work. This is especially true in developing countries, where much of the work is repetitive and manual and therefore most vulnerable to automation. AI is accelerating the automation of the factory workforce, which means that manufacturing workers will be replaced by machines once outsourcing manufacturing to these developing countries is no longer the cheapest option. Eventually, machines may even do a better job than humans, as they work with higher speed and higher precision while eliminating human error.

Will artificial intelligence add more job opportunities?

Developing countries are not the only ones that will face the consequences of AI-accelerated automation. Job losses are also expected in middle-income jobs in developed countries, since these involve highly automatable activities such as accounting.

According to the McKinsey Global Institute, in 60 percent of all occupations, at least 30 percent of the constituent activities could be automated. They also expect that, in a midpoint scenario, 15 percent of the global workforce (about 400 million workers) could be displaced by automation by 2030. If the pace and scope of adoption are in the highest gear, they estimate that 30 percent, or 800 million workers, will be displaced.
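
A quick back-of-envelope check in Python of what these McKinsey percentages imply (the 2.7-billion workforce base is derived from the quoted figures, not taken from the report directly):

```python
# Back-of-envelope check of the McKinsey figures quoted above.
# The 800-million figure corresponds to 30% of the global workforce,
# which implies a workforce base of roughly 2.7 billion people.

displaced_high = 800e6   # workers displaced in the fastest-adoption scenario
share_high = 0.30        # 30% of the global workforce

workforce = displaced_high / share_high
displaced_mid = 0.15 * workforce  # the 15% midpoint scenario

print(f"Implied global workforce: {workforce / 1e9:.2f} billion")
print(f"Midpoint scenario (15%): {displaced_mid / 1e6:.0f} million workers")
```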

Even though new employment opportunities will emerge in these changing workplaces, many of the replaced workers will not be able to work alongside these machines, because they simply lack the technological skills. Adjusting to the new jobs will require reskilling and upskilling the workforce through further education. This means education systems will have to evolve, people will need to learn for a changing workplace, and the workplace itself will need to adapt to this new era in which people work alongside machines. Furthermore, governments will need to consider stepping up investments that stimulate demand for the middle-income jobs most affected by automation.

Does the technological side of AI influence our society?

According to Moore’s law, the number of transistors on a microchip doubles roughly every two years, and every day quintillions of bytes of data are generated. This means the amount of data on the internet grows every day. IBM has stated that 90 percent of the data now on the internet was created since 2016. In 2016, an estimated 44 exabytes of data were generated per day, and by 2025 this is expected to reach 463 exabytes per day: an enormous increase. This growth carries risks on the technological side of society. The data being generated can be very valuable, for example a user's personal information, which is why people care so much about their personal privacy. The growth of data volumes combined with the development of AI systems brings enormous risks with it, and people fear that their privacy will be violated. This fear is well grounded when one looks, for example, at facial recognition and recommendation systems.
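
To see what these figures imply, here is a minimal growth-rate calculation, assuming the per-day exabyte estimates above:

```python
import math

# Implied growth rate of daily data generation from the figures above:
# 44 exabytes/day in 2016 growing to a projected 463 exabytes/day in 2025.
eb_per_day_2016 = 44.0
eb_per_day_2025 = 463.0
years = 2025 - 2016

annual_growth = (eb_per_day_2025 / eb_per_day_2016) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_growth)

print(f"Implied annual growth: {annual_growth:.0%}")        # ~30% per year
print(f"Implied doubling time: {doubling_time:.1f} years")  # ~2.6 years
```

Interestingly, the implied doubling time of daily data generation is close to the two-year doubling that Moore's law describes for transistor counts.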

Data privacy risks when using AI

With the rise of AI systems, facial recognition systems are also growing. These are systems that can recognize a person even when they are wearing a hat or not looking straight into the camera. Facial recognition is already in use in the United States and China: in the United States, these systems are deployed in airports and cities, while in China they are used as a tool of state control. With such systems spread across the world, people's privacy is invaded; people are watched by cameras everywhere, for example in train stations, shopping malls, and other public places.
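
As a minimal sketch of how such a match works, here is an example using the open-source `face_recognition` Python library; the image filenames are placeholders, not real files:

```python
# Minimal sketch of a facial recognition match, using the open-source
# `face_recognition` library (pip install face_recognition).
# The image paths below are placeholders for illustration.
import face_recognition

# Encode a known face into a 128-dimensional feature vector.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode faces captured by, say, a surveillance camera frame.
candidate_image = face_recognition.load_image_file("camera_frame.jpg")
candidate_encodings = face_recognition.face_encodings(candidate_image)

# Compare: a small enough distance between encodings counts as a match,
# even if the person wears a hat or is not looking straight at the camera.
for encoding in candidate_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Match!" if match else "No match.")
```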

Besides facial recognition systems, recommendation systems are also growing. In a recommendation system, every user, item, and client has an ID. With the help of the system, a user can retrieve a recommended item ID, which can be a related item, a similar item, or a related user. In this way, items and other users are recommended to the user. However, these recommendation systems come with several risks. First of all, there are direct risks: private information about a person is leaked to someone who was not supposed to receive it, which can lead to identity theft. Besides that, information retrieved from recommendation systems can enable re-identification: with information about a person from one system, that person can be identified in another system. This re-identification violates people's privacy, because people want to keep the different spheres of their lives separate.

Furthermore, the outputs of recommendation systems can also function as quasi-identifiers. A quasi-identifier is a set of data points that are not unique on their own but together single out a unique individual; for example, zip code, gender, and birthdate together can identify a user. If a quasi-identifier is linked against another (public) dataset, the privacy of the person's information is violated, because more is now known about them than intended. In short, all of these risks create a justified fear of unwanted exposure of personal information. Moreover, recommendation systems work better with more data; unfortunately, this means that as more data is used, the probability of unwanted exposure also increases.
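
To make the quasi-identifier risk concrete, here is a small illustration, with entirely made-up records, of how linking two datasets on zip code, gender, and birthdate can re-identify a person:

```python
import pandas as pd

# Fabricated example records, for illustration only.
# An "anonymized" dataset released by a service: no names, but it keeps
# zip code, gender, and birthdate alongside sensitive viewing history.
anonymized = pd.DataFrame({
    "zip": ["1012", "1012", "3511"],
    "gender": ["F", "M", "F"],
    "birthdate": ["1990-04-02", "1985-11-23", "1990-04-02"],
    "watched": ["documentary", "thriller", "romance"],
})

# A public dataset (e.g. a voter roll) that does contain names.
public = pd.DataFrame({
    "name": ["Alice Jansen", "Bob de Vries"],
    "zip": ["1012", "1012"],
    "gender": ["F", "M"],
    "birthdate": ["1990-04-02", "1985-11-23"],
})

# Joining on the quasi-identifier (zip, gender, birthdate) re-identifies
# the "anonymous" users and links their names to the sensitive data.
linked = anonymized.merge(public, on=["zip", "gender", "birthdate"])
print(linked[["name", "watched"]])
```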

However, facial recognition and recommendation systems are also very useful. Many people enjoy the recommendation function on Netflix, and many experience the convenience of unlocking their iPhones with facial recognition. As for the privacy issues, a user's privacy can be guaranteed if the algorithm used by the system ensures that no single record alters the probability of any outcome, a guarantee formalized as differential privacy. This way the algorithm guarantees privacy and the system is resistant to re-identification. Besides that, systems like facial recognition should only be used for law enforcement.
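
A minimal sketch of one standard way to achieve this guarantee is the Laplace mechanism, which adds calibrated noise to a query result so that any single record has only a bounded effect on the output; the dataset and epsilon value below are illustrative:

```python
import numpy as np

def private_count(data, predicate, epsilon=0.1):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one record
    changes the true count by at most 1. Adding Laplace(1/epsilon) noise
    therefore bounds how much any single record can shift the output
    distribution.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative toy dataset: ages of users of some service.
ages = [23, 35, 41, 29, 52, 38]
print(private_count(ages, lambda age: age > 30))
```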

The malicious use of AI

Not only can AI systems violate the privacy of users' data; AI systems can also be misused. Autonomous weapons are a good example of AI systems at high risk of misuse. Autonomous weapons are weapons that select and engage targets without a human operating them. This is very dangerous and raises many concerns, which fall into three fields: the ethical, the legal, and the security field. Firstly, in the ethical field, the biggest concern is whether an algorithm should make life-and-death decisions at all; a machine cannot weigh the value of a human life. Secondly, in the legal field, it is unclear who is responsible if an autonomous weapon kills someone. Lastly, in the security field, accidents can happen with autonomous weapons, and such accidents can eventually lead to escalations of conflicts. These accidents are called "accidental misuse".

However, there is also another kind of misuse, called "intentional misuse", carried out for example by hackers or tyrants. In this context, hackers are computer criminals: they gain access to information they are not authorized to see. It is even possible for hackers to turn to AI and use it to weaponize malware. All of this ultimately threatens digital security, and not only digital security: political security (e.g. profiling and repression) and physical security (e.g. non-state actors weaponizing consumer drones) are threatened as well. This misuse is only growing: decades ago most data losses were caused by human error, while nowadays they are more often the result of hacking.

US government hack is 'significant', FBI says, as Russia is blamed for attacking the Treasury and other federal agencies

The misuse of AI in autonomous weapons can only be addressed by making laws on when such weapons may be used. To counter the misuse of data, we must fight AI with AI: defending against misuse is best done by automating our cyber-defense systems. If more companies pursue this strategy, data can be defended against hacking.
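
What "fighting AI with AI" could look like in practice: a minimal sketch of automated anomaly detection on login activity using scikit-learn's IsolationForest; the feature set and data are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented example features per login event:
# [hour of day, number of failed attempts, megabytes downloaded]
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9], [13, 0, 18], [15, 1, 11], [10, 0, 14], [12, 0, 16],
])

# Train an unsupervised anomaly detector on normal behaviour.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_logins)

# A suspicious event: 3 a.m. login, many failed attempts, bulk download.
suspicious = np.array([[3, 12, 900]])
print(detector.predict(suspicious))  # -1 means flagged as an anomaly
```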

What should be changed?

As reviewed in this article, the integration of AI into our society has some serious downsides. It poses threats not only to our mental health but also to our workforce. To change this outcome, we should ensure the availability of resources to help the people most vulnerable to mental health problems caused by social media use. Furthermore, we should ensure that social networking platforms continue to create awareness and to develop technologies that help people stay safe. Concerning the impact on the workforce, we must ensure that education enables people to adjust to the new AI jobs that will arise as automation takes over routine work. We will need to adapt to this new era in which people work alongside machines.

Besides that, the development of AI systems also comes with technological risks. On the one hand, facial recognition and recommendation systems are developing rapidly and can be very useful in daily life; on the other hand, these systems come with privacy issues and concerns. The privacy of users can be protected by implementing algorithms that ensure no single record alters the probability of any outcome. Besides that, systems like facial recognition should only be used by law enforcement. AI also carries the technological risk of being misused. This misuse of AI must be fought with AI systems: one option is to automate cyber-defense systems so that data can be defended against those trying to hack it.

Implementing these solutions could mitigate the negative impact AI has on society.
