Ethical Issues of AI Technology and How to Solve Them

Many of today’s most impressive and revolutionary inventions are artificial intelligence technologies. They are changing the world at a dramatic pace, making it more connected and time-efficient than ever before: Apple’s Siri, Tesla’s Autopilot, Amazon’s Alexa, Netflix’s and Spotify’s recommendation systems. Predominantly, they change the world for the better. However, these technologies also raise serious ethical issues, and this is gaining more and more attention.


The year 2018 was when the unethical side of AI became widely visible. Major scandals were revealed, such as the Cambridge Analytica scandal, in which a data analytics company used the data of more than 50 million Facebook users to create psychographics, classifications of people based on psychological traits. With these psychographics, targeted ads were deployed to interfere with elections, including the Brexit referendum and the 2016 US presidential election. This led to major court testimony in which Cambridge Analytica was accused and convicted of misleading Facebook users. The Cambridge Analytica scandal shows painfully how AI technologies can harm Western democratic values.

Other examples of unethical AI, or AI scandals, are the self-driving Uber taxi that killed a pedestrian, the fatal crash of Tesla’s Autopilot, Google’s Project Maven, which aimed at deploying Google’s AI technology to analyze US Army drone surveillance footage, and the role of Facebook in the Myanmar genocide. All these issues show how AI technologies can lead to very unethical situations. The negative impacts of these scandals are relatively clear. However, more subtle and less notable negative impacts of AI technology are happening as well, and society is becoming more and more aware of this.


Carly Kind, director of the Ada Lovelace Institute, calls this the third wave of AI ethics and suggests we are moving into a new form of societal engagement:

“Third-wave ethical AI has seen a Dutch Court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically-decided exam results, and US companies voluntarily restrict their sales of facial recognition technology. It is taking us beyond the principled and the technical, to practical mechanisms for rectifying power imbalances and achieving individual and societal justice.”

Risk of replacing humans with AI
In late May 2020, Microsoft announced that it would dismiss around 50 editors who screened and curated news and stories, replacing them with AI systems that had been working alongside human editors for some time. As part of this move, about 27 editors of the British Press Association responsible for maintaining the news homepage on the MSN website and the Microsoft Edge browser were told that they would be let go at the end of June. Although Microsoft specifically stated that the layoffs were not directly related to the decline in news-media advertising revenue caused by the COVID-19 pandemic, it is an indisputable fact that AI technology is being used to reduce the labor costs of news teams.

In the report, an editor who was about to be dismissed emphasized that replacing humans entirely with AI is risky, as only human editors can ensure that sites will not display violent or inappropriate content to users. This is indeed a problem for AI: it is good at recommending content that people will engage with, but it cannot reliably identify potential socio-ethical risks. It has been 27 years since Microsoft launched MSN News in 1995, and at least 800 editors around the world are still screening and recommending news. For now, Microsoft’s news team will continue to pair human editors with AI editors, but the trend of AI replacing human editors may be accelerating significantly.

Importance of freedom of information
According to the renowned academic Manuel Castells, power, whether that of the state or of media companies, is founded on the control of communication and information. Controlling communication and information has always been the objective of power contests; freedom of information, one might say, checks power in the process of information exchange. The practice of news transmission, in particular, is predicated on the human demand for freedom of information, a right that many nations consider a fundamental human right.

AI technology is powered by large volumes of data. Algorithmic technology perfectly matches the demand for precisely targeted, personalized news consumption, and it makes long-standing predictions about personalized media easy to realize. This appears to be an improvement, but it also buries underlying hazards. As McLuhan stated, “in the beginning, we shaped the tools, and eventually the tools, in turn, shaped us.” As a result, algorithmic technology can obstruct the free flow of information.

Information Cocoon effect
Furthermore, in the age of artificial intelligence, algorithmic technology restricts the exchange of information gradually, like the proverbial boiling frog, and produces the information-cocoon effect. According to this effect, people’s information-reading habits are driven by their interests: if they keep receiving information from algorithmic recommendations, they wrap themselves in a standardized, procedural cocoon.

The user only sees what he wants to see and only hears perspectives with which he agrees, while voices with similar viewpoints are repeated, resulting in a severely restricted information environment. These risks deprive individuals of their right to self-determination, undermine information diversity, and manipulate the basis of information dissemination, generating the monopoly of knowledge, brought about by mechanization and technological advancement, that Harold Innis warned about. Although algorithmic technology erodes communication rights only gradually, the free flow of information is critical to the protection of human rights.
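The feedback loop behind the cocoon effect can be illustrated with a minimal, entirely hypothetical simulation: a feed that always serves the topic a user currently likes most, where every view reinforces that interest, quickly collapses the diversity of what the user sees compared to a random feed. All parameters here are invented for illustration.

```python
import math
import random

def entropy(counts):
    """Shannon entropy (bits) of a topic-count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

def simulate(steps=500, topics=10, greedy=True, seed=0):
    """Simulate news consumption over `topics` topics.

    With greedy=True the feed always serves the topic the user currently
    likes most, and every view reinforces that interest (a feedback loop).
    With greedy=False topics are served uniformly at random."""
    rng = random.Random(seed)
    interest = [1.0] * topics            # start with uniform interests
    seen = {t: 0 for t in range(topics)}
    for _ in range(steps):
        t = (max(range(topics), key=lambda i: interest[i]) if greedy
             else rng.randrange(topics))
        seen[t] += 1
        interest[t] += 0.1               # consumption reinforces interest
    return seen

print(f"greedy feed entropy: {entropy(simulate(greedy=True)):.2f} bits")
print(f"random feed entropy: {entropy(simulate(greedy=False)):.2f} bits")
```

In this toy model the greedy feed converges on a single topic (entropy near zero), while the random feed stays close to the maximum of log2(10) ≈ 3.3 bits. Real recommenders are far more sophisticated, but the reinforcement dynamic is the same concern.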

Infringement of privacy
Personalized and tailored services employ big data and algorithmic technologies to identify and gather private information, so as AI and algorithmic technology advance, how to govern privacy protection must be explored. For example, Facebook “likes” can reveal a great deal about a person. In a 2013 study, researchers demonstrated that they could use Facebook “likes” to accurately predict personal information such as a user’s gender and ethnicity. The researchers were also able to infer a person’s age, intelligence, and religious and political beliefs.

The study was based on information from 58,000 volunteers who shared their Facebook “likes”; the findings were published in the Proceedings of the National Academy of Sciences. Many companies encourage users to log in to their websites with Facebook or other social media accounts, which in turn gives those companies a complete picture of the user’s birthday, friend list, schools attended, and other personal information. To sell products and enhance services, marketers frequently exploit Facebook “likes” and other digital data records.
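How “likes” can predict a private trait is, at heart, a standard supervised-learning problem. The sketch below, on entirely synthetic data (the page names and correlations are invented for illustration, not taken from the study), trains a plain logistic regression to predict a binary trait from binary like-vectors, mirroring the statistical approach such studies use.

```python
import math
import random

# Toy, entirely synthetic data: each "user" is a binary vector over five
# hypothetical pages plus a binary trait we try to predict.
PAGES = ["page_a", "page_b", "page_c", "page_d", "page_e"]

def make_data(n=400, seed=1):
    """Trait holders like the first two pages more often -- a stand-in
    for the kind of statistical signal the 'likes' study exploited."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        trait = rng.random() < 0.5
        likes = [1 if rng.random() < (0.8 if trait and i < 2 else 0.3) else 0
                 for i in range(len(PAGES))]
        data.append((likes, 1 if trait else 0))
    return data

def train_logistic(data, epochs=200, lr=0.1):
    """Plain logistic regression trained with stochastic gradient descent."""
    w, b = [0.0] * len(PAGES), 0.0
    for _ in range(epochs):
        for x, y in data:
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(trait = 1)
            err = p - y                      # gradient of the log-loss
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def accuracy(w, b, data):
    hits = sum((b + sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1)
               for x, y in data)
    return hits / len(data)

train_set, test_set = make_data(seed=1), make_data(seed=2)
w, b = train_logistic(train_set)
print(f"held-out accuracy: {accuracy(w, b, test_set):.2f}")
```

Even with just five noisy binary features, the model predicts the hidden trait well above chance on held-out data, which is precisely why seemingly innocuous behavioral traces are privacy-sensitive at scale.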

The researchers expressed concern about the potential for digital records and personal information to be misused. They said that corporations, governmental agencies, and even one’s Facebook friends may collect information that the user did not wish to disclose. As a consequence, Facebook should take the appropriate steps to ensure that the personal information of its users and their friends is effectively protected.

Risk for bias in AI
AI systems are only as good as the data we put into them, and poor-quality data may contain implicit racial, gender, or ideological bias. Imagine the impact on a credit institution’s brand if it were found to be frequently rejecting applications because of bias in AI training. It is therefore critical to develop and train these systems on unbiased data, and to develop easily interpretable algorithms. Several research groups, including IBM Research, are already developing ways to reduce the biases that may exist in training datasets.
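A common first step in such work is simply to measure the bias. The sketch below computes the demographic-parity gap, the difference in approval rates between two groups, on invented audit data; the groups, rates, and "mitigation" here are hypothetical placeholders, and real fairness toolkits (such as IBM's AIF360) provide many more metrics and mitigation methods.

```python
import random

def demographic_parity_gap(decisions):
    """Absolute difference in approval rate between groups "A" and "B".

    decisions: list of (group, approved) pairs."""
    rates = {}
    for g in ("A", "B"):
        outcomes = [ok for grp, ok in decisions if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Invented audit data: a model trained on skewed historical records
# approves group A far more often than group B; a mitigated model
# approves both groups at roughly the same rate.
rng = random.Random(0)
biased = ([("A", rng.random() < 0.7) for _ in range(500)]
          + [("B", rng.random() < 0.4) for _ in range(500)])
mitigated = ([("A", rng.random() < 0.55) for _ in range(500)]
             + [("B", rng.random() < 0.52) for _ in range(500)])

print(f"gap before mitigation: {demographic_parity_gap(biased):.2f}")
print(f"gap after mitigation:  {demographic_parity_gap(mitigated):.2f}")
```

Quantifying the gap turns a vague worry ("the model might be biased") into a number that can be monitored, reported, and reduced, which is exactly what the credit-institution example above would require.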

Why is unethical AI happening, and what is the industry’s response?
We may conclude that many different unethical uses of AI have already been identified. But the question remains: on a fundamental level, why do these unethical issues occur? The most obvious explanation is economic pressure. When enough money is on the table, people tend to make unethical decisions, even when the harm of the outcome is known. A good example of this is the tobacco industry. It has been known for years that tobacco leads to all kinds of undesirable outcomes, such as cancer, cardiovascular disease, and infertility. Still, governments and societies have not succeeded in completely banning this product and industry.

But is the purely profit-driven vision of big tech companies not short-sighted? What if these scandals keep occurring? Will governments and society keep quiet?

If we look a little closer at the way big tech companies reacted to the controversy around some of these issues, we can learn something about how companies deal with the consequences of unethical AI. For example, after Google’s Project Maven became public, more than 3,000 of its employees signed an open protest letter against the project. Under this pressure, Google decided to stop Project Maven. In response, Google also developed a code of ethics defining the ethical principles the company supports and pursues. This in itself suggests that Google is making efforts to do better on ethical issues in the future.

However, some are critical of Google’s response. For example, Cansu Canca, an AI ethics expert, says: “Google’s AI principles only made general and vague claims”. According to Canca, these principles can mean anything and are mainly built to avoid regulation rather than to achieve their intended target, avoiding unethical AI.

Moreover, Canca argues that big tech companies are not effectively integrating AI ethics into their workflows. Ethics boards are most often too far removed from the researchers and developers, so the people responsible for ethics in a company lack the power and knowledge to be effective decision-makers. Also, a flat list of principles is not an effective way to deal with ethical issues, since it has no hierarchy: ethics boards using such a list will inevitably run into conflicts where multiple ethical values collide.

Another perspective on dealing with ethical issues is regulation. A big player in the tech industry, Brad Smith, Microsoft’s chief legal officer, is pleading for regulation of the industry. At the New Work Summit in Half Moon Bay, Smith argued: “We don’t want to see a commercial race to the bottom” and that “law is needed”.

But we also need to be clear about the consequences of regulation. If we look, for example, at the medical industry, we see a successful but very slow system of medicines legislation. Pharmaceutical companies must proceed through complex and very slow processes of clinical trials and safety monitoring. This process greatly reduces the number of scandals and ethical issues, but it also slows down the rate of development.

Furthermore, society also benefits from and relies heavily on AI technologies. Take for example Tesla’s Autopilot or AI applications in medicine that help detect many diseases; such technologies can be life-saving, and it is hard to imagine living without them, not to mention the influence of social media on our daily lives. This also displays the tremendous power held by the companies that develop these technologies. Regulation is therefore hard, if not impossible.

What can we do about it?
According to Canca, a culture shift is needed. Ethical AI should not be a yes-or-no stamping process; ethics should be an essential part of research and development. This means contracting full-time (consultant) philosophers and ethicists as part of the project team, and training all company employees to recognize and address ethical issues, since the developers of an AI product are often the people who know the product best. Ethics must be understood. Luckily, as Canca mentions, the wheel does not need to be reinvented: to understand AI ethics, we can draw on the field of applied ethics, which has existed for over two millennia. Of course, modifications have to be made to align with the problem at hand.

Secondly, systematic ethical analysis should be part of a company’s product-development workflow, even after a product is deployed. Not all ethical issues can be predicted; when they arise, they should not only be addressed as soon as possible but actually solved. The sooner the better, so if needed, already-deployed technologies should be updated. Finally, companies need to define their ethical standards. They should not merely state their ethical principles vaguely and in general terms, as Google did; they should also bring hierarchical order to the different principles to avoid ethical conflicts. Again, this process can be guided by the existing knowledge of the field of applied ethics.

The AI Ethics Lab has developed a model based on these three concepts: understanding ethics, ethical analysis, and developing a company’s ethical values.

If AI companies keep refusing to take ethics seriously and do not take responsibility for creating ethical AI, regulation will be inevitable. We will move towards a situation where products need approval before they are allowed on the market, as in medicine, probably under a federal or international organization like the Food & Drug Administration (FDA) or the European Medicines Agency. When this happens, not only big tech companies will suffer, but society as a whole. Therefore, it should be avoided at all costs: AI companies have to take responsibility. The AI Ethics Lab’s approach to unethical AI, embedding ethical analysis in product research and development, is an idea whose time has come, and an idea worth spreading.
