If AI Systems Decide Our Future – We Are Doomed

What Exactly Is the Problem?

Humans have a limited capability to process large amounts of data and draw sensible conclusions from it. AI seems to address this issue: it can give us better insight into the available information and thereby help ensure that the decisions we base on it are the most reasonable ones. People rely more and more on AI recommendations, analyses, predictions, and autonomous decisions, which raises a question: what is the next stage? Should we delegate even more decisions to systems that sometimes seem to know us better than we know ourselves?

First, it is worth exploring the idea of autonomy, as it is directly connected with human decision-making. An individual has autonomy when they have the capacity to live their life according to reasons and motives that originate in themselves and are not a product of manipulation or other external factors. A similar definition applies to independent decision-making. One may notice that we are always under some kind of influence; the real question is about its scale. As we give AI more and more of our data to process and let it suggest the most probable outcome in a given scenario, we steadily increase its influence on our final decisions. This may have its good sides, but many observations point to serious issues with delegating such power and autonomy. If we let AI lead our lives on an even greater scale than it does today, we may be in great trouble.

Who Creates the Algorithms and Why?

YouTube recommendations are driven by deep learning technology from Google Brain, whose framework was open-sourced as TensorFlow.
Photo: Unsplash

Most of us are familiar with recommendations on platforms such as YouTube or Spotify. Based on our preferences, these systems are supposed to recommend content that may interest us. However, for the AI to learn what doing a good job means, it needs a metric, a reference that guides it. The choice of metric can totally change how the AI “understands” its performance. According to one article, the recommendation algorithm implemented in YouTube was based on user engagement, which led to various consequences. Borderline content turned out to be more engaging, leading to recommendations of videos about conspiracy theories, fake news, flat-Earth claims, and so on. Besides recommending content of questionable quality, the other issue mentioned in the article is that many people get caught in a loop of clicking through more and more videos as they play one after another. Former developers suggest that the algorithm’s primary goal is to make people addicted to the platform, spend a lot of time there, and watch many advertisements along the way. As AI gets more refined, users should become more aware of the real goals of the systems they use. Profit for the corporations is apparently the most important outcome, regardless of the harmful side effects the platforms may bring to society.
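To make this concrete, here is a minimal, hypothetical sketch of how the choice of metric reshapes what a recommender ranks first. The video titles, watch-time estimates, quality scores, and scoring functions are all invented for illustration; real systems such as YouTube’s are vastly more complex.

```python
# A toy recommender: the same candidate videos, ranked under two different
# metrics. All numbers are invented for illustration.

videos = [
    # (title, predicted_watch_minutes, editorial_quality_score 0-1)
    ("Calm explainer on climate data", 4.0, 0.9),
    ("Outrage-bait conspiracy clip",   9.5, 0.1),
    ("Flat-Earth 'documentary'",       8.0, 0.05),
]

def rank(videos, score_fn):
    """Sort candidates so the highest-scoring video is recommended first."""
    return sorted(videos, key=score_fn, reverse=True)

# Metric 1: pure engagement -- optimize predicted watch time only.
by_engagement = rank(videos, lambda v: v[1])

# Metric 2: engagement discounted by a quality signal.
by_quality_adjusted = rank(videos, lambda v: v[1] * v[2])

print([v[0] for v in by_engagement])        # the borderline content wins
print([v[0] for v in by_quality_adjusted])  # the calm explainer wins
```

Nothing about the underlying model changes between the two rankings; only the definition of “doing a good job” does, and that alone flips which video the user sees first.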

Wearable technology is the general name for devices that can be worn on the body and are loaded with smart sensors that monitor its movements.
Photo: Unsplash

Music or video recommendations are not the only areas where AI may influence our decision-making. So-called wearables are another source of potential risk, where human decision autonomy may be heavily influenced by corporations. As one article describes, these devices carry risks of their own. Because they gather and process a lot of the users’ data, they can also recommend health habits and judge the quality of, for example, the user’s sleep. To decide whether they sleep well or feel well, people normally use their intuition, the feelings they have “inside” their minds. Wearables act as external judges whose access to our bodies’ measurements is presented as superior to our personal connection to our bodies. This brings the risk of becoming disconnected from, and no longer trusting, our own judgments about ourselves.

Additionally, the way these devices are designed may lead to addiction. It is similar to other popular applications where “gamification” reward systems encourage the user to do certain things. It is like giving a child candy for good behavior; in the case of apps, the candy takes the form of points or other virtual resources, as the sketch below illustrates. This may not only addict people but also make them objectify their own lives: if every action or habit is judged by the virtual coins it earns, much like money, then people become more like products than living beings.

This issue is strongly connected with another aspect that AI decision-making may amplify: the total productivity of society. Wearables encourage constant improvement, as if the way people are now is not enough. If we hand the authority to decide how “enough” we are to AI systems built for the profit of the companies that made them, we can gradually lose our autonomy and self-worth. As advertisements become more and more refined and shape our desires in often subconscious ways, we should pay more attention and care to the final decision-making step. Making decisions is challenging, but this is what makes humans conscious and autonomous beings. Our minds and decision-making processes seem to be the last line of defense in an increasingly automated world.
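A toy sketch of the gamified reward loop many habit and fitness apps use; the habits and point values are invented, but the pattern is what matters: every tracked behavior is converted into a virtual currency.

```python
# Hypothetical point values: each tracked behavior earns virtual "candy".
REWARDS = {"slept_8h": 50, "walked_10k_steps": 100, "logged_meal": 10}

def award_points(balance: int, actions: list[str]) -> int:
    """Convert each tracked behavior into points added to the user's balance."""
    for action in actions:
        balance += REWARDS.get(action, 0)
    return balance

balance = award_points(0, ["slept_8h", "walked_10k_steps"])
print(balance)  # 150 -- the user's day reduced to a single score
```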

AI Also Makes Mistakes

In the modern world, decision-making systems are often preferred over human decisions because they lack some of the flaws and shortcomings that humans have. Algorithms play a crucial part in the operation of decision-making systems; however, there have been growing worries about algorithmic bias. We should seek to use the most impartial algorithms possible. But what is algorithmic bias, exactly? Algorithmic bias describes systematic and repeatable errors in a computer system that produce unfair outcomes, such as privileging one arbitrary group of users over another.

The prospect of algorithmic bias is especially concerning for autonomous or semi-autonomous systems, as these systems do not require the presence of a human who can identify and adjust for it. Furthermore, as systems get more complex, it may become more difficult to govern how they make judgments, judgments that may be heavily shaped by bias. A question that has not received the necessary attention is: what moral criteria should be applied when autonomous systems face life-or-death judgments about how to distribute risk among humans?

Data That Impacts Your Future

In a data-driven environment, building the knowledge and aptitude to recognize biased algorithms, as well as determining the most appropriate ways of rectifying them, may be difficult.
Photo: Unsplash

Risk assessment algorithms show how biased decision-making algorithms can affect people’s lives. Over the last two decades, predictive risk assessment algorithms have been used to shape the fates of millions of people in the criminal justice system, deciding whether a defendant should be jailed pending trial or released on parole based on an algorithmic risk estimate. Researchers found that one such system was wrong in almost half of its predictions. Although these algorithms have been acclaimed as “more objective” or “fairer,” they show systematic biases against specific ethnic groups or genders, because they encode wider systemic inequalities. With the system’s accuracy just a few percentage points higher than that of people with no prior judicial expertise, several judges wondered whether they should avoid using algorithms entirely.
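One common way researchers surface this kind of bias is to compare error rates across groups rather than overall accuracy. Below is a minimal sketch of that idea on invented records: it computes the false positive rate, the share of people flagged high-risk who did not in fact reoffend, separately per group.

```python
# Invented case records for illustration only.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", True,  True), ("A", False, False),
    ("B", True,  False), ("B", False, False), ("B", False, True), ("B", False, False),
]

def false_positive_rate(records, group):
    """Among people in `group` who did NOT reoffend, how many were flagged high-risk?"""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(g, round(false_positive_rate(records, g), 2))
# Group A: 0.67 vs group B: 0.33 -- similar overall accuracy can still hide
# very unequal error rates, i.e. one group is wrongly flagged twice as often.
```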

Inequality and injustice in algorithms have many causes, but biases in organizational decisions about individuals are frequently among them. New types of decision-making have produced numerous examples of algorithms establishing or reinforcing historical biases, or even creating new kinds of bias and unfairness. It is obvious that actions to anticipate risks and prevent destructive outcomes are required.

And it does not end there: wrong and uncontrolled AI decision mechanisms may not only put you in jail but also gamble with your health. A study showed that an algorithm designed to predict which hospital patients would develop pneumonia complications performed well overall. But it made one serious error: it instructed doctors to send asthmatic patients home even though they were in the high-risk category. The model had learned from historical data in which asthmatic patients fared well precisely because human doctors automatically sent them to intensive care, ensuring they received the necessary advanced treatment.
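A toy sketch of that confound, with invented numbers: when treated patients’ good outcomes are recorded without the treatment that caused them, asthma looks protective to a naive risk model.

```python
from collections import defaultdict

# Invented historical records: asthmatics rarely died BECAUSE they got ICU care,
# but the treatment itself is not recorded alongside the outcome.
history = [
    # (has_asthma, died)
    (True, False), (True, False), (True, False), (True, False),
    (False, False), (False, True), (False, False), (False, True),
]

outcomes_by_group = defaultdict(list)
for has_asthma, died in history:
    outcomes_by_group[has_asthma].append(died)

for has_asthma, outcomes in outcomes_by_group.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"asthma={has_asthma}: observed death rate {rate:.0%}")

# asthma=True: 0% vs asthma=False: 50% -- a naive model trained on this data
# concludes asthma means LOW risk and sends those patients home, missing that
# the low death rate was caused by the intensive care they received.
```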

Using algorithms to help society rather than harm it is extremely important in the decision-making mechanisms used by states, because most of the time the future of an individual is determined by the decisions a government takes about them.

According to a UNESCO meeting document, decision-making algorithms pose serious problems. It must always be possible to assign ethical and legal responsibility to physical persons or existing legal entities at any point in an AI system’s life cycle. Consequently, human supervision involves not just individual human oversight but also public oversight, as needed. It is possible that humans will have to rely on AI systems in the future for reasons of efficacy, but the decision to hand over control in limited circumstances remains a human decision, as it is humans who use AI systems in decision-making and acting. An AI system can never take over ultimate human responsibility and accountability.

Okay, But Who Is to Blame?

Every decision-making process has bias, but in the digital industry, programmers’ biases may have particularly negative consequences for the people their products are designed to help.
Photo: Unsplash

So, who can we blame for a possible error? A human developer must code the rules that guide a computer algorithm, as well as the variables it will use. Every individual has conscious and unconscious biases that influence everything they do, and AI programmers and their code are no exception. According to research, today’s computer scientists are predominantly male and white. In the case of using AI to rank and select college applicants, coders with this profile may lack the contextual and cultural knowledge necessary to understand the life of a female student or a person of color; these developers’ insight into schooling and into what makes a successful applicant may be limited to their own personal experience. This is not limited to gender and race, as a variety of other prejudices may be at play, such as socioeconomic class. On the whole, the humans who create and program AI are ultimately the ones with the most power to change it.

As algorithms and the datasets that feed them get more complicated, the risk increases. In a data-driven environment, building the knowledge and aptitude to recognize biased algorithms, and determining the most appropriate ways of rectifying them, may be difficult. What is required is a group of people able to navigate between the analytical approaches that reveal prejudice and the ethical and legal concerns that guide the best responses. Some organizations may manage this internally, while others will need to consult outside specialists. According to an independent report, senior decision-makers in organizations must understand the trade-offs that come with deploying an algorithm. They should require and expect a sufficient explanation of how an algorithm works, so that they can make educated judgments about how to balance risks and opportunities when using it in a decision-making process.
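What such an explanation might look like in the simplest case: with a linear scoring model, a decision-maker can read off exactly how much each feature contributed to a single decision. The features, weights, and applicant below are invented for illustration.

```python
# Hypothetical weights of a simple, fully inspectable linear scoring model.
weights = {"income": 0.4, "years_employed": 0.3, "prior_defaults": -0.8}

def explain(applicant: dict) -> None:
    """Print each feature's contribution to this applicant's score."""
    total = 0.0
    for feature, weight in weights.items():
        contribution = weight * applicant[feature]
        total += contribution
        print(f"{feature:>15}: {contribution:+.2f}")
    print(f"{'score':>15}: {total:+.2f}")

explain({"income": 1.2, "years_employed": 2.0, "prior_defaults": 1.0})
# Every number in the decision is traceable, which is exactly what a
# black-box model cannot offer a senior decision-maker.
```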

Algorithmic bias refers to a computer system’s systematic and recurring flaws that result in unjust outcomes, such as favoring one arbitrary set of users over another.
Photo: Unsplash

But what is the root of the bias? Is it possible that algorithms discriminate not because software developers are biased, but because the underlying data used to train them are flawed? Some argue that the issue lies with the scientific idea itself, not with the developers who apply it or the algorithm that implements it. Either way, the root of bad decision-making is rarely a software engineer’s malice; most of the time the cause is bias already present in the data. Consequently, regulators need to demand greater access to the actual underlying algorithms, models, and data used by large tech companies, since those companies are responsible for the process. Today, however, regulatory authorities do not have the power to simply declare that they wish to inspect the data and models.

So What Should Be Done?

The algorithms behind widely deployed systems should be publicly available and known. If corporations hide the true effects and assumptions in their solutions, people have limited possibilities to consciously decide whether they really want to use a given device or program. Governmental and corporate systems responsible for important decision-making should be more transparent. This would allow independent organizations to verify whether a given AI solution works ethically. Additionally, regulations should be put in place so that corporations do not even try to manipulate people, because doing so would be illegal.

As stated in one article, there are different types of human-machine cooperation in decision-making. There are four categories: Human in the Loop (HITL), Human in the Loop for Exceptions (HITLFE), Human on the Loop (HOTL), and Human out of the Loop (HOOTL). The last option is the most dangerous, as here the machine makes all of the decisions. Such autonomy and power are a risk: the outcomes may be biased, and there is no human to evaluate them before their consequences become visible. Additionally, as shown in the article, when the system starts to perform badly there is often no way to tell why or to explain the outcome, as it is usually a black-box algorithm. That makes it harder to state who is responsible for bad decisions. A solution could be to use simple, self-explanatory algorithms, or to keep a human at the final stage of the decision whenever possible (a minimal example of such a setup is sketched below). There should also be more awareness among data scientists about discrimination issues and about how bias can be unconsciously introduced into a model.
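A minimal sketch of the “human in the loop for exceptions” (HITLFE) pattern, assuming a model that returns a label with a confidence score; the threshold and cases are invented. Confident predictions pass automatically, and everything else is deferred to a human reviewer rather than decided by the machine alone.

```python
# Hypothetical cutoff: below this confidence, a human makes the call.
CONFIDENCE_THRESHOLD = 0.9

def decide(case_id: str, label: str, confidence: float) -> str:
    """Route a model prediction: auto-apply if confident, else defer to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-decision '{label}' ({confidence:.0%} confident)"
    return f"{case_id}: deferred to human review ({confidence:.0%} confident)"

print(decide("case-001", "approve", 0.97))  # machine decides
print(decide("case-002", "reject",  0.62))  # human decides
```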

Policymakers must understand how algorithms function in order to guarantee that they are used appropriately. It is quite feasible to create transparent and interpretable algorithms, and doing so is critical to helping companies gain and maintain public trust. Organizational leaders and scientists must make it clear that they are responsible for all decisions taken by their organizations, whether those day-to-day decisions are made by an algorithm or by a team of humans.

The Endless Dilemma, Can We Trust Humans?

Leaving ethical judgments to the control of a machine is a huge dilemma at this moment. Looking at these systems, we can see that humans are one of the biggest causes of biased data. This, without a doubt, emphasizes the relevance of ethics and morality in the use of artificial intelligence. The cycle will continue if individuals and organizations do not value ethics and morals at this stage. As technology advances, humans should keep asking questions and exercise control over this issue.

Governments should adopt a regulatory framework that establishes a procedure for conducting ethical impact assessments on decision-making systems.
Photo: Unsplash

Organizations may use data to shed light on existing processes and determine what is causing bias. Wherever there is a possibility of bias causing harm, there is an ethical need to intervene and produce fairer, better decisions. Clear guidelines should be established for predicting and monitoring bias, evaluating algorithms, and resolving issues. Although there are certain broad principles, the specifics of these standards must be decided for each industry and use case. According to the Data Nutrition Project, creators should consistently label the content of training data sets with defined metadata to help uncover sources of bias. Several research organizations believe that training data sets should include information on how the data was obtained and annotated. If the data includes personal information, summary statistics on geography, gender, race, and other demographic attributes should be supplied. If crowdsourcing is used to label data, basic information about the crowd participants should be supplied, as well as the particular request or instruction they were given.
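A hypothetical sketch of what such metadata might look like in practice; every field name and value below is invented, but it covers the elements the paragraph lists: collection method, annotation process, demographic summaries, and crowdsourcing instructions.

```python
# Invented example of dataset metadata travelling alongside the data itself.
dataset_metadata = {
    "name": "sleep-quality-training-set",
    "collection_method": "opt-in wearable exports, 2021-2022",
    "annotation": "labels produced via crowdsourcing",
    "crowd_instructions": "rate each night's sleep from 1 (poor) to 5 (good)",
    "demographics": {
        "gender": {"female": 0.38, "male": 0.60, "other/unknown": 0.02},
        "regions": {"Europe": 0.55, "North America": 0.40, "other": 0.05},
    },
    "known_gaps": ["underrepresents users over 65", "no data from Africa"],
}

# Anyone auditing a model trained on this set can see at a glance where bias
# might creep in, before the first line of training code is even run.
print(dataset_metadata["known_gaps"])
```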

Additionally, governments should adopt a regulatory framework that establishes a procedure for conducting ethical impact assessments on decision-making systems, particularly for public authorities, in order to predict consequences, mitigate risks, avoid harmful outcomes, facilitate citizen participation, and address societal challenges. However, it is crucial to remember that algorithms cannot do everything. Some parts of decision-making, such as the capacity to be sensitive and adaptable to an individual’s particular circumstances, will continue to rely on human judgment. Although biased decision-making is not unique to AI, as many experts have pointed out, the increasing breadth of AI makes it even more important to address. Indeed, the scale of the problem necessitates scientific answers. Consequently, to ensure that each person receives fair and acceptable outcomes, society should insist that decision-making processes be designed so that human judgment can intervene when necessary, guided by individual evidence.
