How AI can help us make a leap towards a bias-free world

COMPAS software is used across the country to predict future criminals, and it's biased against black people.

You may not know it, but you are biased. Biased by the environment you grew up in, your gender, the things you watch on TV or YouTube, your skin tone, who you follow on Instagram, the type of papers you read, and so on. The fact that you are reading this piece already suggests that you are privileged enough to study or work at a university in the Netherlands. You have also probably, unconsciously or unintentionally, discriminated against someone in your life. This is perhaps not the most comforting paragraph to read, so to make it up to you: everyone in your environment has probably discriminated against someone, often unintentionally or unconsciously, in their life.

You might think: if everyone is biased and discrimination is mostly unintended, can’t we just accept that this is part of our psychology and leave it at that? The answer is no. The fact that everyone is biased, and that most people do not intend to discriminate, actually makes the issue more serious. This unnoticed systematic discrimination, so deeply rooted in our society and tolerated (maybe even accepted) for a long time, is also the most damaging: these small biased treatments pile up into a structural disadvantage for particular groups.

What does this have to do with artificial intelligence (AI)? Well, AI is widely deployed in our society. Some examples are the judgement of loan applications, facial recognition, and diagnostics in healthcare. These widely deployed AI systems contain human bias, which makes their decisions biased as well. These biased decisions can harm the very groups already disadvantaged by human bias.

One example is Robert Julian-Borchak Williams, a 42-year-old black man living in Detroit. In 2020, he was wrongfully arrested because the local police department used AI facial recognition to find the man who stole watches from a retail store. He was handcuffed in front of his house while his wife and children watched. The actual robber was captured on film and was also a black man, but looked nothing like him. AI failed him, and he is probably not the only one. Facial recognition is accurate 99 per cent of the time, if you are a white man. Up to 12 per cent of darker-skinned males are misidentified by the same systems, and for darker-skinned females, this figure rises to 35 per cent.

Robert Julian-Borchak Williams was wrongfully accused and arrested due to the use of AI facial recognition.


In conclusion, we need unbiased AI. What if that existed? And what if that could help us humans recognize and revise our biased thoughts? Is that possible?

When AI originated in the 1950s, the world had high expectations: AI was going to solve many of our (societal) problems. AI has developed in many ways and now has a pervasive and wide reach. However, it still does not meet the expectations society had 70 years ago. The bad reputation AI has developed is partly due to the fact that AI was the source of many scandals over the past few years: surveilling without consent, discriminatory algorithms, assisting the development of automated weapons, polarizing society with fake news and deep fakes, fueling killer robots, and so on.

However, we think we should not give up on the high expectations we had 70 years ago. AI can help humans in many ways and specifically in recognizing and revising their bias. If trust is restored and AI is developed in the right manner, we believe AI can reduce or even solve human bias.

Of course, there are different points of view on this. We will discuss the ones we think are most prominent in the debate on whether and how unbiased AI is possible: the possibility of unbiased AI itself, the problem of biased data, the need for a consensus on what counts as unbiased AI, and the trust and investments needed for a world without biased AI. Finally, we will discuss how all this could help reduce human bias.


Does unbiased AI exist?

Firstly, some people argue that unbiased AI does not and will never exist. According to Lisanne Maatman, it is not possible to develop AI without embedded human bias. She states that machines learn from (biased) human behavior. So even with the best intentions, algorithms fail to eliminate our human biases.

However, we think the fact that almost all AI is developed in North America, Europe, and Asia, by predominantly middle-aged Caucasian or Asian male engineers, plays a big role here: the developers are not at all representative of the people reached by AI, so it is hardly surprising that their algorithms fail to eliminate human bias.

We argue that, with the help of an interdisciplinary, diverse, and therefore world-representative team of engineers, law specialists, ethicists, and human-rights experts, it must be possible to produce unbiased AI. In this process, it is necessary to create transparency and to evaluate again and again in different testing environments, in order to ensure that the algorithms that enter the world do not harm, disadvantage, or discriminate against anyone. Only then is AI created with the best intentions. There are many initiatives working towards this, such as AI Civic Lab, Black in Computing, and Stanford HAI, which agree that it is possible to develop unbiased AI if a lot of work is put in. So, let’s put in the work and make AI for everyone.


Unbiased data: the big solution?

Secondly, apart from the development process, one of the main reasons AI is biased is biased data. Why? Well, AI needs data. This data is collected by companies, governments, and so on, and is then fed into AI systems. The AI most often used nowadays to make predictions about humans is machine learning (ML), which is based on tons of data. ML extracts patterns from huge amounts of data and uses those patterns to make predictions by focusing on certain variables. This way of analyzing data, finding patterns, and making predictions is something that truly outperforms a human brain.

However, ML needs lots of data to make accurate predictions and therefore suffers from a huge data-hunger: the more data, the sharper the predictions. As AI grows, more and more computers are built to store the data that feeds ML systems and improves their performance.

Unfortunately, this does not seem to solve the problem of biased predictions and probably actually makes it worse. ML performs better on more data, so having more and more data helps ML more accurately find the biased patterns embedded in the data to base its predictions upon. So actually, ML puts a magnifying glass on human biases.
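To make the "mirror of society" point concrete, here is a minimal, entirely hypothetical sketch in Python. The dataset, the groups "A" and "B", and the hire rates are invented for illustration; a real ML model is far more complex, but the pattern-extraction step rests on the same principle: whatever disparity sits in the training data is exactly what the model learns to reproduce.

```python
# Hypothetical biased historical hiring data: equally qualified applicants,
# but group "A" was historically hired twice as often as group "B".
# Each record is a (group, hired) pair.
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

def fit(data):
    """A 'model' reduced to its simplest form: learn the historical
    hire rate per group. This is the pattern-extraction step of ML,
    stripped of everything except the bias it inherits."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [hired for g, hired in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = fit(history)
# model["A"] is 0.6 and model["B"] is 0.3: the historical disparity
# is reproduced exactly in every future "prediction".
```

More data would only make this mirror sharper, not fairer: with ten times the records, the learned rates converge even more precisely on the biased historical ones.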

Talk in which Sennay Ghebreab discusses discriminating algorithms and how they function as a mirror of society.


One example is the wrongfully accused Mr Williams from Detroit; another is an Amazon job-applicant-review algorithm that only suggested hiring male candidates; yet another is software that wrongfully accuses crime suspects based on skin tone. And unfortunately, there are many more. These biased ML predictions are often summarized by the saying ‘garbage in, garbage out’, where the garbage represents the biased data. For the same reason, ML is often called a ‘mirror of society’, especially when its data is collected from social media platforms.

The fact that ML provides us with a mirror of society is one of the most frequently heard counter-arguments to our positive view of AI solving the bias problem. As the problem roots in the human biases in society, some argue that the only way to tackle this issue is to change society.

However, we do not agree. While it is true that we live in an unequal society and that change is desirable, it is not true that the problem of biased AI can only be solved at the societal level. Society is not adjustable and flexible enough to tackle inequality at the right pace. Society changes slowly and unwieldily; it does not keep up with the rapid development of AI. This is also reflected in the famous quote by Andrew McAfee of MIT:


‘If you want the bias out, get the algorithms in.’

– Andrew McAfee, MIT


In order to tackle the bias in AI, a lot of tools were first developed to check whether data contains bias. By now, however, technologies are being developed to create unbiased data or fair AI systems. One in-progress study, performed by Google Research, introduces a new mathematical framework for fairness. The researchers propose an algorithm that protects certain groups by re-weighting the training data and equalizing the odds. Even though the study is not finished, the researchers state that these algorithms lead to fair predictions.
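As an illustration of the re-weighting idea, here is a sketch of the classic Kamiran–Calders "reweighing" scheme (a simpler cousin of the equalized-odds approach, not the exact Google Research algorithm), on an invented dataset. Each (group, label) cell gets a weight chosen so that group and outcome become statistically independent in the weighted data:

```python
from collections import Counter

# Hypothetical biased training set of (group, label) pairs:
# group "A" has twice the positive rate of group "B".
data = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

def reweigh(data):
    """Assign weight P(group) * P(label) / P(group, label) to each
    (group, label) cell, so that in the weighted data the label no
    longer depends on the group (Kamiran-Calders reweighing)."""
    n = len(data)
    group_counts = Counter(g for g, _ in data)
    label_counts = Counter(y for _, y in data)
    cell_counts = Counter(data)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (cell_counts[(g, y)] / n)
        for (g, y) in cell_counts
    }

weights = reweigh(data)

def weighted_rate(group):
    """Weighted positive rate for one group."""
    pos = sum(weights[(g, y)] for g, y in data if g == group and y == 1)
    tot = sum(weights[(g, y)] for g, y in data if g == group)
    return pos / tot

# After reweighing, both groups have the same weighted positive rate (0.45),
# so a model trained on the weighted data no longer learns the disparity.
```

The point of the sketch is the mechanism, not the numbers: over-represented (group, label) combinations are down-weighted and under-represented ones up-weighted, without changing a single record.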

Andrew McAfee from MIT who stated the famous quote: ‘If you want the bias out, get the algorithms in’.


A last way to keep bias out of AI data is the use of synthetic data. Slate writer Todd Feathers states that, if we allow it, synthetic data can really solve the problem of human bias. Synthetic data can be created by AI for AI, making it cheap and accessible. A generator learns the patterns in real data and creates an entirely new dataset with the same statistical characteristics.

Moreover, synthetic data could represent the world as it should be instead of as it currently is, which would reduce the bias embedded in the actual world. Another big benefit of synthetic data is that it is less likely to violate privacy laws, and labelling mistakes or data leaks are less pervasive. Nonetheless, the creation of synthetic data should be approached with caution, since we do not want human bias to sneak back in.
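A minimal sketch of the idea, using an invented one-dimensional "real" dataset and the simplest possible generative model (a fitted normal distribution). Real synthetic-data generators model far richer structure, but the principle is the same: fit the statistics of the real data, then sample a fresh dataset from the fit, so that no real record is ever copied.

```python
import random
import statistics

random.seed(42)

# Hypothetical "real" measurements of one continuous feature.
real = [random.gauss(100, 15) for _ in range(10_000)]

# Fit the simplest possible generative model: a normal distribution
# with the same mean and standard deviation as the real data.
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# Draw an entirely new dataset: no real record appears in it, but the
# statistical shape of the original is preserved.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]
```

Note that this only preserves the statistics that the generator was told to model; if the real data carries a bias in some other statistic, the generator must be explicitly designed to leave it out, which is exactly where the caution mentioned above comes in.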

We hope we have now convinced you that it is possible to develop unbiased AI, via unbiased data and diverse teams of AI developers. But who decides when AI is unbiased enough? Take the Amazon job-applicant-review algorithm: do we want the algorithm to select 50% women and 50% men? Is society ready for such an abrupt change? Or do we want the same division as we have now, gradually adjusted towards a fairer distribution?


Towards a consensus on unbiased data

Experts disagree on how the fairness of an AI system should be guaranteed. The way we see it, this is solved the same way as the development of unbiased AI: a strong interdisciplinary collaboration to make sure that all ethnicities, cultures, and mindsets are taken into account to collectively decide whether the data is unbiased enough.


Trust in AI

Now suppose we have unbiased AI on which everyone agrees it is unbiased. The next step is making sure that society wants to keep using AI. Because in order for ethically correct AI to help humans revise their biases, humans must be willing to accept advice from an AI system. Trust in the performance of the system is crucial to get the full potential out of it. And it is exactly that trust that has been violated by biased algorithms. As mentioned before, discriminating experiences and the bad image sketched by the press over the last few years have decreased the overall trust in AI.

An underlying reason for the growing distrust in AI is the black-box principle: there is little (general) knowledge about how AI algorithms arrive at their results, beyond the input and output. The black-box principle in some cases even applies to the engineers themselves, since predictive systems are mostly self-learning. ML is one of those techniques in which the system makes many connections that are not traceable afterwards. This is actually one of the strengths of ML, because it can out-think humans, which is why engineers love to use it.

However, when people have a low understanding of something, they are less likely to trust it. Therefore, if people have no knowledge about what happens inside the black box, distrust in the performance and outcome of this black box will grow. And since many people without any knowledge of AI are exposed to AI systems—and they know that even the engineers do not exactly know how the system came up with its output—they are more likely to be sceptical about its output.

We think this problem can be solved by increasing transparency about the development of AI. When a company buys a certain AI system, it should by law be able to inform its employees and customers about how the system was developed and how it operates. There needs to be transparency and education about the black-box principle and the possible pitfalls of the system. We believe this will increase trust in AI and will therefore eventually result in positive experiences. However, this process will take time. As the famous Dutch saying goes: “Vertrouwen komt te voet en gaat te paard”, which can be freely translated as “Trust comes slowly and goes quickly”.


How to manage the costs?

You may think: OK, good idea, that would be great, but how are we going to pay for all this? And that is actually a very good question. The development of unbiased AI is more expensive for companies and governments than the way AI is currently made. The interdisciplinary and diverse expertise, collaboration, education, transparency, investigation of fairness, creation or gathering of unbiased data, extended testing, and verification simply cost time, and the implementation does not come cheaply.

However, companies like Google and Microsoft have already warned their investors that the currently available (biased) AI could harm their brands’ market value. Besides the bad press and image damage from violating and disadvantaging people, these companies also invested time in products they can no longer use. This indicates that developing ethically correct AI is, in the long term, more profitable: the AI systems will be much more valuable and reusable, and free of image damage, bad press, and potential future lawsuits or claims.


Decreasing human bias

In short, we need unbiased data and unbiased development of AI on the technical side. With the trust of users and willingness to invest in the reduction of bias in AI, we can meet the high expectations of 70 years ago.

But how does all this decrease human bias? Well, if companies and governments decide to invest in the development of unbiased AI, this can eventually help humans to recognize and revise their own biases: unbiased AI systems can serve as a good example for people and their biases.

Besides, it is likely that it will at first help with the obvious discrimination in society, but it has the potential to also eliminate unconscious and unintended human bias over time. Less bias in AI means less biased choices by all the systems we use, which means a more equal society in the long haul. And if society becomes more equal, data become more equal, which would eventually make technologies like synthetic data unnecessary. This may sound like a utopia, but every step towards an equal society is a step worth taking.

Still, it is important to keep in mind that biases can sneak into AI systems at any time. Therefore, it is important to keep revising and discussing. AI is an iterative process that needs supervision and revision; it is not a polished product. However, not only the product but also the process will encourage people to think about inequality in society and about how their (unconscious or unintended) biases have disadvantaged others.
