MY IDENTITY, MY FACE: Ban Facial Recognition

Facial Recognition Technology (FRT) enables the automatic detection and identification of faces in videos and photos. It has made a large number of applications possible, ranging from rapid identification of individuals to applying camera filters. However, there is a darker side to this technology which one might not initially consider. In this opinion article, Selina Wong and Arthur Goetzee delve into that darker side of FRT. They argue that FRT has the potential to erode trust in our society, to threaten our basic privacy rights and to facilitate the transformation into a surveillance state. For these reasons, they urge that FRT be banned entirely.

FRT has spread massively in the last two decades. The software uses biometrics to map and analyze facial features, and then confirms the identity of a face in a photograph or video. It tags your friends on Facebook, verifies your identity at airports and unlocks your phone with your face. Although these seem like harmless technological developments that make life more convenient, the way governments and companies use FRT has a far greater impact on people's lives. Public organizations and businesses use FRT for a variety of purposes: law enforcement, border control, photo editing and social networking. There are also indications that FRT could be used for marketing purposes by commercial agencies.
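
As an illustration of what 'mapping and analyzing facial features' means in practice, here is a minimal sketch of face verification using the open-source `face_recognition` Python library; the image file names are hypothetical placeholders.

```python
# Minimal sketch of face verification with the open-source
# `face_recognition` library (built on dlib). File names are
# hypothetical placeholders, not real data.
import face_recognition

known_image = face_recognition.load_image_file("passport_photo.jpg")
probe_image = face_recognition.load_image_file("camera_frame.jpg")

# Each detected face is mapped to a 128-dimensional biometric encoding.
known_encoding = face_recognition.face_encodings(known_image)[0]

# Verification: a distance below the commonly used tolerance of 0.6
# counts as a match with the enrolled encoding.
for encoding in face_recognition.face_encodings(probe_image):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"distance={distance:.3f}", "match" if distance < 0.6 else "no match")
```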

Facial Recognition Technology is used at various checkpoints within the airport

But how much control do we have over our own image? Are the uses of this technology acceptable as they are now? And how will these uses evolve over time? Can we really trust governments, companies and even individuals with the power that this technology gives them? Many of these questions came to mind as we were researching FRT for this opinion article. We were first drawn to the subject by news that the People’s Republic of China has been developing a surveillance system that uses FRT to track its citizens virtually everywhere, and is currently testing it in a few of its megacities. The comparisons to George Orwell’s book ‘1984’ were self-evident, and they prompted us to research the capabilities of FRT further. When we started our research, the technology was familiar to us in the form of funny filters on social media, or the automatic detection of friends’ faces in photos. However, researching this seemingly innocent technology led us down a gigantic rabbit hole of privacy issues, non-consensual pornography and unlawful use by law enforcement. We even discovered that we ourselves are most likely already part of a biometric database. It is for these reasons that we felt compelled to write this opinion article.

As AI students, we are in the unique position of being able to examine the scientific uses, research and literature, while still following the discussions and views in the popular press. We can therefore take a more objective view than industry experts, who are involved in the development and application of the technology. As of writing, we have not been involved in any FRT research, nor have we been employed by companies that use it. Thus, we can make an impartial judgement on the use of FRT. On that note, we argue that FRT is a disastrous technology: the regulation it would require is so strict that it should simply be banned. We will clarify our arguments for this, as well as refute some of the arguments in favor of FRT, which we found to be very weak in comparison.

A swamp of privacy issues
First of all, there are many privacy issues regarding the use of FRT. One of them is the act of collection itself – it’s very simple for law enforcement or companies to collect photos, but nearly impossible for an individual to avoid having their image taken. FRT can be deployed in many public places, such as corporate and government buildings, busy sidewalks, sports events and airports. While passing through these areas, your face could be scanned without any clear indication that it is happening, and without your consent. In one Metropolitan Police trial with live FRT, pedestrians who simply tried to cover their faces to avoid the non-consensual facial recognition test were stopped and fined. There is no way to object to it.

“There is no way to object to it.”

But facial data is not only collected from public cameras. In the US, law enforcement agencies use facial recognition software from Clearview AI, which devised a groundbreaking facial recognition app. When you upload a picture of someone with this app, you get to see public photos of that person, including links to where those photos appeared. The backbone of this software is a database of more than three billion images that Clearview AI has scraped from sources across the internet, including social media, news sites and employment sites. This is a huge infringement of privacy, as someone’s personal photos are stored in a database unbeknownst to the person in question. Clearview AI, however, is an outlier only in that it has faced public scrutiny. There are other, equally unethical software companies that will sell their software to local law enforcement. One reason large technology firms supply AI surveillance technology to governments is that they expect to collect massive amounts of data with which to improve their algorithms. Usually this happens with no oversight into where the photos come from or how the identification algorithms work. Since there are currently no global regulations, agreements or industry standards for using FRT, governments and companies can do as they please.
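
Clearview AI has not published how its system works internally, but a plausible sketch of such an identification service is a nearest-neighbour search over face embeddings scraped from the web. The database entries, URLs and distance threshold below are invented for illustration only.

```python
# Hedged sketch of an identification service of this kind: a linear
# nearest-neighbour search over a database of scraped face embeddings.
# This is our illustration, not Clearview AI's actual implementation.
import numpy as np

# Each entry: (embedding computed from a public photo, its source URL).
# Random vectors stand in for real embeddings here.
database = [
    (np.random.rand(128), "https://social-site.example/photo/1"),
    (np.random.rand(128), "https://news-site.example/article/42"),
]

def identify(probe: np.ndarray, threshold: float = 0.6) -> list[str]:
    """Return source URLs of every stored photo whose embedding lies
    within `threshold` of the probe embedding."""
    return [url for embedding, url in database
            if np.linalg.norm(embedding - probe) < threshold]

# With random placeholder vectors this will usually print [],
# but the retrieval logic is the point of the sketch.
print(identify(np.random.rand(128)))
```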

The use of FRT will thus result in numerous private and public databases of information, which may be sold, shared or used in ways that you don’t even know of or consent to. These databases are not immune to security breaches, whether through hackers or careless or corrupt employees. Once the damage is done, it is irreversible, creating a constant fear of information and identity theft.

But the privacy issues don’t stop here. Your facial data – or rather, your identity – can be used to put you into situations that are, in fact, not real by creating so-called ‘deepfakes’. Not only do these have enormous privacy issues, they also pose a larger threat to society itself.

“The use of facial recognition technology will thus result in numerous private and public databases of information, which may be sold, shared or used in ways that you don’t even know of or consent to.”

Deepfakes make it difficult to distinguish fantasy from reality
FRT is capable of creating these deepfakes. These are videos in which an individual’s face is superimposed onto another individual in a different video. This is done by training neural networks on source material of both individuals (typically a shared encoder with a separate decoder for each face) and then swapping the decoders, so that one person’s face is rendered with the other’s pose and expression. Many other sophisticated tricks are used to make the swapped faces look as realistic and natural as possible. This is often combined with neural networks trained on the voices of the same individuals, effectively enabling the creator of a deepfake to put someone into any situation they wish, saying things they never said in reality.
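
To make the shared-encoder/two-decoder design concrete, here is a heavily simplified PyTorch sketch; the layer sizes are arbitrary illustrations and the training loop is omitted, so this is not taken from any production deepfake tool.

```python
# Heavily simplified sketch of the classic deepfake architecture:
# one shared encoder, one decoder per identity. Sizes are illustrative.
import torch
import torch.nn as nn

LATENT = 256
PIXELS = 64 * 64 * 3  # tiny 64x64 RGB frames for illustration

shared_encoder = nn.Sequential(
    nn.Flatten(), nn.Linear(PIXELS, LATENT), nn.ReLU()
)

def make_decoder() -> nn.Module:
    return nn.Sequential(nn.Linear(LATENT, PIXELS), nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()  # one per person

# Training (omitted): decoder_a learns to reconstruct person A's faces
# and decoder_b person B's, both through the SAME shared encoder.

# The swap itself: encode a frame of person A, decode it as person B,
# yielding B's face with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)            # placeholder video frame
swapped = decoder_b(shared_encoder(frame_of_a))  # flat image, shape (1, 12288)
print(swapped.shape)
```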

Deepfake technology has produced plenty of humorous videos online, often involving politicians saying swear words. There have been a few other uses as well, such as helping people who have suddenly lost a loved one. However, it is not hard to imagine how this enables somebody else’s identity to be used or even weaponized.

Deepfakes can be used to create convincing fake videos.

Deepfakes have been used as a tool to spread false information, making politicians or celebrities appear to say things which they never actually said. The purpose of these videos isn’t to entertain, but rather to serve as ‘false evidence’ for fake news. In the current fast-moving 24-hour news cycle, news is increasingly consumed in short bursts on endlessly scrolling social media pages. Combined with clickbait titles that play on emotions and angry comment sections, this makes deepfakes a very dangerous development, since most people have no way to verify a video’s authenticity. In fact, researchers from the University of Amsterdam showed that deepfake videos do indeed influence people’s political opinions.

Besides exacerbating the already large problem of fake news, deepfakes have enabled an enormous influx of non-consensually produced pornographic videos featuring celebrities. Beyond celebrities, deepfakes have also worsened the problem of ‘revenge porn’, with people superimposing the faces of former partners onto pornography to get back at them. Sensity, a company that identifies and tracks deepfake videos online, claims that 96% of all the deepfake videos it has identified were pornographic in nature. Deepfake pornography can also be used to slander and discredit people, as was the case with an Indian journalist.

Taken together, deepfakes pose a very large danger to society and to individuals on a personal level, and can erode trust in the media we consume on- and offline. In conjunction with the existing privacy issues surrounding FRT, deepfakes pose an enormous and immediate problem, as individuals’ faces may be scraped from completely fabricated footage, potentially linking them in police databases to crimes they did not commit.

“Deepfakes pose a very large danger to society and even to individuals on a personal level.”

One might think that this last scenario is highly improbable – our governments certainly do not monitor us using FRT, right? That’s something that happens in dictatorships, not in modern Western society, no? And even if they did use FRT, surely it would only be used in the most extreme situations? Unfortunately, this could not be further from the truth. FRT is already being employed by governments and law enforcement on a massive scale.

Are we on our way to becoming a surveillance state?
As a way to increase public safety, FRT is being used in public spaces. But when FRT becomes a tool of mass surveillance, anything you do can become grounds for public shaming and punishment. The use of this technology severely damages the ability of ordinary people to maintain their anonymity and their right to be forgotten in the public space. The latter right is based on the fundamental need of an individual to “determine the development of their life in an autonomous way, without being perpetually or periodically stigmatized as a consequence of a specific action performed in the past”.

From picking up medicine, to taking public transport, to visiting a supermarket, or hanging out with certain people – every movement you make is tracked by the government. A person’s public movements provide a wealth of information about their familial, political, professional, religious, and sexual associations. They reveal your highly personal visits to the psychiatrist, the plastic surgeon, the abortion clinic, the strip club, the court, the mosque, synagogue or church, or the gay bar. You get the uncomfortable feeling of being watched every single day by invisible eyes following you, so that no matter what you do, you hesitate.

This persistent monitoring of our behavior is no longer a scenario from a dystopian future – it is already being applied widely in modern China. The network of cameras necessary to make FRT work will further engulf society in the fear that Big Brother is always watching. If we do not ban the use of FRT, our country, too, will develop into such an Orwellian surveillance state, where privacy could be rendered extinct and we lose our ability to be ourselves.

CCTV Camera with facial recognition features in China

You might think that the security benefits of this technology outweigh the drawbacks of privacy loss. FRT systems are often marketed with outrageous claims: they are supposed to identify potential suspects more quickly, ensure that only authorized personnel can access secured buildings, prevent potentially dangerous people from entering schools or catch people with false documents. Proponents of FRT also state that it gives an added feeling of security on the streets.

While we don’t deny that it could be beneficial for reducing crime and enhancing security and the quality of life in neighborhoods, the costs to privacy and freedom of using FRT are greater than the gains in public safety. There is no scientific evidence that ‘safe’ or ‘smart’ city camera systems actually reduce crime more than ordinary video cameras do. A study by the University of Essex found that only one in five matches by the Metropolitan Police’s system can be considered accurate. South Wales Police claimed that its use of FRT enabled 450 arrests; in reality, only 50 arrests were made using live FRT. The rest were made possible by conventional CCTV or by officers on the street. So contrary to the positive claims, FRT doesn’t work that well on a practical level.

Another argument in favor of FRT is that it can be used by law enforcement to catch criminals and terrorists. But law enforcement must recognize that FRT raises unique legal questions.

Biased algorithms and abuse of power
Intrinsic racial and gender biases in FRT algorithms may exacerbate discrimination, especially against non-white and non-male faces. The darker the skin, the more errors arise. According to a study measuring how FRT performs on people of different races and genders, error rates reach nearly 35 percent for images of darker-skinned women. So if you’re Black and/or a woman, you’re more likely to be falsely matched with a suspect on a watchlist. This is because most datasets used for FRT in Western countries are estimated to be more than 75 percent male and more than 80 percent white. AI is only as smart as the data used to train it.
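
The error rates reported in such studies come from a simple per-group calculation: split the evaluation results by demographic subgroup and compare the groups' error rates. Below is a minimal sketch; the records are invented purely for illustration, not data from any real study.

```python
# Sketch of a per-subgroup error-rate comparison, the kind of
# calculation behind reported bias figures. Records are invented.
from collections import defaultdict

# (subgroup, prediction_was_correct) pairs from an evaluation set.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", False),
    ("darker-skinned female", False), ("darker-skinned female", True),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for subgroup, correct in results:
    totals[subgroup] += 1
    if not correct:
        errors[subgroup] += 1

# A large gap between subgroups is the signature of a biased system.
for subgroup in totals:
    print(f"{subgroup}: error rate {errors[subgroup] / totals[subgroup]:.0%}")
```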

Not only does the use of FRT reinforce bias, it is also prone to misuse. Historically, technologies that helped find criminals have disproportionately hurt innocent people who are often already in a vulnerable socioeconomic position. If the system produces matches that wrongly identify innocent citizens as criminals, their civil liberties are violated. For instance, FRT has been used by law enforcement in cities like Detroit, where it is aggravating racial profiling and discrimination in all areas, from public housing to criminal justice. Another example is the intentional use of FRT by the Chinese government for racial profiling of Uyghur Muslims, as a way to commit atrocities against them. Whatever its benefits, FRT is thereby a perfect tool for oppression and social control.

“Facial recognition technology is intrinsically socially toxic, regardless of the good intentions of its creators.”

In conclusion, FRT is intrinsically socially toxic, regardless of the good intentions of its creators. The technology would need to be controlled so strictly that it should simply be banned for almost all practical purposes. Its benefits are often exaggerated or have yet to be proven, while its disadvantages are potentially too destructive to our lives as social beings. Moreover, the public knows very little about these systems that are ostensibly for their benefit. The ignorance of the general public regarding this issue is deeply concerning. While teenagers are happily taking photos of themselves using filters, big tech corporations and governments are doing as they please. The way that companies and governments have interacted with FRT thus far is not only scandalous, it is also hypocritical. These institutions pretend to protect our safety and safeguard our privacy by using FRT, while in reality they are creating ever more privacy and safety issues by employing it. We have already seen the early symptoms of this sick technology in our society, and how much worse, or rather dystopian, it can get in countries like China. The only remedy is for people around the world to recognize how adversarial and destructive facial recognition technology already is, and how much more catastrophic it will become if we do not ban it. It is our collective responsibility to safeguard our freedom.
