Facial Recognition Should Be Banned

Proponents view facial recognition as an important tool to track down criminals and terrorists, while tech companies present it as a convenience for identifying people in photos and unlocking phones. However, civil liberties experts warn that this technology can be used to track people without their knowledge and can lead to omnipresent surveillance. Facial recognition is dangerous, and it should be banned by the government.

Facial recognition is the process of identifying or verifying a person's identity using their face. It captures, analyzes and compares patterns based on the person's facial details. Undoubtedly, facial recognition has its benefits, such as identifying and tracking criminals or finding missing children and disoriented elderly people. However, the question is whether these pros outweigh the cons. Allowing the use of facial recognition software also means allowing ourselves less privacy. Our facial data can be hacked and misused to create deepfakes, and the technology can misidentify people, which can lead to false accusations and arrests.
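In practice, most modern systems reduce a face image to a fixed-length numerical "embedding" and compare embeddings by similarity. The following is a minimal sketch of that capture-analyze-compare loop; the `embed` function is a hypothetical stand-in for a real face-embedding neural network, not an actual recognition model.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real face-embedding model.

    A real system would run a neural network here; we derive a
    deterministic pseudo-embedding from the pixels for illustration.
    """
    rng = np.random.default_rng(int(face_image.sum()) % 2**32)
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)  # unit-length embedding

def same_person(img_a: np.ndarray, img_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Compare two face images via cosine similarity of embeddings."""
    a, b = embed(img_a), embed(img_b)
    similarity = float(a @ b)  # cosine similarity of unit vectors
    return similarity >= threshold

# Two dummy "camera captures" stand in for real photos.
img1 = np.ones((112, 112))
img2 = np.ones((112, 112))
print(same_person(img1, img2))  # True: identical inputs match
```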

In this essay we discuss these arguments, bring to light what it means to allow facial recognition software and, especially, explain why it should be banned for now.

Misidentification & Bias

Facial recognition technology (FRT) was once considered something straight out of science fiction but is now a widely used and integrated part of our daily lives. Major industries have benefited from FRT: law enforcement uses it to keep communities safe, and the mobile phone industry profits from it. Despite seeming very recent and new, FRT has been around for quite some time. Woody Bledsoe was the earliest known pioneer of the technology. His work on FRT started in the 1960s with a RAND tablet, a graphical input device he used to record facial features and classify photographs of faces. Bledsoe's initial goal was to help law enforcement agencies quickly sift through databases of mug shots and portraits, looking for matches. Since then, FRT has become a widely used tool to track down criminals, combat and prevent crime, and conduct surveillance.

However, with the birth of facial recognition technology also came the risk of abuse. Many of the biases that we may write off as relics of Bledsoe's time continue to dog the technology today. In 2015, Google had to apologize after its photo app labeled African Americans as "gorillas". Facial recognition is about 99 percent accurate if you're a white man, but the darker the skin, the more errors arise: error rates reach 12 percent for darker-skinned men and 35 percent for darker-skinned women. Research at the M.I.T. Media Lab has shown that biases existing in the real world can seep into AI. The problem largely lies in the fact that the data sets used for training are predominantly male (75%) and white (80%), and the system is, unfortunately, only as smart as the data used to train it.

Gender was misidentified in up to 1 percent of lighter-skinned males in a set of 385 photos.
Gender was misidentified in 35 percent of darker-skinned females in a set of 271 photos.
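Numbers like these only surface when accuracy is broken down per demographic group rather than averaged over everyone. Below is a minimal sketch of such a disaggregated error audit, using a tiny made-up set of records rather than real audit data:

```python
from collections import defaultdict

# Made-up (ground_truth, predicted, skin_group) records; real audits
# such as Gender Shades use hundreds of labeled photos per group.
records = [
    ("male", "male", "lighter"), ("male", "male", "lighter"),
    ("male", "male", "lighter"), ("male", "female", "lighter"),
    ("female", "female", "darker"), ("female", "male", "darker"),
    ("female", "male", "darker"), ("female", "female", "darker"),
]

totals, errors = defaultdict(int), defaultdict(int)
for truth, predicted, group in records:
    key = (group, truth)
    totals[key] += 1
    if predicted != truth:
        errors[key] += 1  # count misgendered photos per group

for key in sorted(totals):
    rate = errors[key] / totals[key]
    print(f"{key}: {rate:.0%} misgendered ({errors[key]}/{totals[key]})")
```

An overall accuracy figure computed on the pooled records would hide exactly the per-group gap the M.I.T. study exposed.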

This bias in facial recognition is particularly problematic in law enforcement, where such errors can lead to false accusations and arrests (New York Times, 2019). The National Institute of Standards and Technology reported that systems falsely identified Asian and African American faces 10 to 100 times more often than white faces. In some places in California, use of this technology has already been banned, with one policy analyst stating that a single false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrest, or worse. Moreover, the software has been shown to be less accurate at recognizing faces than humans are. A related problem is automation bias: with our heavy reliance on technology, we tend to override our own intuition that a person is not, say, Paul Lopez when the system insists that he is.

Another problem that can occur is gender misidentification. A machine with a binary gender classification system may classify a person as male based on characteristics such as bone structure, even though that person is, for example, transgender and identifies and goes through life as a woman. This can cause complications in situations where gender is a main criterion, for example when deciding who may enter certain places, like bathrooms or dressing rooms.

Misgendering can be very hurtful, and research has shown that being misgendered by a machine is actually considered more hurtful than being misgendered by a person, because many people consider machines to be objective and correct. Moreover, these systems often don't allow a user to correct their errors. Being misgendered by an automated gender classification system reinforces gendered standards and is just another source of invalidation for those given the wrong gender label. A transgender person interviewed for a research paper explicitly said that they are scared of possible ill-intentioned uses, mentioning the possibility of systems flagging transgender individuals and lumping them in with people trying to commit fraud or deception.

Privacy

Facial recognition relies on capturing, extracting, storing, and sometimes sharing people's biometric facial data, often in the absence of explicit consent or prior notice. It is often argued that facial recognition technology aids security, and in some cases this can indeed be true.

This was illustrated when a group of hackers in the USA created a public website containing nearly 6,000 facial images of the rioters who stormed the U.S. Capitol on January 6. The creators told Wired that they used open-source face detection software to extract images from videos posted on Parler, a social media platform popular among conservatives. However, as the website itself notes, the images may also include faces of people who were not participating in the riot. Besides the fact that the website is accessible to everyone – and can therefore lead to harassment or harm of innocent people whose faces were included among those of the rioters – it also normalizes the use of facial recognition software, and that is problematic.

First of all, such events don't occur on a daily basis, so the use of facial recognition isn't necessary on a daily basis either. However, once this technology is implemented and justified, what is to stop the government from using it every day? This is especially problematic in public areas, where people have not consented to the capture of their facial data.

In most cases, facial recognition is used to protect one's private data, but when deployed in public spaces the software starts being used to identify as opposed to just authenticate. Most people will say that you don't really have a right to privacy in a public space, but even there you retain a certain amount of anonymity among the vast majority of strangers. If a security camera identifies you, however, it can tie your physical identity to your digital identity, without your consent.
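This distinction is concrete: verification (1:1) compares a probe face against a single claimed identity, while identification (1:N) searches an entire gallery of enrolled faces, which is what enables naming strangers on camera. A hedged sketch, reusing the hypothetical unit-length embeddings from earlier:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# A gallery of enrolled identities (hypothetical 128-d embeddings).
gallery = {name: norm(rng.normal(size=128))
           for name in ["alice", "bob", "carol"]}

def verify(probe: np.ndarray, claimed_name: str,
           threshold: float = 0.6) -> bool:
    """1:1 -- does the probe match ONE claimed identity?"""
    return float(probe @ gallery[claimed_name]) >= threshold

def identify(probe: np.ndarray, threshold: float = 0.6):
    """1:N -- who, out of EVERYONE enrolled, is this person?"""
    scores = {name: float(probe @ emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# A noisy capture of bob, as a camera might produce.
probe = norm(gallery["bob"] + 0.05 * rng.normal(size=128))
print(verify(probe, "bob"))  # authentication: unlocks bob's phone
print(identify(probe))       # identification: names a stranger -> 'bob'
```

A phone unlock only ever needs `verify`; public-space surveillance needs `identify`, and that is the capability at issue here.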

It might be argued that people act differently when they know they're being watched, but the rioters' case suggests this is not always true. Knowing you're being watched can, however, cause a chilling effect on expression and free speech, two important aspects of the democracy we aim to live by. All of this shows that more needs to be done to keep this kind of technology in check.

Hacking & Deepfakes

Many people use facial recognition as a security tool for locking personal devices or for personal surveillance cameras. Facial recognition is claimed to be socially acceptable because many people use it on their smartphones: this happens with their own consent, it protects the data on their phones, and the biometric template and facial samples never leave the device. Above we discussed governmental use of facial recognition. While it may be harder because of stronger encryption, even government systems can be hacked.

Once you're hacked, the misuse of your biometric data can lead to deepfakes. A deepfake is a kind of synthetic media in which a person's face in an image or video is swapped with another person's, and it can be used to manipulate and threaten individuals and corporations. Other versions include the voice on the other end of the line sounding like your boss when, in fact, it's not. Deepfakes don't even have to be that convincing to leave an impression: as long as the viewer can identify a person and see them doing or saying something, it leaves an imprint, which can hurt or destroy someone's reputation. Celebrities' faces, for example, have been swapped onto pornographic videos. Moreover, deepfakes can not only disguise fake events as real, but also give people the opportunity to dismiss real events as fake. One reason is that it's very challenging for face recognition software and other existing detection methods to distinguish these fake videos from real ones, and the further face-swapping technology develops, the harder it will become.

A related problem is identity theft. Stealing a person's biometric data isn't as far-fetched as some may think; it has already been shown how easy it is to plant false DNA evidence. Victims of identity theft say it takes three to five years on average to get their lives back in order. When someone steals your credit card you can get a new one within two weeks, but who is going to replace your face when it has been stolen?

Biometrics have their faults. For example, the reason you're not allowed to smile in your passport photo is that a smile can distort the facial features that the biometric scanner needs to identify you.

US researchers also found that Facebook's facial recognition might not be as harmless as assumed when combined with the personal information found on the website. A professor at Heinz College stated that it is definitely possible to obtain a person's personal information, even their social security number, by using face recognition software on their social media profiles.

In another experiment, researchers identified students walking on campus based on the photos on their social media pages. The team then predicted their interests and even their social security numbers, despite starting from nothing but a profile photo.

Besides this, after the London riots, a Google Group of amateur law enforcers used a face recognition program to find rioters on Facebook, raising concerns about vigilantism.

Possible Solutions

The development of facial recognition has concerned and alarmed many parties. Some have shown their concern by developing solutions to disrupt or avoid facial recognition software. For example, the University of Toronto developed an algorithm to disrupt facial recognition software (a so-called privacy filter), a German company revealed a hack to bypass the facial authentication of Windows 10, and the SAND Lab at the University of Chicago created an app called "Fawkes" that subtly distorts your pictures before you share them on social media.
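Tools in this family generally work by adding small, bounded perturbations to photos before they are shared, so that models trained on those photos learn a distorted representation of the face. The toy sketch below only illustrates the bounded-perturbation idea with random noise; the actual Fawkes algorithm optimizes its "cloak" against deep feature extractors and is far more sophisticated.

```python
import numpy as np

def cloak(image: np.ndarray, budget: float = 3.0,
          seed: int = 0) -> np.ndarray:
    """Toy 'cloaking': add a small bounded perturbation per pixel.

    Real tools (e.g. Fawkes) optimize the perturbation to shift the
    image's deep-feature representation; the random noise here is
    only a stand-in to illustrate the bounded-perturbation idea.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-budget, budget, size=image.shape)
    return np.clip(image + noise, 0, 255).astype(np.uint8)

original = np.full((64, 64, 3), 128, dtype=np.uint8)  # dummy photo
protected = cloak(original)

# The per-pixel change stays within the budget, so it is
# visually negligible to a human viewer.
print(np.abs(protected.astype(int) - original.astype(int)).max())  # <= 3
```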

Luckily, the escalating concerns regarding civil and privacy rights have reached governments, resulting in bans on facial recognition in several US cities, including San Francisco, Boston and San Diego. The debate has also made its way to Europe, where Sweden's Data Protection Authority decided to ban facial recognition technology in schools.

The EU has done much more, such as implementing the GDPR. The General Data Protection Regulation (GDPR) is a European regulation in EU law concerning data protection, privacy and the use of personal data. It was adopted in 2016 and became enforceable in 2018. The GDPR is an excellent development, as it specifies how consumer data should be used and protected, and it is accompanied by a separate directive on data protection for the police and the judiciary. However, the GDPR still has limitations, especially regarding biometric data.

The GDPR considers facial images biometric data and classifies them as 'sensitive personal data'. Use of sensitive personal data is highly restricted and requires explicit consent from the subject, unless the processing meets exceptional circumstances, such as public security. This resulted in the UK allowing police to use facial recognition, as it meets "the threshold of strict necessity for law enforcement purposes". According to the European Commission's executive vice-president for digital affairs, Margrethe Vestager, automated facial recognition breaches the GDPR, as it fails to meet the requirement for consent. She states: "as it seems right now, GDPR would say: don't use it, because you cannot get consent."

Given the aforementioned arguments, we propose banning the use of facial recognition technology completely in the EU, at least until newer developments allow a safer use of facial recognition software.

In addition, given that the GDPR is limited to the EU, we would preferably propose a universal ban. However, since such a proposal is not realistic – some countries would see it as a disadvantage – an international agreement regarding the use of this technology might be a more practical solution.

Conclusion

To summarise: one of the arguments for the use of facial recognition is that it helps track criminals and terrorists. While this may be true to an extent, it has been shown that the technology often misidentifies people, leading to false accusations and arrests. Second, facial data can be hacked, enabling identity theft and the creation of deepfakes. And besides all this, it can be a massive invasion of privacy, especially when your facial images become publicly available.

As stated above, there are governmental regulations in some cities and states, but they are far from enough to reduce the risks that facial recognition technology currently poses. With current developments and flawed or even absent laws and regulations, the power of governments is growing, the systems of tech firms are becoming faster, and their databases are becoming bigger. Our society and governments will have to work through significant challenges regarding our privacy and civil liberties. Nevertheless, we hope that other states, cities and countries follow the examples of San Francisco and Sweden, and ban facial recognition, preferably fully.
