Smile, Your Identity Is Exposed: Why Facial Recognition Technology Should Be Strictly Regulated

Photo by Etienne Girardet on Unsplash

Facial recognition technology (FRT), which can analyze our facial features and identify us, has become a key component of our daily lives. It is no longer something out of a science fiction movie. On the contrary, it is already used to unlock our smartphones, identify criminal suspects and victims, verify our identities at airports, and regulate access to facilities such as labs, bank vaults, and other sensitive locations. The impact of FRT is not limited to the physical world either: it is increasingly used to secure online transactions, it has been used to notify people when photos of them are posted online, and it is even used to try to infer people's emotions. It all seems like a useful tool that makes our lives more convenient. This makes you wonder: at what cost?
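Under the hood, most modern FRT reduces a face image to a numerical "embedding" and compares embeddings by similarity. Below is a minimal sketch of that matching step only: the embed function is a random stand-in for a real face-encoder network, and the 0.6 threshold is an arbitrary illustrative choice, not a value from any real system.

```python
# Minimal sketch of the matching step in face verification.
# A real system would get embeddings from a trained neural network;
# here random vectors stand in for them, and the threshold is arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def embed(face_image: str) -> np.ndarray:
    """Stand-in for a face-encoder network: returns a unit-length vector."""
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    # Cosine similarity of unit vectors is just their dot product.
    # Above the threshold we accept the faces as the same person.
    return float(emb_a @ emb_b) >= threshold

enrolled = embed("owner_photo.jpg")      # stored when the phone is set up
attempt = embed("unlock_attempt.jpg")    # captured at unlock time
print("Unlock:", same_person(enrolled, attempt))
```

That threshold is where much of the trouble discussed below begins: set it low and impostors get in; set it high and legitimate users, twins, or post-surgery faces get rejected.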

It is important to note that while the applications of FRT are numerous and impactful, the technology can also be intrusive and discriminatory. Its accuracy is far from perfect, it can be tricked, and it is biased: it is less effective at identifying women and people of color than white men. Moreover, the technology can be misused by several parties and negatively impact our freedom. Given this multitude of detrimental consequences, we need public policies and regulations in place to guide the use of this powerful tool. FRT could harm our society, and therefore we think FRT should be strictly regulated.

Misidentification: the technology can be fooled

FRT is not perfect; it has been shown to be less accurate at identifying individuals after plastic surgery and at authenticating identical twins. Moreover, FRT systems do not always match faceprints to database entries correctly. Errors are typically caused by poor image quality, camera angles, illumination conditions, or a lack of information in the database. According to research conducted by the University of Essex, 81% of suspects flagged by the face recognition technology used by the Metropolitan Police in London were innocent. Furthermore, wearing a full face mask, using a 3D-printed mask of a human face, or even hiding your face with your hand can be enough to avoid being recognized by FRT. Heavy, professional makeup is yet another effective way to deceive it.
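The Essex figure is less paradoxical than it sounds once you account for base rates: when a system scans a large crowd for a small watchlist, even a modest false-match rate generates more false alarms than true hits. The following back-of-the-envelope sketch uses made-up numbers (the crowd size, watchlist size, and error rates are illustrative assumptions, not figures from the Essex study):

```python
# Illustrative base-rate calculation: why most "matches" from a crowd
# scan can be innocent even when the error rates look small.
# All numbers are assumptions chosen for illustration only.

crowd_size = 100_000      # faces scanned during an event
wanted_in_crowd = 50      # people present who are on the watchlist
true_match_rate = 0.90    # chance a watchlisted face is flagged
false_match_rate = 0.001  # chance an innocent face is flagged

true_alarms = wanted_in_crowd * true_match_rate                   # 45
false_alarms = (crowd_size - wanted_in_crowd) * false_match_rate  # ~100

precision = true_alarms / (true_alarms + false_alarms)
print(f"Flagged people who are actually wanted: {precision:.0%}")
print(f"Flagged people who are innocent:        {1 - precision:.0%}")
# With these numbers, roughly 69% of flagged people are innocent --
# the same order of magnitude as the Essex finding.
```

The point is not the exact percentages but the structure of the problem: as long as the watchlist is tiny relative to the crowd, a large share of alerts will point at innocent people.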

Photo by Markus Spiske on Unsplash

The increasing popularity of plastic surgery poses a serious challenge to even the most advanced face recognition algorithms. According to the findings of this study, determining the correlation between pre- and post-surgery facial geometry is difficult, since these procedures alter the shape and texture of facial features. As the same study shows, the geometric correlation between facial features changes after surgery, and there is no technique for detecting and measuring such changes. Although these surgical treatments benefit patients suffering from structural or functional impairment of facial features, they can also be abused by people attempting to conceal their identity in order to commit fraud or elude law enforcement. As a result, facial recognition algorithms must be able to distinguish legitimate identity changes from attempts at deception. Furthermore, research in this field is limited due to the sensitive nature of the procedure: it is part of an individual's medical history, which is protected by privacy law, making it exceedingly difficult to construct a face database of images taken before and after surgery.

Distinguishing between monozygotic twins is one of the most difficult challenges in facial recognition. Monozygotic twins typically have nearly identical facial features, making it hard for face recognition systems to tell them apart. Researchers have proposed a variety of methods that may improve the ability of face recognition algorithms to recognize twins, but there is still no effective approach that can distinguish twins within a large generic database containing both twins and non-twins. Moreover, commercially available facial recognition systems such as Cognitec, PittPatt, and VeriLook fail to generate true positives and instead produce more false positives under non-ideal imaging conditions.

“The algorithms underlying the FaceVACS product line can tell identical twins apart even if they have very small differences in features that would be difficult to detect with the naked eye; however, when twins are completely identical, the technology fails.”

Elke Oberg, marketing manager at Cognitec Systems, which developed FaceVACS facial recognition technology.
Video showcasing the iPhone X's Face ID failing to distinguish twins

As a result, we may conclude that creating a flawless FRT with 100% accuracy faces several fundamental challenges. It seems likely that there will always be cases of misidentification.

Ethical concerns

Aside from the possibility of misidentification, the unregulated use of FRT also raises a multitude of ethical concerns. Prominent among these is the susceptibility of FRT to bias. Facial recognition systems display racial bias, with Asian and African-American people misidentified significantly more often than Caucasian people. Beyond racial bias, women were harder to identify than men, and older adults were misidentified more often than younger adults. FRT thus remains susceptible to several kinds of bias, which can have severe consequences. An example is the 2020 arrest of Robert Williams, a Black man who was jailed after being wrongly identified by FRT. In a future with unregulated FRT, such incidents may become more common.

Not only does FRT contain biases, it can also perpetuate harmful gender stereotypes, which mainly affect non-binary and transgender people. Systems that perform facial analysis and classify people by gender have a restricted view of gender and do not accurately represent gender diversity. The algorithms performed worse on transgender people, and since the systems used no classifications other than 'male' and 'female', they never classified non-binary people correctly. The use of FRT can therefore leave people feeling misgendered. Moreover, it can have more severe consequences, such as excluding these groups from research or causing problems at airport security.

Another major ethical concern is how the data for FRT is obtained and stored. Many developers of FRT algorithms collect images of people's faces without their permission, for example from image-hosting websites. In 2016, it was reported that half of American adults were already in a law enforcement face database, for example through their ID or driver's license photo. This is problematic in itself: in the past these databases contained data on criminals, but now people who have never committed a crime are included as well.

Thus, without knowing or consenting, your face could end up in one (or several) of these databases. Unfortunately, your face cannot be reset as easily as a password; it is permanent, uniquely identifying information. So, even if it would be beneficial for society, is it responsible to create databases this way? And what happens if this data is misused?

We must keep in mind that once these databases are hacked, our identity is at risk. The hack of Clearview AI, a face-scraping firm that worked with many US law enforcement agencies, showed that these concerns are incredibly relevant. In that breach only the customer list was stolen, but it clearly demonstrates that even companies holding vast amounts of sensitive information are vulnerable to cybercrime. Similarly, in 2019 the Department of Homeland Security stated that photographs used for facial recognition had appeared on the dark web after one of its subcontractors was hacked. With technology such as deepfakes, which can bypass FRT, on the rise, having your facial identity leaked can have severe consequences such as fraud.

However, this data can be misused even when it is stored safely, as FRT can be used to profile civilians. A notorious example of such profiling is China, where FRT is used to track Uighurs, a mostly Muslim minority, and analyze their movements. This example makes it easy to see how profiling can be used to keep tabs on minorities. Moreover, if an oppressive government uses surveillance to follow journalists and people who oppose its standpoints, it can diminish every form of criticism of its actions. In brief, this puts our democracy in danger.

Social freedom

FRT can also affect the way people behave. Since FRT could be deployed in many kinds of locations, such as retail stores, your identity could be linked to the places you visit, and you might start avoiding certain locations. We might thus be affected on a personal level: our freedom is at risk. For example, people might want to join a protest criticizing the government, but stay away out of fear of ending up on a watchlist. Protesting could become much riskier. Amnesty International notes that FRT was used to target protestors during the Black Lives Matter protests in 2020; videos taken by protestors or bystanders could be used to identify the people who were there. Combined with the possibility of tracking minorities and journalists, unregulated FRT thus seems to harm our democracy by chilling our freedom of speech. Add the claim that FRT could possibly identify your political orientation, and it becomes a perfect tool for authoritarian control, as all citizens could be monitored by the government.

Citizens could come to feel unsafe in public spaces. The use of FRT might threaten people who are already discriminated against in society, such as members of the LGBT community. Data breaches could leak someone's sexual orientation or gender identity, which could be used to harm them, especially under an authoritarian government hostile to these marginalized identities. As citizens might no longer feel able to move freely, we think FRT should be strictly regulated.

The bright side of facial recognition technology

So far, an overwhelmingly negative view of FRT has been sketched. However, we must keep in mind that FRT also has uses that benefit society. Research suggests that, when thinking of FRT, convenience and security are most prominent in the minds of citizens of China, the United Kingdom, Germany, and the United States. On an individual level, using your face to unlock your phone or to enter a building can be much more convenient than remembering passwords.

FRT could also improve security. Airports could streamline and shorten security procedures, as your face could confirm your identity. The International Air Transport Association reported that passengers would want to use biometric identification (which includes FRT) if it speeds up travel processes. This suggests that citizens might be willing to consent to the use of their data when it benefits them. Another security application is mobile banking: FRT could strengthen authentication, since passwords for online banking can be stolen. Home security can likewise be improved with FRT, and many more security applications are likely to be found in the future.

Aside from personal convenience and security, FRT can also be used to fight crime, for example when the police have no other way of tracking and identifying criminals. In China, the police were able to find a wanted man in a concert crowd of almost 60,000 people using FRT. In Orlando, an arrested man who carried no ID and had removed his fingerprints was identified using FRT. Multiple cases have been solved with the help of FRT, as it provides officers with leads for investigations; however, this use is rarely discussed openly, which points to a transparency problem surrounding these systems.

Moreover, FRT can be used to identify missing people, for example by comparing pictures of a missing person with photos of potential matches taken by the public. Multiple countries, such as the UK, plan on using FRT for this purpose. In India, the police were able to find a missing woman using it, even though she was wearing a mask: she was identified by matching her photos against CCTV footage at a railway station, and was found after the staff were notified. Another example is a Chinese man who was reunited with his family 32 years after he was kidnapped, because FRT matched him to an age-progressed simulation of his face as a toddler. We therefore believe that FRT can be used for good causes as well.

Conclusion

There are different sides to the discussion about FRT. On one hand, it could make our lives easier and be immensely useful for police and security. On the other hand, it is still full of biases and can reinforce inequality, our data can be misused, and it could heavily impact our (perception of) freedom. So a system that would primarily be used to ensure the safety of citizens and improve their quality of life could, in practice, decrease the wellbeing of law-abiding citizens. Therefore, we believe the risks vastly outweigh the advantages of FRT at this point in time.

Does this mean we have to prohibit FRT because of its detrimental consequences? No; rather, it emphasizes the importance of strictly regulating its use. As more governments and companies show interest in this technology, laws and policies controlling its usage could let us reap the advantages of FRT while reducing its damaging effects.

We plead for mandatory accuracy tests for companies developing FRT: for example, requiring them to show that the bias in their systems is negligible, or to publish an assessment of their systems' accuracy and biases. Deployers of the technology can use it more responsibly if they are aware of the flaws in the system.
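To make this concrete, such a published assessment could be as simple as error rates reported per demographic group, in the spirit of NIST's demographic evaluations of face recognition vendors. The sketch below uses hypothetical trial records and made-up group labels; a real audit would run the vendor's matcher on a labelled benchmark and log one row per comparison:

```python
# Minimal sketch of a disaggregated accuracy audit.
# The trial records are hypothetical; group labels and outcomes are
# placeholders for the output of a real labelled benchmark run.
from collections import defaultdict

# (demographic_group, ground_truth_is_match, system_said_match)
trials = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", False, True),   # a false match
    ("group_b", True, False),   # a false non-match
    ("group_b", True, True), ("group_b", False, False),
]

counts = defaultdict(lambda: {"fm": 0, "fnm": 0, "pos": 0, "neg": 0})
for group, truth, predicted in trials:
    c = counts[group]
    if truth:
        c["pos"] += 1
        c["fnm"] += not predicted   # missed a genuine match
    else:
        c["neg"] += 1
        c["fm"] += predicted        # flagged a non-match

for group, c in counts.items():
    fmr = c["fm"] / c["neg"]     # false match rate
    fnmr = c["fnm"] / c["pos"]   # false non-match rate
    print(f"{group}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```

If the gap in FMR or FNMR between groups is large, the system fails the kind of bias test we are pleading for, and deployers would know not to trust it in high-stakes settings.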

Regarding misidentification following plastic surgery, people who can no longer be identified by FRT could, on an individual level, be given official documents proving their features have been altered, in case they run into trouble at, for example, borders that rely heavily on FRT. For identical twins, more reliable technologies could be employed, such as iris scans and multiple fingerprints, which are unique even between identical twins.

Furthermore, full transparency is needed, so that it is clear when and for what FRT is being used. The European Parliamentary Research Service also discussed transparency in its analysis of FRT regulation, showing that transparency is an important point for policymaking. We believe people deserve to know how their data is collected, how it is used, and where FRT is deployed.

Photo by Matthew Henry on Unsplash

Lastly, we believe real-time surveillance should be restricted, for example to emergencies only. Continuous use of surveillance technology could harm society and our freedom, and it accumulates a great deal of sensitive data on individuals that could be misused. Restricting its use to cases where it is explicitly shown to be crucial for protecting citizens would already help minimize the discomfort of many.

On a final note, although we recognize that FRT has a lot of potential, we must all be aware of its weaknesses. FRT could be used safely if properly regulated. After all, we want to be able to control FRT; we do not want FRT controlling us.
