The Danger of Facial Recognition Software in Surveillance Cameras

Face recognition in surveillance cameras is a form of biometrics. Every face, just like every fingerprint, is unique. Developments in this field are fast: facial recognition technology is getting better and better. It does not require active human monitoring and has access to large data sets, making it efficient and able to identify people in real time. Just look at the many mobile phones equipped with similar software, which lets you unlock your phone with your face in a couple of seconds. There are companies, like Clearview AI, that develop and sell this software specifically for security and surveillance cameras.
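Under the hood, most such systems reduce a face image to a numeric embedding and compare embeddings by distance. The sketch below is a simplified, hypothetical illustration of that matching step — the vectors and threshold are made up for the example, not taken from any real vendor's system:

```python
import math

def cosine_similarity(a, b):
    # Compare two face embeddings by the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, as a face-recognition model might produce
# for a stored reference photo and a live camera frame.
stored_face = [0.12, 0.87, 0.45, 0.33]
camera_face = [0.10, 0.90, 0.40, 0.35]

THRESHOLD = 0.95  # tuned by the operator; trades false matches vs. misses
similarity = cosine_similarity(stored_face, camera_face)
match = similarity >= THRESHOLD
```

Because the decision comes down to a similarity score crossing a tunable threshold, a "match" is always probabilistic rather than certain — a point that matters for the accuracy discussion below.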

At the moment, the Dutch government mainly uses camera surveillance to ensure safety in public places. This often involves cameras in city centers and entertainment areas, but if there is sufficient reason, cameras can also be placed in residential areas, at stations and in other public places. When the police subsequently look for a specific person of interest in connection with criminal activity, there is no automatic process to scan all the footage from the relevant cameras. They can, however, manually obtain the necessary video footage and analyse it further with a face recognition algorithm. Because this is a time-consuming activity, cameras equipped with facial recognition software are desired: they could, for example, recognize suspects much faster, so the police can arrest them more quickly. Research by Steven Feldstein, a policy researcher at the Carnegie Endowment for International Peace in Washington DC, found that by 2019, 64 countries were already using facial recognition systems for surveillance purposes. Among these countries, those with authoritarian systems are investing heavily in AI surveillance techniques. Although smart surveillance can have many benefits, it is important to look at how these systems are deployed and used, and in which direction they develop. Even CEOs of big tech companies such as Google support temporary bans on facial recognition systems until governments come up with concrete plans to regulate them. We believe that the negative consequences of implementing facial recognition software in surveillance cameras outweigh the positive ones, and that the technology should therefore be banned.

“We are rapidly entering the age of no privacy, where everyone is open to surveillance at all times; where there are no secrets from the government.”

William O. Douglas

Rules and Legislation

Since this is a rather ‘new’ concept, most countries do not yet have specific legislation to regulate the use of this technology. A big exception is of course China, where these kinds of surveillance techniques are fully integrated into society. Last year, Washington state enacted a law governing the use of facial recognition by governments. It prohibits state agencies and law enforcement from collecting or using a biometric identifier without providing notice and obtaining an individual’s consent, and law enforcement agencies must obtain warrants before using the technology in investigations, except in emergencies. In Europe, the use of facial recognition has been covered by the General Data Protection Regulation (GDPR) for a while now. This European regulation standardizes the rules for the processing of personal data by private companies and public authorities. It states that facial recognition software may only be used when individuals have given informed consent or when it is necessary for authentication or security. It is, however, still unclear when exactly law enforcement agencies can obtain such a warrant and what exactly counts as ‘necessary authentication or security’. Naturally, such laws and regulations will be tightened and adjusted over time. The problem with live facial recognition in public places remains, however, that it is impossible to request informed consent from everyone who might be visible in the video footage.

“Study after study has shown that human behavior changes when we know we’re being watched. Under observation, we act less free, which means we effectively are less free.”

Edward Snowden

Since it is impossible to ask everyone for informed consent, implementing facial recognition software in public surveillance cameras can be seen as an invasion of privacy. There is, of course, a legitimate discussion about giving up some of one’s privacy for a bit of security in return. The most common argument for implementing facial recognition software is the security it can offer: as in the situation described above, criminals could be caught more easily and quickly. However, the analysis of faces by software will never be 100% accurate and is prone to bias, so the possibility that people will be wrongfully accused is very real. This is already happening, as in one case in the US and another case in the UK where a man was wrongfully arrested. Companies such as Amazon and IBM have already (temporarily) banned police use of their facial recognition systems due to racial bias. In addition, public cameras are visible to everyone. People engaged in illegal activity will therefore choose places without cameras to commit certain crimes, such as residential areas or industrial sites. Even if a crime is filmed by surveillance cameras, a face mask would be enough to make the perpetrator unrecognizable to the software, whereas innocent people who simply value their privacy will not wear a face mask every time they go outside. Criminal activity will evolve along with the technology, as it always has. Instead of investing in facial recognition software for surveillance cameras, it is better to invest in crime prevention, such as smart sensors or research into the best placement of ordinary surveillance cameras.
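The claim that imperfect accuracy leads to wrongful accusations can be made concrete with a back-of-the-envelope base-rate calculation. All the numbers below are illustrative assumptions, not measured figures for any real system, but they show why even a highly accurate system flags far more innocent people than suspects when it scans everyone:

```python
# Illustrative base-rate calculation for mass face scanning.
population_scanned = 1_000_000   # faces checked per day (assumption)
false_match_rate = 0.001         # 99.9% specificity (assumption)
actual_suspects = 50             # genuine suspects in that crowd (assumption)
hit_rate = 0.99                  # sensitivity: chance a suspect is flagged (assumption)

false_alarms = (population_scanned - actual_suspects) * false_match_rate
true_hits = actual_suspects * hit_rate

# Probability that a flagged person is actually a suspect:
precision = true_hits / (true_hits + false_alarms)
print(f"{false_alarms:.0f} innocent people flagged per day")
print(f"only {precision:.1%} of alerts point to a real suspect")
```

Under these assumptions, roughly a thousand innocent people are flagged every day, and fewer than one in twenty alerts concerns a real suspect — because suspects are rare, the false matches dominate.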

Security Threats

For facial recognition systems to work, facial data has to be shared in a big network. Data has to be collected, stored and analyzed, and this is often done by sending it to large cloud servers: data centers equipped with massive storage and computing power. As with all sensitive data that is stored, there are many security threats. If thousands of data points are collected about you each day, they start to paint a vivid picture of who you are and what you do. Information like this can be useful for many purposes, such as companies targeting their advertisements and products. There would also be a risk if the data fell into the wrong hands, such as new governments or groups using it with ill intent. Data leaks are nothing unusual. They often involve many records and can happen even to big companies like Microsoft or Apple, where many private photos were leaked to the public. There is also not just one way that data breaches happen: they can occur accidentally or intentionally through employees, or through cyber attacks. In China, such leaks of surveillance data have already been reported, exposing personal data such as ID numbers, addresses and tracked locations. The more that is known about people, the greater the consequences when such data is not stored properly and becomes public. Although encryption makes data a lot more secure, it also has downsides, as computing over encrypted data is challenging and inefficient. It is also still possible for hackers to acquire data before encryption or to access the encryption keys, not to mention leakage of data from the inside. This means data will always be vulnerable to becoming public, which can have major consequences for people’s privacy.
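The point about encryption keys is worth spelling out. A toy sketch (deliberately NOT real cryptography — a single-byte XOR stands in for a proper cipher) illustrates why encrypting data at rest does not remove the risk: to run face matching over the footage, the processing server must be able to decrypt it, so a usable key has to live somewhere that server can reach.

```python
# Toy illustration (NOT real cryptography): encrypted-at-rest data still
# has to be decrypted before any analysis can run on it, so the key must
# be reachable from the processing server -- and thus from an attacker
# who compromises that server.
KEY = 0x5A  # in practice a key-management service; here a single toy byte

def xor_cipher(data: bytes) -> bytes:
    # XOR is its own inverse, so this both "encrypts" and "decrypts".
    return bytes(b ^ KEY for b in data)

record = b"ID:12345;location:Amsterdam"   # hypothetical surveillance record
stored = xor_cipher(record)               # what sits on the cloud server's disk
plaintext = xor_cipher(stored)            # what the server recovers to analyze it
```

Anyone who obtains both the stored bytes and the co-located key recovers the original record, which is why key theft and insider leakage remain threats even with encryption in place.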

When using facial recognition technology in surveillance, there is more to trust than just your (local) government’s access to and storage of your personal data. In the United States, the White House has already accused the Chinese tech company Huawei of stealing sensitive data collected by its technology used outside of China. After Trump’s legal battle against Huawei, Joe Biden took over and said he would make sure American telecom companies do not use Huawei equipment. In the Netherlands, where Huawei’s equipment for facial recognition is also used, the General Intelligence and Security Service likewise names China as one of the countries spying on the Netherlands. This shows that not only governments have to be trusted, but also the companies delivering the technology those governments use. Switching to more trusted companies, ideally located in the same country where the technology is deployed, would be a better option. However, even national companies are not guaranteed to keep sensitive and personal data safe.

Misuse of the Technology

It is also important to think about the point at which facial recognition software is still seen as necessary. Because facial recognition software requires very little hands-on work, it is very easy to deploy in many different settings. Many people would agree that identifying people committing serious crimes in certain public places would be good for security, but at some point these systems become overkill and start to feel like being watched at all times without good reason. Take jaywalkers in China, who are also detected by facial recognition and fined, and may even have their face displayed on an LED screen for public shaming. Looking further at all the other threats that facial recognition software in public places can pose, all the data collected by the cameras can also be used by a government for unethical purposes. One example is keeping an eye on certain groups of people, as the Chinese government does by tracking all Uyghur residents because they would supposedly pose a threat to Chinese culture. You could say that using cameras at such an extreme level will never happen in the Netherlands or Europe, but this software from China is already being sold to other countries, as described in a paper about the global expansion of AI surveillance. In addition, it is impossible to verify such use afterwards, as there are no protocols or legislation yet, as previously mentioned.

Conclusion

In short, facial recognition software in surveillance cameras can be used for more purposes than fighting crime alone. These different uses come with great risks of further invading our privacy and increasing security risks around sensitive personal data. To prevent this, we should ban this software in surveillance cameras and invest more in other ways of fighting crime, such as smart sensors. We should look at the surveillance state that China has become and take it as a warning.
