As Artificial Intelligence continues to evolve, it is being integrated into almost every aspect of our lives, bringing new levels of convenience and efficiency. From smart assistants and chatbots that perform a range of tasks on command to facial recognition software and predictive policing, AI has undoubtedly made our lives easier. But this convenience comes at a cost to personal privacy.
These AI systems require massive amounts of personal data to be trained and customised for user preferences. AI algorithms deployed by big tech companies to personalise content and provide targeted advertisements to users rely heavily on data profiling. Every click, scroll, and search leaves a digital footprint that is analysed to tailor online experiences, blurring the line between personalisation and intrusion.
Sam Altman, CEO of OpenAI, the organisation behind ChatGPT, recently told Axios in an interview that ChatGPT will “evolve in uncomfortable ways”. Altman believes the future of AI will require individual customisation: he wants to make ChatGPT’s responses more personalised, with different people receiving different answers based on their values and preferences. This feature would require collecting and analysing users’ sensitive or personal information. As promising as this customisation is, it carries risks such as misuse of personal information, unauthorised access, and data breaches, emphasising the need for privacy-protection guidelines on data usage.
AI see you.
In addition to personalisation efforts, the integration of AI into monitoring systems like facial recognition and smart policing also raises privacy concerns. Surveillance is not inherently a bad thing: these tools offer great potential in areas like security and law enforcement, enhancing public safety and national security. On the other hand, they raise the question of what counts as lawful monitoring and what is an abuse of power. Governments often defend these monitoring practices on the basis of safeguarding public safety or preventing crime. However, the Office of the UN High Commissioner for Human Rights (OHCHR) cautions that these measures may “unjustifiably or arbitrarily” restrict citizens’ freedom and right to privacy.
Furthermore, according to Dr Mark van Rijmenam, CSP, these surveillance systems are not always transparent, making it difficult for individuals to know when they are being monitored and for what purpose. This lack of transparency erodes public trust in law enforcement and is a cause for concern in itself.
Another challenge that remains unresolved is the use of AI to generate inferential data: data derived by analysing user data and finding patterns that reveal additional information. A user provides some information, and the algorithm uses it to make inferences, which may include sensitive details such as personal beliefs or preferences. The user has neither explicitly given this information nor is aware that the organisation holds it. Because the collection of inferential data is so opaque, users unknowingly relinquish control over their personal information, allowing organisations to make inferences that could compromise their privacy rights.
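As an illustration, inference of this kind can be as simple as pattern-matching over data the user did share. The rules, item names, and inferred attributes in this sketch are entirely hypothetical and are not drawn from any real company's logic:

```python
# Hypothetical sketch: deriving inferential data from explicit user data.
# Every rule and item name below is an invented, illustrative assumption.

def infer_attributes(purchase_history):
    """Return inferences the user never explicitly provided."""
    inferences = {}
    items = set(purchase_history)
    # A pattern in ordinary purchases can imply something sensitive.
    if {"prenatal vitamins", "unscented lotion"} <= items:
        inferences["possibly_pregnant"] = True
    if "halal meat" in items or "kosher wine" in items:
        inferences["religious_affiliation_guess"] = True
    return inferences

profile = infer_attributes(["unscented lotion", "prenatal vitamins", "coffee"])
print(profile)  # {'possibly_pregnant': True}
```

The point of the sketch is that none of the inferred attributes were ever typed in by the user; they emerge purely from correlations in innocuous-looking data.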
How do targeted ads work?
The internet has evolved into an indispensable assistant, seamlessly integrated into our daily lives. While having an assistant can be useful, it also means it can track most of your information, if not all of it. When you visit a website or search for a particular product, that query becomes part of your search history. But how does that search work its way into the ads that pop up on your screen? Search engines use an auction-based system in which advertisers bid on certain keywords; if a keyword matches your search history, the corresponding ad appears. According to Statista, as of July 2023 the search engine Bing accounted for 9.19 percent of the global desktop search market, while market leader Google had a share of around 83.49 percent; Yahoo’s market share was 2.72 percent. Since Google is the clear front runner, we will look at it more closely. Google knows personal information such as your name, gender, and age because you have willingly given it as Personal Information. It can also easily track your activity, including:
- Google Maps – The locations you’ve looked up or been.
- YouTube – The videos you’ve watched or searched.
- Google Pay – the way you budget, if your rewards card is linked.
- Android – how much time you spend on each app.
- Google Assistant – the questions you’ve asked it.
- Google News – the articles you read.
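The auction mechanism described above can be sketched in a few lines. This is an illustrative toy assuming a simple second-price rule; real search-ad auctions also weigh quality scores and many other signals, and the advertiser names and bid amounts here are invented:

```python
# Toy sketch of a keyword ad auction (second-price rule, no quality scores).
# Advertiser names and bids are made-up examples.

def run_auction(keyword, bids, user_history):
    """Return (winner, price) for a keyword found in the user's history."""
    if keyword not in user_history:
        return None  # no matching search, no ad shown
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # Second-price: the winner pays just above the runner-up's bid.
    price = ranked[1][1] + 0.01 if len(ranked) > 1 else top_bid
    return winner, round(price, 2)

history = ["running shoes", "marathon training"]
bids = {"ShoeCo": 1.50, "SneakerHub": 1.20, "FitGear": 0.90}
print(run_auction("running shoes", bids, history))  # ('ShoeCo', 1.21)
```

Note how the auction only fires because the keyword appears in the user's history: the targeting, not the bidding, is where the personal data comes in.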
How does Google employ AI for ads?
Google Ads is an online advertising platform developed by Google for companies to create effective dynamic ads to be displayed on Google’s search engine results pages (SERPs) and other websites under the Google Display Network. Google brings more specific and relevant results as the user searches for certain keywords or key phrases. Accordingly, businesses can create targeted ads for their audience, resulting in more leads.
In 2011, Google commenced the Google Brain project and spent the following year building a “neural network”. With regular algorithm updates, Google uses AI to understand what the user wants to search for, along with the context and relevance of related keywords. Notably, Google’s BERT (Bidirectional Encoder Representations from Transformers) update trains its question-and-answer system using natural language processing. (source: SiteCentre)
Consent is not enough as a privacy mechanism.
Cookies are small pieces of text sent to your browser by a website you visit. According to Google’s Privacy and Terms, they help that website remember information about your visit, which can both make it easier to visit the site again and make the site more useful to you.
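As a concrete illustration, here is how a server might set and later read back such a cookie using Python's standard http.cookies module; the cookie name and value are made up for the example:

```python
# Minimal sketch of the cookie round-trip, using Python's stdlib.
# The cookie name "visitor_id" and its value are illustrative only.
from http.cookies import SimpleCookie

# Server side: attach a cookie to the HTTP response.
response_cookie = SimpleCookie()
response_cookie["visitor_id"] = "abc123"
response_cookie["visitor_id"]["max-age"] = 3600  # remember for one hour
print(response_cookie.output())  # the Set-Cookie header sent to the browser

# Next visit: the browser sends the cookie back; the server parses it.
request_cookie = SimpleCookie("visitor_id=abc123")
print(request_cookie["visitor_id"].value)  # 'abc123'
```

That round-trip is the whole mechanism: whatever identifier the site chose to set, your browser faithfully hands back on every subsequent visit.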
Very few people are truly educated about online privacy or understand what they are consenting to when they click “Manage Cookies”. Even those who do know sometimes simply lack the patience to find out what they are sharing. In an interview with Harvard Business Review, Dr Helen Nissenbaum, a Stanford alumna and professor of Information Science at Cornell Tech, says: “‘Hey, we use cookies — click here.’ This doesn’t help. You have no idea what you’re doing, what you’re consenting to.”
She further elaborates that even companies that do not intend to misuse the data they collect cannot guarantee how that data will be used. She highlights the importance of post-consent approaches that still rely on consent, but not only on consent. Such changes need to be discussed and imposed on companies globally. That is a mammoth task, but some countries are already taking steps in the right direction.
The General Data Protection Regulation (GDPR) is Europe’s effort to protect the privacy of its citizens. In 2020, the French data protection regulator, CNIL, fined Google 50 million euros for violating the GDPR. The European Union has also proposed a new regulation called the Digital Services Act (DSA). The DSA’s main goal is to create a safer digital space and protect the rights of users of digital services, which range from simple websites to internet infrastructure services and online platforms. The aim is to give users more control over their data.