Targeted ads, untargeted harm

In popular culture, the dangers posed by AI are associated with a conscious super-intelligence that becomes hostile to humanity. There is often talk of a technological singularity: a point at which an artificially intelligent consciousness can make itself smarter and more capable without human intervention. Such speculative discussions, while captivating, distract from the more mundane but equally dangerous applications of AI technology widely deployed today.

The EU defines targeted (behavioural) advertising as:

“[ads] based on the observation of the online behavior of individuals over time. It seeks to study the characteristics of this behavior through their actions (repeated site visits, interactions, keywords, online content production, etc.) to develop a specific profile and thus provide data subjects with advertisements tailored to match their inferred interests.”

Regulating targeted and behavioural advertising in digital services

Online advertising is big business. Targeted advertisements are the driving force behind trendy terms such as “big data”, “the data economy” and “the attention economy”. When more users complete a transaction starting from an ad, when more users see an ad, or when users look at an ad for longer, the platform can charge more for running that ad. Profits therefore increase when users look at as many ads as possible and when users are presented with ads they are likely to engage with. It is interesting to note that user engagement with ads is not just a question of choosing the most appropriate ad from the repertoire; it also feeds into the decision of which non-ad content (e.g. videos, social media posts) to show users in order to make them more susceptible to ads.
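The pricing logic above can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the click-through rate (CTR) and cost-per-click (CPC) figures are invented, but the relationship between them is the standard one in cost-per-click ad pricing.

```python
# Illustrative sketch of cost-per-click ad economics. All numbers are
# hypothetical; the point is the structure of the incentive.

def expected_revenue_per_impression(ctr: float, cpc: float) -> float:
    """Expected revenue from one impression = click probability * price per click."""
    return ctr * cpc

def ecpm(ctr: float, cpc: float) -> float:
    """Effective cost per mille: expected revenue per 1000 impressions."""
    return 1000 * expected_revenue_per_impression(ctr, cpc)

# A better-targeted ad with double the click-through rate doubles revenue
# at the same price per click.
baseline = ecpm(ctr=0.01, cpc=0.50)   # 1% CTR at $0.50 per click
targeted = ecpm(ctr=0.02, cpc=0.50)   # 2% CTR at the same price
print(baseline, targeted)
```

Doubling the click-through rate at a fixed price per click doubles expected revenue, which is exactly why platforms invest so heavily in predicting which ad each individual user is most likely to click.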

“In today’s world, Artificial Intelligence controls what the world is watching.”

Guillaume Chaslot – founder of algotransparency.org

Companies such as Google and Facebook, which rely on online ads as their main revenue stream, have invested heavily in developing cutting-edge machine-learning-based content recommendation and ad-targeting algorithms. These technologies are extremely data-hungry, relying on extensive monitoring and tracking of user activity both online (e.g. browsing history) and offline (e.g. location). For a machine learning practitioner it is not difficult to see how this data can be used to train models that maximize the value of ads, as measured by metrics such as those outlined above.
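To make the training loop tangible, here is a minimal sketch of a click-prediction model fitted to behavioural features. Everything is synthetic and invented: the feature names, the coefficients and the data are stand-ins, and real platforms use far larger models, but the structure (tracked behaviour in, click probability out) is the same.

```python
# Minimal, entirely synthetic sketch: logistic regression predicting ad
# clicks from (invented) behavioural features. Not any platform's real model.
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

# Features per user, scaled to [0, 1]:
#   time on site, pages visited, whether the ad matches the tracked profile.
def make_user():
    x = [random.random(), random.random(), float(random.randint(0, 1))]
    # Hypothetical ground truth: engaged users shown matching ads click more.
    p_click = sigmoid(1.8 * x[0] + 1.0 * x[1] + 1.5 * x[2] - 2.5)
    y = 1 if random.random() < p_click else 0
    return x, y

data = [make_user() for _ in range(500)]

# Fit logistic regression with plain full-batch gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(1500):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

def click_probability(x):
    """Predicted probability that a user with behaviour x clicks the ad."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# The platform would rank candidate ads by predicted click probability:
# heavy engagement plus a profile match scores above a disengaged mismatch.
print(click_probability([0.9, 0.8, 1.0]), click_probability([0.1, 0.1, 0.0]))
```

The more behavioural signal such a model can ingest, the better its predictions, which is the direct link between targeting accuracy and the incentive for pervasive tracking.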

The source of danger in the use of AI for targeted ads is that of misaligned goals in AI. The problem boils down to the fact that AI will solve the problem we ask it to solve, not the problem we want it to solve. An analogy is the mythical king Midas, who asked the gods that whatever he touched be turned into gold. His wish was granted, and soon king Midas desperately begged the gods to undo his power, as he could no longer eat or drink.

Accessible explanation of the problem of “misaligned goals in AI”

Stuart Russell considers content recommendation algorithms, at the heart of targeted advertising, to be an instance of the problem of misaligned goals in AI. He argues that by optimizing for ad profitability, recommendation algorithms have spread false narratives and conspiracy theories.
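The misalignment can be illustrated with a deliberately simplistic toy recommender. Everything below is invented: the catalog, the watch times and the accuracy scores are stand-ins, and the “algorithm” is a one-line greedy choice, not anyone’s real system.

```python
# Toy illustration of goal misalignment: a recommender that greedily
# maximises watch time favours extreme content, even though the designer
# never asked for that. All items and numbers are invented.

# Each item: (name, expected watch minutes, factual-accuracy score 0..1).
catalog = [
    ("calm documentary",      4.0, 0.95),
    ("news summary",          3.0, 0.90),
    ("outrage clip",          9.0, 0.30),
    ("conspiracy deep-dive", 12.0, 0.10),
]

def recommend(catalog):
    # The proxy objective: pick whatever keeps the user watching longest.
    return max(catalog, key=lambda item: item[1])

choice = recommend(catalog)
print(choice[0])  # the engagement-maximising pick, regardless of accuracy
```

The objective only mentions watch time; nothing in it mentions accuracy, so the proxy objective quietly trades accuracy away. Real recommenders are vastly more sophisticated, but the structural problem is the same.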

In this article we argue that targeted advertising is a business model with irresistible incentives for large-scale user surveillance and profiling, which leads to a wide range of negative societal consequences. The sheer amount of profit to be made from running targeted ads makes the task of mitigating these consequences a never-ending game of whack-a-mole between regulators and big tech companies, a game that is practically unwinnable for the regulators. We argue that an outright ban on targeted advertising is a necessary – but perhaps insufficient – condition for eliminating the perverse incentive of tech companies to perform mass tracking and surveillance of their users.

User Passivity & Addiction

Are recommendation engines turning us into zombies?

Have you ever found yourself mindlessly scrolling through social media apps, only to realize that more than an hour has passed? Do you get anxiety when you realize that you don’t have your mobile phone with you? How long are you on your phone before you go to sleep?
A survey of 1,000 Americans over 18 conducted by reviews.org asked these questions and many others. It found that, on average, Americans check their phones 344 times per day; 48% feel a sense of panic or anxiety if their phone battery drops below 20%; 64% of respondents check their phone on the toilet; 71% report checking their phone within 10 minutes of waking up; and respondents spend 50 minutes on their phone before going to sleep. All in all, surveyed individuals report spending on average 3 hours and 19 minutes daily on their phones. Even though the survey doesn’t break down the time spent on advertisement-driven apps, it does specify that people spend 1.5 hours on social media, 1.25 hours on games and other apps, and only 36 minutes calling or texting. Finally, 48% of surveyed Americans consider themselves addicted to their phones.

The Attention Economy – talk at the Royal Society for Arts
James Williams, a former Google product strategist who quit to become a philosopher at Oxford University

Wasting time glued to our phones might seem harmless, but more than two hours per day could surely be put to better use, if only for resting or exercising. In recent years, sleep quality has become a popular topic in the media, and recent research reports that around 69% of 13-16 year old Europeans do not get the recommended amount of sleep.

It is important to point out that AI-driven recommendation engines are just one part of the broad set of tactics ad platforms use to capture users’ attention and time. Deliberate features and design choices such as “infinite scrolling” highlight the incentive online advertising companies have to keep users hooked in order to maximize the effectiveness of ads.

Surveillance & Profiling

Have you ever wondered why Google spends billions on the development of the Android mobile operating system, only to make it open source and give it away for free? Looking at the sheer amount and scope of data collected in your Google profile, concerns about surveillance and profiling are more than justified.

An indirect but significant downside of targeted advertising is its reliance on privacy-sensitive data. Because advertisers actively seek, buy and collect information about customers in order to reach the specific target audience they are after, they gather data specific to an individual, rather than just their demographics. This often happens without the awareness and consent of those individuals, and in turn contributes to the creation of their ‘online profiles’. As these profiles are stored on servers and in data centers, they carry the risk of being leaked and ending up in the wrong hands. Aside from that, by accepting cookies and other agreements, people take the risk of being tracked digitally. Scientists warn that this use of behavioral data could spiral out of control into harmful social control.

Studies show the potential danger of data being leaked to unknown third parties when mobile targeted ads are placed by major ad networks (e.g. Google’s AdMob). By combining profiles with observed browsing and app activity, these ads indirectly encode a highly personalized picture of both the user’s demographics and interests. It has also been shown that personalized in-app advertising can leak potentially sensitive personal information to any third-party app that hosts ads. Attacks that exploit Facebook’s ads also happen frequently. These attacks infer and extract private user information, which highlights the potential dangers of the indirect profiling that targeted advertising brings to the table.

“But, I have nothing to hide…” is a frequent response when someone is confronted with the importance of privacy. As everything you do, write and say online will almost certainly be there forever, it is very important that sensitive and private data stay private. Standards and morals that are acceptable today might change in the future, and we might be judged by them. We have all been young and foolish, and we should value not being haunted by our past actions. While getting publicly ‘canceled’ over one’s browsing history seems like a dystopian scenario, it could well become reality in the near future. Returning to the argument above, some people might still feel they have nothing to hide, even after learning about the privacy violations of profiling caused by targeted ads. The issue here, therefore, is not so much profiling itself as it is consent.

Echo-chambers & Polarization

Even though the main purpose of tracking and profiling by ad companies is to increase traffic around their products, user data collected through these technologies can also be used to manipulate user opinions. This process creates so-called “echo chambers”: people keep being shown their preferred information, which in itself reinforces further selective exposure. Because people focus on their adopted worldview, when shown such ads they tend to absorb confirming claims better than disproving ones. This in turn fuels group polarization between communities, which has harmful implications for democratic societies.

According to Google, around 60% of YouTube’s video watch time comes from recommended content. In the Netflix documentary Behind the Curve, Mark Sargent (the “Flat Earth King”) said that videos suggested to him by YouTube’s recommendation engine helped him to finally believe the earth was flat and find like-minded individuals to form an organization. Even though believing in conspiracies such as the earth being flat does not seem harmful at first sight, the underlying principles could do harm when believing in theories that can affect public health or can lead to violence. In fact, some studies show that people who believe in conspiracy theories are more likely to endorse violent actions, such as terrorist attacks. 

Negative consequences of targeted advertising include political polarization and the loss of political freedom. Citizens may be shown resources that are more likely to push them toward certain desired political perspectives and voting choices. The infamous Cambridge Analytica scandal was arguably the first high-profile scandal related to targeted advertising and the use of social media in ethically dubious ways. From 2015 to 2017, stories started surfacing that exposed the shadowy business practices of the eponymous company. These stories revealed how psychological profiles of more than 50 million US Facebook users, gathered without their consent, were used for targeted political ads. Some of these ads were aimed at demotivating supporters of rival candidates from voting. It is reported that 32 countries across all continents have been manipulated in similar ways by the company.

In addition to this malicious exploitation of user data, studies showed that political ads run by the Russian Internet Research Agency (IRA) prior to the 2016 U.S. elections exploited Facebook’s targeted advertising infrastructure to efficiently target ads on divisive or polarizing topics (e.g., immigration, race-based policing) at vulnerable populations. These ads were specifically targeted at people who felt unhappy about the status quo, which shows how behavioral ads impact vulnerable people in particular. The effectiveness of divisive targeted ads is partly due to the fact that they do not make the intention of the author explicit. A targeted political ad may present itself as an ad for something totally unrelated, and someone who runs into it might view it less critically than if they had known its true nature.

Another problem with these targeted ads is that they often make use of misinformation. Recent studies conducted by the EU show how misinformation directly and indirectly impacts human rights such as freedom of speech and the right to freedom and privacy. For example, it is reported that personal mindsets about Covid vaccinations have been widely manipulated through these ads. It is estimated that over 80 million COVID-19 related ads containing misinformation have been removed by Google alone in 2020.

Is a complete ban the only way?

A recent study commissioned by the European Parliament identified five areas of risk to democracy brought about by the business practices of social media. The report discusses these risks in terms of democracy and social media; however, they are in fact present in any business relying on targeted ads and have implications beyond democracy.

1. Surveillance: social media platforms extract and combine user data to keep users engaged and make profit from selling targeted advertising.
2. Personalization: social media provide personalized content to increase the relevance of information for each user and to bolster engagement.
3. Disinformation: social media facilitate the spread of false information either as an unintended consequence or due to certain users’ efforts to manipulate the platforms.
4. Moderation: social media platforms commonly remove or downgrade content and ban users in order to enforce internal rules and prevent alleged harms.
5. Micro-targeting: social media enable targeted advertising that uses granular behavioural data to profile people and to covertly influence their choices.

EPRS – Key social media risks to democracy

Alternative: banning political ads only

Why ban all targeted ads when the majority of the problems are caused by targeted political ads? These form the real threat to society, right?

First and foremost, even if political ads were the only problem, it is wildly implausible that they could be given a definition precise enough to enact a fair and effective ban. An ad can be political even if it doesn’t mention a candidate or a party explicitly: it can advertise policies specific to a party, or promote certain prejudices and ideologies.
A report by Duke University, “The Only People Who Got Hurt Were the Small Guys: Assessing Winners and Losers from the 2020 Platform Political Ad Bans,” found that the 2020 political ad bans by big tech companies (e.g. Google, Facebook and Amazon, among others) failed in their goal of slowing the spread of misinformation, and even hurt poorer campaigns and Democrats more than wealthier campaigns and Republicans. The report actually recommends lifting the ban on political ads for the 2022 midterm elections, but suggests that “dissemination of misinformation intended to suppress the vote” should be criminalized.

What if we instead improve legislation around consent to tracking and data logging, the rationale being that as long as the user understands and agrees to the terms and conditions, or has the option to opt out, targeted ads would be acceptable? The user agreed to it, after all, right?

According to another recent EPRS study, the current legal basis for targeted advertising under GDPR legislation relies on user consent. The study describes how users’ consent is widely abused, as “businesses are able to induce most users, in most situations, to consent to any kind of processing for advertising purposes.” Users are misled either by obfuscating the terms and conditions in lengthy, incomprehensible documents or by conditioning the availability of services on the acceptance of terms related to personal data logging and behavior tracking.

The popularity of a Google Chrome browser extension called “I don’t care about cookies”, which automatically accepts all cookies on all websites, indicates that for many internet users the act of consenting is too inconvenient, regardless of the terms and conditions.

In order to prevent the negative societal consequences of targeted ads, users need to be protected by default. It is becoming clear that although existing consent regulations do technically offer an opt-out from many tracking mechanisms, these opt-outs are rarely used and represent a burden both on users and on companies not directly involved with targeted ads.

Alternative: ban targeting based on specific traits

The European Parliament has made moves to ban targeted ads based on sensitive variables such as health, religion or sexual orientation, to prevent future invasive advertising. Once fully agreed upon by the Parliament, the final rules could come into force in 2023. While these initiatives are commendable, they are naive: even though a variable explicitly encoding, say, gender might be banned, gender can in practice be predicted with high accuracy from other correlated variables, such as shopping habits.
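The proxy-variable problem can be demonstrated in a few lines of code. The scenario below is entirely synthetic: an invented correlation between a sensitive attribute and a shopping category, recovered by nothing more sophisticated than counting.

```python
# Synthetic illustration: even if a sensitive variable (here, "gender") is
# banned from the targeting inputs, it can often be predicted from
# correlated proxies. The data and the correlation strength are invented.
import random
from collections import Counter

random.seed(1)

def make_user():
    gender = random.choice(["A", "B"])
    # Hypothetical correlation: group A shops in category X 80% of the time,
    # group B only 20% of the time.
    category = "X" if (random.random() < 0.8) == (gender == "A") else "Y"
    return category, gender

users = [make_user() for _ in range(10000)]

# A trivial "model": the majority gender per shopping category.
counts = {"X": Counter(), "Y": Counter()}
for category, gender in users:
    counts[category][gender] += 1
predict = {cat: c.most_common(1)[0][0] for cat, c in counts.items()}

accuracy = sum(predict[cat] == gender for cat, gender in users) / len(users)
print(f"recovered the banned attribute with {accuracy:.0%} accuracy")
```

Even this one-feature majority vote recovers the banned attribute far better than chance; a real targeting system, with thousands of correlated signals per user, can do far better. Banning a column does not ban the information it carries.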

Summing it up

The European Parliament is currently working on two legislative packages, the Digital Markets Act (DMA) and the Digital Services Act (DSA), and as part of this legislative work it has commissioned independent studies published through the European Parliamentary Research Service (EPRS).
The EPRS study Key social media risks to democracy highlights that certain negative effects of social media are by-products of business models focused on engagement at all costs: the engagement required for targeted ads.

Interestingly, the European Data Protection Supervisor published an opinion on the draft form of the DSA urging legislators to go beyond transparency requirements and consider

“a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking”.

The Democratic Party of the United States of America went even further last month by proposing the Banning Surveillance Advertising Act, which would ban personalized ads altogether, without exception.

One could argue that the negative effects of targeted advertising can be mitigated through developments in technology, or through legislation that protects specific aspects such as privacy or imposes stringent content moderation requirements to suppress disinformation and hate speech. In contrast, if the negative effects cannot be realistically and sustainably decoupled from the business model, then the only remaining course of action is to ban it. Given initiatives such as the Banning Surveillance Advertising Act, as well as the opinion of the European Data Protection Supervisor, it seems that the latter position is gaining traction.

Conclusion & Call to action

Trying to mitigate the negative societal consequences of targeted advertising by regulating “around the edges” faces both technical difficulties and financial and political pressure. From a technical point of view, trying to regulate away the negative consequences of recommendation algorithms is fundamentally hampered by the goal misalignment problem of AI, which is currently an open research problem. From a financial and political perspective, any attempt at regulation faces a powerful lobby working to water down requirements. Once regulations are enacted, big tech companies have the legal consulting resources to ensure that they comply just enough to avoid sanctions, without necessarily complying with the spirit of the rules.

Just like the problem of misaligned goals in artificial intelligence, in the business of targeted advertising there is a misalignment between the need of governments and regulators to mitigate negative societal impacts and the business needs of the companies. Attempting to develop regulation that constrains how an extremely complex undertaking such as targeted advertising operates will result in overly complex rules, which become both ineffective and burdensome to businesses.

The Digital Services Act (DSA), currently being worked on by the European Parliament, touches upon recommender systems and targeted advertising and includes regulations around these subjects, but it falls short of asking for an outright ban on targeted ads. As such, the perverse incentives brought about by targeted advertisement still stand. Because it lacks an outright ban, the regulations in the DSA proposal risk getting watered down into a form that is just business as usual for large tech companies. It is imperative that anyone who is interested in the impact of AI on society today forms an opinion on the articles of the DSA and reaches out to their MEP to highlight any shortcomings. The time to act is now, before the legislation gets approved!
