October 25, 2020

Report: The Dangers of AI-Powered Advertising (And How to Address Them)

Fresh research from Mozilla Fellow Harriet Kingaby examines how the digital advertising ecosystem can surveil, misinform, and endanger consumers around the globe. Kingaby also offers insights on potential reforms.

Twenty years ago, digital ads were little more than online billboards — pesky pop-ups that didn’t know who was seeing the ad, or why.

Today’s AI-powered digital advertisements are exponentially more sophisticated. These ads can profile consumers and segment them into astonishingly precise audiences. And these ads are highly personalized, from the language and images used to the price of the item being sold.

This AI-powered advertising provides consumers around the world with “free” access to products and services. It’s highly effective for advertisers, and highly lucrative for platforms. But there are grave harms, and consumers bear the brunt of them.

Today, fresh research by Mozilla Fellow Harriet Kingaby examines these harms on a global scale. In the report titled “AI & Advertising: A Consumer Perspective,” Kingaby identifies seven major threats that AI-powered ads present to consumers, from discrimination to misinformation. Kingaby details the consequences of those threats, and how they will likely grow worse as AI technology advances.

The report also identifies five major reforms that could mitigate these harms, and the steps that civil society, regulators, and industry must take to realize them.

Says Kingaby: “Digital advertising is a booming industry: over $300 billion in 2019 alone. It’s also the primary business model sustaining the internet, humanity’s most important communications tool. But as AI-powered advertising grows more pervasive and sophisticated, it is doing so without guardrails. There are few rules to ensure it doesn’t surveil, misinform, or exclude consumers. If the industry doesn’t undergo major reform, these problems will only grow more pronounced.”

Kingaby is a UK-based Mozilla Fellow embedded at Consumers International, where she researches the consequences of AI-enhanced advertising. Kingaby is also co-chair of the Conscious Advertising Network, a coalition of organizations supporting ethics in advertising.

The seven key harms identified in the report are:

● Excessive data collection. In order to tailor individual ads, multiple companies must collect and store huge amounts of data on consumers. Meanwhile, consumers are passive actors in adtech systems: something to be profiled and targeted, with no meaningful choice about how much data they hand over, to whom, or for what purpose.

● Discrimination. The personalisation of ads inherently restricts the products, services, and content consumers see. This can lower consumers’ aspirations, restrict lifestyle choices, and hide products, services, or events from entire groups of consumers.

● Harm to the vulnerable. Digital advertising may encourage compulsive and harmful behaviour, mental health issues, or unsustainable consumption. Data that predicts when consumers are in particular emotional states is already in use, and targeting can also be used to single out consumers or groups who are particularly vulnerable or otherwise receptive.

● Online scams and misinformation. Fake news and misinformation have a lucrative business model via advertising, which favours content that garners a reaction. Social media sites, where disinformation can spread, have ad-based business models, and “addictive” interventions are designed to keep consumers on the sites for longer, enabling platforms to serve more ads.

● Limited agency. Consent mechanisms for advertising under GDPR and CCPA are poorly designed, and often nudge consumers into making choices that favour advertisers. Privacy policies and other terms and conditions are overly long, sometimes non-compliant, and frequently fall short of educating consumers. How the system works is demonstrably unclear to the average consumer, making it hard, if not impossible, to make informed choices or obtain redress in the case of harm.

● Environmental harm. The number of data centres worldwide has grown from 500,000 in 2012 to more than 8 million today, and the energy they use continues to double every four years, giving them the fastest-growing carbon footprint of any area within the IT sector. Researchers estimate that the tech sector will contribute 3.0–3.6% of global greenhouse emissions by 2020, more than double what the sector produced in 2007, and that its estimated 2020 footprint is comparable to that of the aviation industry. Online advertising consumed between 20.38 and 282.75 TWh of energy in 2019, and 11.53–159.93 million tons of CO2e were emitted to produce that electricity.

● Hate speech. Global hate crimes are on the rise, and have been linked to social media, polarisation caused by personalisation, and “filter bubbles.” Platforms have struggled to keep up with policing hateful content, and AI is not yet sophisticated enough to be efficient. Far-right commentators and other hate preachers are continuing to make money through digital advertising on the open web or through platforms such as YouTube — which in turn radicalizes young people.

The report also identifies a lack of cross-sector collaboration as a critical issue holding back progress. It calls for cross-disciplinary, mediated forums to be created, comprising digital rights groups, consumer protection experts, funders, publishers and advertisers.

Forums should ensure ethics by design in AI-powered advertising: identifying harms and creating new initiatives to solve them as they evolve, and monitoring the ‘unknown unknowns’ that arise from new technologies.

Forum priorities include:

● Maintaining consumer protection and human rights, using these as core design principles for new AI technologies.
● Proactive AI stewardship, using AI sparingly, tracking and acting on the emergence of harms in real time.
● Supply chain accountability, ensuring advertisers are able to take responsibility for their digital supply chains in the same way as their physical ones.
● Funding a healthy internet, directing ad budgets to support diverse voices, quality content, and accountable platforms.