Our research over the last 18 months has identified more than forty distinct methods, of which micro-targeting is only one, being used by political actors and private companies to influence your vote based on data collected about you. Here, we give an overview of the kinds of techniques used by the political influence industry, how they are being deployed, by whom and to what ends.
Your personal data is being used to target you in elections.
It’s common to find such sweeping statements in some of the hundreds of articles that have been written over the past ten years on the use of personal data in elections. Yet few of these articles break down the details of where this data comes from, how it is used and for what purpose. It is precisely these details, however, that would make such statements meaningful. Consider, for example, that in the 2017 Kenyan election, individual mobile phone numbers were bought to reach certain groups of people1; that in the UK’s Brexit referendum, credit scores were bought to pitch certain individuals on leaving the EU2; or that in the 2000 US presidential election, Republicans mined data on those who registered for fishing and shooting licenses to identify potential allies3. This kind of knowledge helps to more adequately frame the question: should we as citizens care about how our data is being used to target us in political campaigns?
In light of recent controversies about Cambridge Analytica and Facebook’s roles in elections worldwide, we need to more deeply understand the data-driven methods that political parties, campaigns, candidates and private companies are using to influence us. Whilst many media reports have focused on a technique known as ‘micro-targeting’, our research has found that there are over forty different data-driven methods being employed by political campaigns worldwide.
Understanding these different methods is essential. It allows us to contextualise these practices in light of the current Cambridge Analytica and Facebook scandal, but also to examine the broader political influence industry. This research initiative has found that there are over 250 companies trading in digital and data-driven technologies for political influence.
There are some crucial differences between these political data companies and Cambridge Analytica. Most importantly, some of the activities Cambridge Analytica are alleged to have engaged in are clearly illegal: in particular, their overreach into Facebook personal data collection through the personality test app ‘This Is Your Digital Life’ and their alleged sub-contracting to Israeli hackers in the Nigerian elections4. Our research has found that outside of these illegal methods, most of the techniques Cambridge Analytica and their alleged local affiliates, sub-contractors and consultants were selling are actually common throughout the political influence industry5. These include: a) the use of data modelling to look for trends and assess attitudes across populations, b) the use of personal data to identify and single out targets, c) profiling individuals based on their personality and behavioural tendencies, d) a range of testing and adaptation tools, such as surveys and A/B message testing, and e) the use of platforms such as Facebook and Google’s search engine for micro-targeting. Indeed, many of these techniques have a longer history in the context of pre-digital political campaigning.
Another important difference between Cambridge Analytica and many of the other companies we reviewed in our research is what they were willing to do with the methods they were using. Whilst the practices may have been commonplace and indeed have a longer history, only a sub-set of companies have been and would be willing to go as far as Cambridge Analytica did in the level of manipulation in their messaging, their choices of clients and what they were willing to do for them. For example, most political parties use ‘search influence’ as a way to sway voters’ opinions about the position of candidates or parties on certain issues. But some of the search results attributed by the press to Cambridge Analytica’s campaigns were bordering on fake news6.
Overall, Cambridge Analytica were outliers in the sector, but not necessarily due to the tools and techniques they used; rather, because of the way they navigated the ethical and legal boundaries of using personal data for political influence. As such, we need to separate the strategies and tactics (what they were willing to do and how) from their tools and techniques (what available technologies and methods they used). Doing so elucidates key questions: what is the real harm to the democratic process caused by Cambridge Analytica and Facebook? How does this compare to the broader industry and where are the ethical lines?
To give an overview of the tools and techniques used by the political influence industry, we have organised our findings into three main categories. We chose these categories not according to industry language or technical frameworks, but rather for their political significance:
Data as a Political Asset: valuable stores of existing data on potential voters exchanged between political candidates, acquired from national repositories or sold or exposed to those who want to leverage them.
Data as Political Intelligence: data that is accumulated and interpreted by political campaigns to learn about voters’ political preferences and to inform campaign strategies and priorities, including creating voter profiles and testing campaign messaging.
Data as Political Influence: data that is collected, analysed and used to target and reach potential voters with the aim of influencing or manipulating their views or votes.
We explain each of these methods in more detail, including examples of how they are used in the political process. We have started our series with an example of the use of ‘data as intelligence’ with a feature on psychometric profiling.
We want this in-depth exploration of methods to advance the discussion about what needs to be done to better understand and regulate the influence industry, so that:
- Regulators, lawyers and policy makers can put checks and balances in place, clarify their ethical boundaries and reduce abuses in the future.
- Political and social scientists, researchers and journalists can advance discourse about what impact these methods have, if any, on the political process, democracy and society at large.
- Political parties and strategists can navigate the choices in front of them, understand the relevance of their actions and balance questions of effectiveness with appropriateness.
- The political data and influence industry can better distinguish between legitimate methods and unethical practices, both for the benefit of their clients and for the decisions they make about what they are and what they are not willing to do.
- Individuals can understand why, when and how they are being profiled, monitored and targeted in the context of political influence.
SEN. GARDNER: "… Mr. Zuckerberg, a couple of other questions I think that gets to the heart of this expectation gap as I call it, with — with the users. Facebook, as I understand it, if you’re logged in to Facebook with a separate browser and you log in to another — log in to another article, open a new tab in the browser while you have the Facebook tab open, and that new tab has a Facebook button on it, you track the article that you're reading. Is that correct?"
ZUCKERBERG: "Senator, I ..."
SEN. GARDNER: "In the new tab."
ZUCKERBERG: "... I think that there — there is functionality like that, yes."
SEN. GARDNER: "Do you think users understand that?"
ZUCKERBERG: "Senator, I think that they — that there is a reasonable — the — I think the answer’s probably yes for the following reason, because when we show a “Like” button on a website, we show social context there. So, it says here are your friends who liked that. So in order to do that, we would have to ..."
SEN. GARDNER: "But if — but if you’ve got your Facebook browser open and you open up the article in the Denver Post, and it has a Facebook button on it, you think they know — consumers, users know, that Facebook now knows what article you’re reading in the Denver Post?"
ZUCKERBERG: "Well, we would need to have that in order to serve up that — the — the Like button and show you who your friends were who had also liked that."
SEN. GARDNER: "So, I — I — I — and I think that goes to the heart of this expectation gap because I don’t think consumers, users necessarily understand that. I mean, in going through this user agreement, as others have, you do need a lawyer to understand it. And I hope that you can close that expectation gap by simplifying the user agreement, making sure that people understand their privacy."
The Facebook hearings, held in Washington D.C. and Brussels in early 2018 and broadcast globally, shed new light on the way data-driven technology companies think about themselves. For thousands of people, this was the first time they had real insights into the mechanisms that transform their personal data into profit. And it may have been the first time they had considered how, somewhat counterintuitively, Facebook has become one of the richest companies in the world by giving away free services.
This was a public outing of a largely opaque, multi-billion-dollar sector built on harvesting and analysing individuals’ data, revealing how Facebook failed to protect that data when others sought to leverage it in political campaigning. Why did it take such a public scandal to hold Facebook’s practices to account?
The brief exchange between Mark Zuckerberg and Senator Cory Gardner outlined above shows just how far technology companies are from understanding the concerns and competencies of their users. And, conversely, just how far citizens are from understanding what they are signing up for when they click ‘I agree’. If this is the case when it comes to users understanding how they are being tracked when browsing the web, then it is certainly the case when it comes to political advertising.
But how could users fully comprehend how their data is being collected and used? Due to the closed nature of the industry, it is nearly impossible to find out exactly how technology companies like Facebook process data and how their clients utilise it. Because of this, technologists, journalists, lawyers and activists over the past few years have been forced to try to ‘reverse-engineer’ Facebook’s platform to speculate how it may work7. Some of these efforts are particularly focused on trying to understand political ad tracking.
Facebook has since made a series of concessions on how it displays political ads and there is some debate that the changes in European data protection law may further resolve some long-term issues. But fundamentally the system is broken. Technology companies are not transparent about their practices and it takes an enormous amount of ‘translation’ for these insights to make sense to normal users. In addition, technology companies want to take credit for the positive impacts they have on society – of which there are arguably many – but don’t want to take responsibility for the negative impact they have. This stance raises questions not only about their levels of transparency, but also accountability and responsibility.
Similarly, the Cambridge Analytica hearings in the UK in April this year by the House of Commons Digital, Culture, Media and Sport Committee were a lesson not only in what technology and data-driven companies have been doing in the political context, but also a sign of the gap in understanding between the companies and those leading the enquiries. Testimony by ex-staff from Cambridge Analytica described a set of practices and dealings with dubious characters and events, combined with examples of how the company utilised political marketing techniques in combination with data analytics8. Cambridge Analytica’s ex-business development director, Brittany Kaiser, explained the large-scale purchase of commercial data and the use of this data to target people through platforms such as Facebook and Google. In her testimony, she stated that most people may not know that Google Search results can be paid for in the form of advertising. But she also pointed out that it was ‘an old-school tactic’ that was widely accepted in political campaigns as a way of reaching voters in the run up to an election9.
The scandal was accompanied by a set of leaked slide-decks, emails and memos. Those that circulated in the media and on the internet, showing Cambridge Analytica’s sales pitch to the Trump campaign, gave the story a ‘Snowden leak’ atmosphere. Yet many of the techniques outlined in these documents were already familiar to the digital marketing and advertising sector, leading some to wonder whether the term ‘whistleblower’ used by the media was appropriate at all, and why it should take leaks to inform the government, the media and citizens about such activities. In democratic countries, surely such information should be transparent and freely available, especially after a campaign has been won?
In the days following the scandal, web pages providing case studies of political data services in action by a variety of small companies and large-scale platforms, which had previously been public, began disappearing from the internet. These were web pages that this research project had been following, and in some cases archiving, for the past 18 months. These disappearing pages had showcased how different kinds of data companies, including Facebook, had used data-driven technologies in political campaigns and elections around the world, including campaigns for the UK Conservative Party, the Scottish National Party and the Canadian Liberal Party, US senatorial races, and elections and gubernatorial races in Mexico. Their overnight disappearance raised new questions. If harvesting data in political campaigns really is ‘business as usual’ and a legitimate part of the political campaigning process, then why are public web pages removed in the aftermath of a scandal?
Below is a screenshot from Facebook's page on marketing success stories. As can be seen, "Government and Politics" was a distinct industry category.
Some "success stories" were featured with more detail on the tools and techniques offered by Facebook. Here we see highlights of Claudia Pavlovich's campaign to "reach and influence" Mexican voters.
More recently, Facebook has removed the "Government and Politics" category, as well as all traces of the once-featured success stories.
Not all parts of the political influence industry are as opaque. Some of the companies mapped in this research project openly promote their tools and techniques and provide clear details about how they work. In some cases they publish case studies, explainer videos, or diagrams of the services they sell, or offer those who sign-up a ‘demo’. Some of the companies featured in this research highlight their methods, but do not go into detail about how they work, or they publish information on only some of their services and clients. Others provide little to no information, or take down their websites after a particular campaign or news story hits the headlines.
Because of the variations in transparency and available information, this research project has augmented online research with other sources. This includes research through media articles and academic studies, direct interviews with companies and digital strategists, and participant observation at industry conferences and presentations. As such, this report reflects an in-depth, global search and analysis of publicly available information combined with information gained through industry events and interviews at the national and international level.
Through our research to date we have identified more than forty different methods that use personal data in the political process, but we expect this to be a dynamic and growing list.
In our reports, we examine what can be learned from the outside, explore how each method works and highlight examples of its use in political campaigns around the world. You can expect to see an overview of each of these showing their application in the political sphere worldwide over the coming weeks. We have started this series with a piece on ‘Data as Intelligence: Data collected on you based on What Kind of Person You Are: Psychometric Profiling.’
Below is a breakdown of our analysis of methods used by political parties. While the methods are not strictly linear and can occur in tandem with each other, we've identified a top level progression of data acquisition (data as an asset), analysis (data as intelligence) and uses for influence (data as influence).
How your data is used to influence you in political campaigns

DATA AS AN ASSET

Data acquired about you as a political asset

Valuable stores of existing data on potential voters, and how they are exchanged between political candidates, acquired from national repositories or sold or exposed to those who want to leverage them.
1. Political data about you

Data accumulated directly by political parties on their constituencies, supporters and allies.

• Voter databases
• Trade and accumulation of data on voters within parties and across candidates
• User data from party- and politician-led voting and canvassing apps
• Voter-customer matching and custom audiences
2. Consumer data about you

Data about you collected commercially and privately, which is then analysed, packaged and sold as political data.

• Lifestyle and consumer habits (including merging of offline and online data sets), e.g. periodicals, mobile phone location data
• Private service data, e.g. insurance data, credit rating data
3. Public data about you

Data about you collected on the open internet or through public data collection, which is packaged and sold for political use.

• Public data that is sold to the commercial sector, combined with other data, and then sold back to political parties, e.g. transport, health, ID, census, social security, licenses
• Public data that is accessed by political parties
• Inferred public data from open information on the internet, for example data on Google Maps used to infer neighbourhood or household type
4. Exposed data about you

Voter databases or party databases which are leaked, breached, hacked or exposed.

Examples of this exist worldwide. Some databases are purely voter lists; others cover specific communities, such as expats or migrant workers, and can contain detailed information, including passport scans and fingerprints.
DATA AS INTELLIGENCE

Data collected on you and used for digital listening to gain insights and make strategic decisions within a political campaign.

How data is accumulated and interpreted by political campaigns to learn about voters' political preferences and to inform campaign strategies and priorities, including creating voter profiles and testing campaign messaging.
5. What kind of person you are

Data collected on you and then analysed with the purpose of ascertaining your values, what motivates you and what you may respond to.

• Psychometric tests, such as OCEAN (IBM Watson, Facebook, etc.)
• Other kinds of behavioural and personality-based analytics
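To make the mechanics concrete: at its simplest, psychometric scoring turns questionnaire answers into numeric trait scores. The sketch below is purely illustrative — the item keys, the 1–5 answer scale and the reverse-scored items are invented for demonstration, and real OCEAN instruments (and the predictive models built on top of them) are far more elaborate.

```python
# Illustrative sketch of OCEAN trait scoring from a hypothetical questionnaire.
# Each item maps to one trait; reverse-keyed items are flipped (6 - answer on
# a 1-5 scale) before averaging per trait.
ITEMS = {
    "q1": ("openness", False),
    "q2": ("openness", True),          # reverse-keyed
    "q3": ("conscientiousness", False),
    "q4": ("extraversion", False),
    "q5": ("agreeableness", True),     # reverse-keyed
    "q6": ("neuroticism", False),
}

def score_ocean(responses: dict[str, int]) -> dict[str, float]:
    """Average 1-5 Likert responses into a score per OCEAN trait."""
    totals: dict[str, list[int]] = {}
    for item, answer in responses.items():
        trait, is_reversed = ITEMS[item]
        value = 6 - answer if is_reversed else answer
        totals.setdefault(trait, []).append(value)
    return {trait: sum(vals) / len(vals) for trait, vals in totals.items()}
```

The political application lies not in the scoring itself but in what happens next: linking such scores to voter files and using them to decide which message a given personality 'type' receives.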
6. What you are interested in

Data collected on you as you browse the web and use different devices and apps, with the purpose of gaining insights into your habits, behaviours, values, interests, hobbies and affiliations.

• Cookies and third-party cookies
• Cross-device targeting
• ISP data
• Tracking pixels
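A tracking pixel itself is technically trivial, which is part of why it is so pervasive: a 1x1 image embedded in a page or email whose URL carries identifiers, logged by the tracker's server every time it loads. The sketch below uses an invented tracker domain and invented parameter names (`uid`, `cid`) to show what the embedding looks like and what a server could record from a single hit.

```python
# Illustrative sketch of a tracking pixel. The embedding side places something
# like this in a page or email (domain and parameters invented):
#
#   <img src="https://tracker.example.com/pixel.gif?uid=12345&cid=election2018"
#        width="1" height="1">
#
# When the image loads, the tracker's server sees the URL and request headers.
from urllib.parse import urlparse, parse_qs

def log_pixel_hit(request_url: str, headers: dict[str, str]) -> dict[str, str]:
    """Extract the identifying fields a tracker would record for one hit."""
    query = parse_qs(urlparse(request_url).query)
    return {
        "user_id": query.get("uid", [""])[0],
        "campaign_id": query.get("cid", [""])[0],
        # Referer reveals the page being read; User-Agent, the device used.
        "page": headers.get("Referer", ""),
        "device": headers.get("User-Agent", ""),
    }
```

Accumulated over millions of loads, these individually innocuous records become the browsing histories that feed the profiling described in this section.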
7. What you are talking about online

Data collected, aggregated and analysed on what you are discussing and doing online, often referred to as ‘digital listening’ or ‘social media listening’, with the purpose of gleaning sentiment, political opinions and positions.

• Social media and online forum monitoring
• Sentiment analysis
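At its crudest, sentiment analysis counts words against positive and negative lexicons. The toy scorer below uses invented word lists; commercial digital-listening tools rely on far larger lexicons or trained language models, but the underlying idea — reducing what people say online to a score a campaign can aggregate — is the same.

```python
# Minimal lexicon-based sentiment scoring (word lists invented for illustration).
POSITIVE = {"support", "great", "trust", "win", "hope"}
NEGATIVE = {"corrupt", "fail", "lies", "fear", "scandal"}

def sentiment(text: str) -> float:
    """Score in [-1, 1]: balance of positive vs negative lexicon matches."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched
```

Run across millions of posts and grouped by region or demographic, even a scorer this crude yields the kind of opinion maps that inform campaign strategy.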
8. What you respond to

Mass testing and modification of narratives, messages, advertisements or visual communications, with the purpose of monitoring and analysing actions taken.

• A/B testing
• Dynamically and algorithmically generated ads
• Tracking performance of ads, ‘engagement’ and ‘conversion rates’
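A/B message testing reduces to comparing conversion rates between ad variants and asking whether the difference is larger than chance would explain. The sketch below uses a standard two-proportion z-test with purely illustrative numbers; campaign platforms wrap the same arithmetic in dashboards and automated variant selection.

```python
# Sketch of A/B message testing: compare the conversion (e.g. click-through)
# rates of two ad variants with a two-proportion z-test.
import math

def ab_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative: variant B converts 60/1000 vs A's 40/1000.
# |z| > 1.96 corresponds to significance at the 5% level.
z = ab_z_score(40, 1000, 60, 1000)
```

In practice campaigns run this comparison continuously across dozens of message variants, letting the 'winning' wording propagate automatically.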
DATA AS INFLUENCE

Data collected on you and analysed with the aim of targeting you as an individual and influencing your actions.

How data is analysed and used to target and reach potential voters, with the aim of influencing or manipulating their views or votes.
9. Who you are

Using personal profiles and behavioural data to more effectively micro-target and attempt to influence you through posts, messages and adverts.

• Social network advertisements and dark posts (e.g. Facebook, Snapchat)
• Direct mail and email
• Mobile advertising through apps and social platforms
10. Where you are

Using data on your whereabouts to influence and target you.

• Geo-fencing-based micro-targeting through platforms like Google or Facebook
• Geo-fencing apps
• Location-based apps and services
• Enhanced canvassing
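Geo-fencing rests on a simple calculation: is a device's reported location within a chosen radius of a point of interest, such as a rally venue or polling station? If so, the device becomes a target for ads. A minimal sketch, with illustrative London coordinates, of the distance check at the core of it:

```python
# Sketch of the core geo-fencing check: great-circle (haversine) distance
# between a device's reported location and a fence centre.
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Distance in kilometres between two latitude/longitude points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(device: tuple, fence_centre: tuple, radius_km: float) -> bool:
    """True if the device's (lat, lon) falls inside the fence."""
    return haversine_km(*device, *fence_centre) <= radius_km
```

Ad platforms perform this check (or an equivalent one) against the location data flowing from apps, then serve campaign messages only to devices inside the fence.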
11. What you are looking for

Using data about what you're searching for online to influence and target you.

• Black-hat search engine optimisation
• Advertising-driven search engine results
• Ad exchanges
12. What you are watching and reading online

Using your viewing habits to influence and target you.

Including micro-targeting through:
• Online video services, e.g. YouTube
• Network television
• Online media
13. Who you know

Using your networks and the groups and communities you associate with online to target and influence you.

• Voter/canvassing apps that leverage your networks
• Targeting political influencers
• Messaging and targeting based on communities, such as WhatsApp and Tinder
14. Experiments in using your data for political influence

New or developing experiments and attempts to use your data for any of the above purposes, or in combination with other techniques.

• Artificial intelligence (AI) applications to deliver more effective marketing, such as AI-powered voter intelligence
• AI-enabled ‘chatbots’
• Identity management and marketing automation
For the purposes of this project, we first identified methods according to how they are commonly referred to in the industry and then divided them into categories based on their individual and political significance. Because of this, a brief explanation of our rationale may be helpful:
- We chose to categorise techniques based on their political value and their significance to citizens, not how these methods are marketed or sold by the political influence industry, nor how they are generally referred to technically. We believe this is the lens through which these methods should be looked at and we found a distinct lack of such categorisation in other studies. Our intention is to separate elements in order to better understand them from a political point of view, and to understand their purpose and where this could lead as new technologies are developed.
- As with all categorisations, there are no perfect solutions. The following considerations guided our selection and grouping of the methods we studied:
- We are aware of the danger of over-simplifying tools and techniques. In choosing to place a method in a particular category, we have tried to emphasise what it is ‘mostly’ used for and where it has the most value. Our categories loosely map onto the process of working with data-driven technologies: acquisition (assets), analysis (intelligence) and application (influence). That said, many of these methods can be used in multiple ways: they may be deployed independently or in combination with others; some can be used simultaneously; some are combined with analogue political campaigning techniques.
- There are still important questions about the effectiveness of these methods, which warrant in-depth enquiry. In the meantime, a few proxy studies and indicators can give some insight into the answers. Our upcoming piece, ‘Does data-driven political campaigning actually work?’ is a first step in this analysis.
- Another necessary next step is an analysis of how these methods overlap and are used in combination with other techniques for digital manipulation, such as: fake news, bots and fake followers, hacking and internet and mobile phone network throttling and shutdown.
- A final element of our report represents developing technologies that are currently being experimented with in the political sphere. These experiments indicate not only what may develop in the coming years, but perhaps most importantly what attempts at digital political influence are being made regardless of their uptake or effectiveness.
Stephanie Hankey is the co-founder and Executive Director of Tactical Tech and is currently a Visiting Industry Associate at the Oxford Internet Institute at the University of Oxford. Her work combines her background in technology, design and activism. She currently writes, consults and teaches on the politics of data and ethics in technology design.
1 Tactical Tech's report "Data and Digital Election Campaigning in Kenya" ↩
2 Digital, Culture, Media and Sport Committee Tuesday 17 April 2018 - Parliamentlive.Tv. ↩
3 Bruce Newman, The Marketing Revolution In Politics: What Recent U.S. Presidential Campaigns Can Teach Us About Effective Marketing (Toronto Buffalo London: Rotman-UTP Publishing, 2016) ↩
4 Carole Cadwalladr, “Revealed: Graphic Video Used by Cambridge Analytica to Influence Nigerian Election,” The Guardian, April 4, 2018, sec. UK news ↩
5 See archived website of Cambridge Analytica and Channel 4's documentary ↩
6 Paul Lewis and Paul Hilder, “Leaked: Cambridge Analytica’s Blueprint for Trump Victory,” The Guardian, March 23, 2018, sec. UK news ↩
7 See the work of ‘Facebook.Tracking.Exposed’, ‘Who Targets Me’ and ‘Data Selfie’, Share Lab, Pro-Publica and Max Schrems↩
8 Digital, Culture, Media and Sport Committee Tuesday 17 April 2018 - Parliamentlive.Tv ↩
9 Lewis and Hilder, “Leaked.” ↩
Published 29 May 2018