Cambridge Analytica: Only the Tip of the Iceberg


Facebook co-founder and CEO Mark Zuckerberg has had to issue an apology over the Cambridge Analytica scandal. Courtesy of the White House


The Cambridge Analytica revelations demonstrate the growing pressure on social media companies to restrict the use of personal data for commercial purposes.

Revelations about Cambridge Analytica’s (CA) data mining and profiling practices have raised both legal and ethical questions. However, the current evidence suggests that CA’s activity was not an illegal data breach.

The methodology that brings big data and social media together to create psychological profiles is legal and has been in use for several years; its increased use should not come as a surprise.

However, while it is commonplace for commercial advertisers to use big data analytics to ‘hyper-target’ and ‘micro-target’ consumers based on their psychological profiles, it is far more concerning when the overall aim is to influence, change or reinforce political views.

A large part of the social media business model is the use and re-sale of personal information. Social media platforms such as Facebook enable advertisers to reach people in specific age brackets or with certain political affiliations or interests. Third parties who buy anonymised data can then analyse it and design marketing campaigns to reach a certain demographic.

For instance, Facebook’s Data Policy states:

 [W]e use all of the information we have about you to show you relevant ads. We do not share information that personally identifies you […] For example, we may tell an advertiser how its ads performed, or how many people viewed their ads or installed an app after seeing an ad, or provide non-personally identifying demographic information (such as 25 year old female, in Madrid, who likes software engineering).

The way Facebook permits advertisers to target users has evolved over the years. Initially, an advertiser could select from certain demographic options and could reach anyone deemed to fall within that demographic.

This process has become increasingly granular over time. Now, marketers can ‘cookie’ visitors to a website and display relevant ads to those people when they browse Facebook – a method known as ‘tracking’. Facebook even has a function that allows advertisers to target ‘lookalike’ audiences with similar characteristics to those that they have previously targeted.
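
To make the ‘lookalike’ idea concrete, the sketch below shows one simple way such matching could work in principle: each user is represented as a vector of demographic and interest scores, and candidates are selected when their vectors resemble an existing ‘seed’ audience. This is a hypothetical illustration only – the function names, feature encoding and threshold are assumptions, not a description of Facebook’s actual system.

```python
# Hypothetical sketch of 'lookalike' audience matching. Illustrative only;
# this is not Facebook's actual algorithm, and the features are invented.
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def lookalike_audience(seed_profiles, candidate_profiles, threshold=0.8):
    """Return candidate user IDs whose profiles resemble the seed audience."""
    # Average the seed profiles into a single 'centroid' of characteristics.
    n = len(seed_profiles)
    centroid = [sum(column) / n for column in zip(*seed_profiles.values())]
    return [
        user_id
        for user_id, features in candidate_profiles.items()
        if cosine_similarity(features, centroid) >= threshold
    ]


# Each vector might encode, say, age bracket, location and an interest score.
seed = {"u1": [1.0, 0.2, 0.9], "u2": [0.9, 0.1, 0.8]}
candidates = {"u3": [0.95, 0.15, 0.85], "u4": [0.1, 0.9, 0.2]}
print(lookalike_audience(seed, candidates))  # ['u3']
```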

The ability to deliver marketing content at scale to specific interest-based segments in a network is understandably perceived to be a progressive tool by marketers. However, the uncomfortable truth is that this manipulation and aggregation of data were never going to be used simply to persuade an individual to buy the latest pair of trainers or to book a holiday.

It was inevitable that, as ‘hyper-targeting’ became more advanced, it would also offer a low-cost, high-reward opportunity for those wishing to influence, change or reinforce political views.

Again, these methods are nothing new – in 2012 it was reported that US President Barack Obama’s re-election campaign had made extensive use of big data profiling methods to classify and target individual voters.

The insidious outcome of this micro-targeting is that individuals are now provided with content that is specifically tailored to their psychological and sociological profile. Different users see different types of material depending on their interests, preferences, ideologies, fears and insecurities.
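
As a purely illustrative sketch of how such tailoring might work in principle, the snippet below selects whichever message variant best matches a user’s inferred trait scores. The trait names, scores and ad variants are invented for the example – they are assumptions, not a description of CA’s or any platform’s real system.

```python
# Hypothetical illustration of profile-based content selection. The traits,
# weights and variants are invented; this is not any real firm's system.
AD_VARIANTS = {
    # Each variant emphasises a different psychological appeal.
    "security_focused": {"neuroticism": 0.9, "openness": 0.1},
    "novelty_focused": {"neuroticism": 0.1, "openness": 0.9},
}


def pick_variant(user_traits, variants=AD_VARIANTS):
    """Choose the variant with the highest weighted match to the user's traits."""
    def score(weights):
        return sum(user_traits.get(trait, 0.0) * w for trait, w in weights.items())
    return max(variants, key=lambda name: score(variants[name]))


# A user profiled as anxious and low on openness is shown the fear-based framing.
print(pick_variant({"neuroticism": 0.8, "openness": 0.2}))  # security_focused
```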

Some argue that this level of personal targeting amounts to psychological manipulation, which is particularly concerning given that most internet users likely do not realise they are being targeted in this way.

If there is one lesson to be learned from the CA revelations, it is the need for social media users to become more aware of the fact that they are presented with a carefully constructed and tailored version of the ‘truth’. Internet media must therefore be ‘consumed’ with a much greater degree of scepticism than traditional forms of media.

The CA files also highlight how weakly regulated the use of personal data for commercial purposes is when compared with the legal framework governing the authorities’ use of data capabilities.

If law enforcement or intelligence agencies in Britain wanted to collect and analyse personal data using such methods, they would most probably require a warrant to do so. They would need to be able to demonstrate that such an invasion of privacy was necessary in the interests of national security or for the prevention of serious crime, in accordance with the UK’s Regulation of Investigatory Powers Act 2000 or the Investigatory Powers Act 2016.

The private sector is not nearly as restricted in its use of invasive collection and analysis methods, nor is it subject to the same weight of public opinion. Yet, when new legislation is needed to keep pace with technological developments for the prevention of crime and terrorism, it is immediately greeted with public outcry and accusations of ‘extreme surveillance’.

Why, then, should the commercial sector – motivated entirely by the desire to maximise profit – not be held to the same level of scrutiny?

In the past two weeks, we have seen the words ‘breach’, ‘scraped’ and ‘harvested’ used to describe the activities undertaken by employees of CA. These are words usually associated with cybercrime activity. However, there is no evidence to suggest that CA’s methods were unlawful.

On 17 March, Facebook published a press release stating that ‘The claim that this is a data breach is completely false. Aleksandr Kogan [the Cambridge University academic indirectly involved with CA] requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent’. It added that ‘no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked’.

So, rather than circumventing platform policies, CA was able to exploit the widespread lack of awareness among social media users of how their personal data is used for commercial purposes. It relied on the fact that most users who installed the app would not take the time to dissect the terms of service before granting it full access to their personal data.

The recently published 2018 Digital Attitudes Report compiled by Doteveryone, a UK-based think tank, found that only a third of people were aware that data they had not actively chosen to share had been collected, while a quarter had no idea how internet companies made their money.

The revelations about CA should therefore serve as another wake-up call to all internet users. With greater education and awareness, consumers may think twice about uploading certain information, expressing certain views online, or signing up to platforms without reading the terms and conditions.

While future regulation may restrict the use of personal data for ‘psychographic’ profiling, we must accept that there are limits to how far social media platforms can enforce compliance, and it is difficult to monitor what developers and marketers do with data once it has been sold to them. This will always leave users vulnerable to those willing to operate on the fringes of legality. Ultimately, it is the responsibility of individual users to take steps to protect their personal data.

It is time for the consumers of these services to get wise to the threat and to approach online privacy in the same way they would offline privacy. 

The views expressed in this Commentary are the authors', and do not necessarily reflect those of RUSI or any other institution.


WRITTEN BY

James Sullivan

Director, Cyber Research


Alexander Babuta
