How Can Social Media Companies Stop the Spread of Fake News?
Facebook’s removal of content created by the Russia-based Internet Research Agency shows that it is ahead of its competitors in the fight against disinformation. However, this is a reactive step, and the damage has already been done.
Facebook published a blog post earlier this month announcing that it had removed dozens of accounts and more than 100 pages controlled by the Russian propaganda organisation and ‘troll farm’, the Internet Research Agency (IRA).
The post describes how ‘the IRA has repeatedly used complex networks of inauthentic accounts to deceive and manipulate people who use Facebook, including before, during and after the 2016 US presidential elections’. These pages and accounts, it added, were removed ‘solely because they were controlled by the IRA – not based on the content’.
This represents an apparent change in Facebook’s policy, as it amounts to proscribing and ‘blacklisting’ an entire organisation, rather than simply removing content that is deemed to breach the platform’s Terms of Service.
Facebook’s actions therefore raise two important questions: why was this not done sooner, and does it mark a step change in the way that social media companies deal with disinformation?
The IRA’s social media disinformation campaigns have been well documented since a 2015 New York Times Magazine investigation revealed how the St Petersburg-based organisation had ‘industrialized the art of trolling’. It used fake accounts to craft elaborate hoaxes and spread rumours of fabricated terror attacks and police shootings, aiming to spread fear, amplify public anxieties and sow dissent among US citizens.
On 16 February, US Special Counsel Robert Mueller handed down a federal indictment, alleging that the IRA had first targeted the US as early as 2014, with the objective of ‘…interfering with the US political and electoral processes, including the presidential election of 2016’.
According to the indictment, the IRA’s fake pages and profiles – designed to look like they belonged to real American citizens – became ‘leaders of public opinion’. The scale of the IRA’s activity is concerning; last October, Facebook told Congress that the IRA had posted around 80,000 pieces of content that reached around 29 million users between January 2015 and August 2017.
Appearing before the US Senate’s Commerce and Judiciary committees on 10 April, Facebook founder and chief executive Mark Zuckerberg said that ‘One of my greatest regrets in running the company is that we were slow in identifying the Russian information operations in 2016’. He added: ‘… there are people in Russia whose job it is to try to exploit our systems […] so this is an arms race, right?’
While Facebook’s actions against the IRA represent a positive step towards cleansing social media platforms of (allegedly) state-sponsored disinformation, this is ultimately a reactive response. Once fake news has been published online, content spreads across platforms swiftly, and by the time it is eventually removed, the seeds of doubt may have already been sown among users.
So, beyond retroactively deleting fake news pages and accounts, what more can be done to prevent the spread of disinformation? In November 2016, Zuckerberg posted a status update outlining measures the company was taking to improve its handling of fake news.
These included making it easier for users to report stories as fake, attaching warning labels to stories flagged as fake, using third-party fact-checking services to verify the legitimacy of content, and improving automated systems that detect disinformation before it is flagged by users.
The first two measures rely on individual users reporting fake content. Aside from being retroactive rather than preventative, this will only be effective if users consistently flag stories, which may be an unrealistic expectation. It can also easily be abused by malicious actors who ‘false flag’ legitimate news stories.
Third-party fact-checking is another preventive option. However, a study funded by the European Research Council, the EU’s scientific research funding body, suggests that ‘fact-checks of fake news almost never reached [Facebook’s] consumers’ in the lead-up to the US presidential election.
In addition, research into fact-checking on social media has found that ‘… corrective messages often fail to reach the target audience vulnerable to misinformation and fall short of affecting the overall dynamics of rumor spreading’. Without a more wide-reaching and effective delivery mechanism, therefore, such fact-checking is likely to have little impact.
It seems, then, that automated content removal is the only viable preventive action social media companies could take to ensure that disinformation does not appear on their platforms in the first place.
However, tech companies’ experience of removing extremist content demonstrates the difficulty of developing systems that automate content takedown: such technology inevitably results in large amounts of legitimate content also being blocked.
It has already proven challenging to develop an artificial intelligence-based system that can effectively filter out terrorist content online; it is likely to be even more difficult for such automated content removal systems to distinguish between legitimate news material and disinformation.
Facebook’s current approach – proscribing and blacklisting organisations such as the IRA and promising to remove associated accounts – is therefore the most reasonable action to take. However, the risk now is that hostile organisations will adopt more sophisticated measures to conceal the origin of their material.
More broadly, there is at present a lack of research into the dissemination and consumption of disinformation. To address this knowledge gap, it will be necessary to continually identify the most prominent disinformation campaigns and dissemination mechanisms, and to assess the overall impact such material has on consumer beliefs and opinions.
Although threat and impact assessments of this type are usually the responsibility of government and law enforcement, in this context the tech companies themselves are perhaps better placed to conduct this analysis.
Social media companies must better articulate the threat posed by disinformation and provide greater clarity on the methods available to mitigate it.
The views expressed in this Commentary are the authors', and do not necessarily reflect those of RUSI or any other institution.
WRITTEN BY
Alexander Babuta
James Sullivan
Director, Cyber Research