Online Radicalisation: The Need for an Offline Response


Floral tributes at Parliament Square following Khalid Masood's 22 March 2017 terrorist attack. It is becoming increasingly difficult to 'stop terrorist financing' when it comes to lone-actor or small-cell operations. Courtesy of Wikimedia.


Scapegoating tech companies for online radicalisation is not only misguided – it diverts attention from the crucial responsibility that society must bear in fighting the spread of violent extremism where it matters most: in the real world.

Prime Minister Theresa May's speech last week to the UN General Assembly reiterated a demand that the world’s tech giants go ‘further and faster’ to remove extremist content from the internet, and that they develop new technology to prevent such material from appearing on the web.

These proposals were echoed by Italian Prime Minister Paolo Gentiloni and French President Emmanuel Macron, in what has been described as 'a true tripartite effort'.

Few would question the importance of ensuring that extremist material is not widely accessible in the public domain. The fewer people who are indoctrinated by such propaganda, the fewer people there will be who are likely to pose a threat to the UK’s national security.

But how would social media companies go about designing a tool that could automatically detect and block ‘extremist content’? What constitutes ‘evil material’? And who determines what these are?

As Brian Lord, former deputy director for Intelligence and Cyber Operations at GCHQ, pointed out in response to May’s comments, content that is seen to be ‘free speech’ in one country might be seen as incitement to violence in another.

No machine-learning based system, however sophisticated, could be taught to recognise a concept as nuanced and subjective as ‘extremism’. Imposing such a filter would inevitably result in vast quantities of legitimate content being blocked.

This has already been demonstrated: Facebook has on multiple occasions been forced to apologise after erroneously blocking legitimate content, including a video documenting the arrest of 22 environmental protesters, one of the most iconic images of the Vietnam War, and the account of a Black Lives Matter activist who had posted a screenshot of a racist message.

And such stringent means could also prove counterproductive from a security perspective. Lord warned that the wholesale, automated removal of large volumes of potentially harmful material ‘counters the availability of information’, likening such measures to ‘[using] a sledgehammer to crack a nut’.

The UK’s law enforcement and Security and Intelligence Agencies (SIAs) are able to detect and prevent the vast majority of planned terrorist attacks in the UK by maximising their intelligence coverage.

The detection rate of terrorist planning is directly proportional to this coverage; the more comprehensive the SIAs’ knowledge of individuals who pose a potential threat, the more plots they will be able to detect.

As it becomes increasingly difficult for individuals of national security interest to operate on mainstream social media, they migrate to more secure platforms – encrypted messaging services and Darknet media sharing platforms.

A recent study from the Policy Exchange think tank found that the quantity of extremist material being disseminated online by Daesh (also known as the Islamic State of Iraq and Syria, ISIS) has remained consistent over the past three years.

However, since 2015, the end-to-end encrypted messaging service Telegram has replaced Twitter as the primary means by which core content is transmitted to its ‘vanguard’ of existing supporters.

It seems therefore that governments and tech companies have not been able to effectively reduce the amount of online extremist content – only to displace it to less visible platforms.

Assistant Commissioner Mark Rowley, the Metropolitan Police’s head of counter-terrorism, speaks of the ‘internet going darker’, making it more difficult to investigate persons of interest.

The result is that the authorities are forced to develop increasingly aggressive means of intercepting communications and undermining encryption – measures that can equally be exploited by cyber criminals and that put all internet users at risk.

A careful balance must therefore be struck between preventing the dissemination of illegal and dangerous material and retaining the ability to identify and monitor those responsible for producing and sharing it.

May’s comments also do a disservice to the positive work that global tech companies do to prevent the spread of extremist content. Facebook, Twitter and YouTube expressly prohibit any content that supports or incites terrorist activity, and co-operate closely with authorities in sharing relevant information, according to Max Hill, the government’s independent reviewer of terrorism legislation.

Encrypted messaging platforms are far less compliant when it comes to co-operating with governments and law enforcement, yet politicians continue to blame the companies doing the most to combat the problem. 

But most importantly, continuing to focus on the role that tech companies play in preventing violent extremism diverts attention from the responsibility that the rest of society must bear in identifying and reporting those who pose a potential risk.

Research conducted at the International Centre for the Study of Radicalisation at King’s College London suggests that ‘radicalisation rarely happens exclusively online’. Analysis of the now-banned UK extremist group Al-Muhajiroun has found that ‘most of their radicalization has occurred offline’, in small groups, where like-minded individuals meet and indoctrinate each other.

Therefore, while the internet can play an important role in accelerating the process of radicalisation, indications of violent tendencies are more likely to be picked up by those who are physically close to the individual in question.

RUSI research has found that around 45% of religiously inspired extremists who have successfully carried out acts of violence ‘leaked’ information regarding their intentions to a friend or family member before the attack. In addition, a study has shown that ‘those best positioned to notice early signs of individuals considering acts of violent extremism might be those individuals’ friends’.

For the authorities, the threat assessment process would be highly inefficient if the goal were to identify the many thousands of individuals who hold radical views; rather, the focus is on identifying the very small proportion of these who are believed to be actively planning acts of violence.

And the activities that would-be attackers must carry out to plan and prepare for acts of violence do not take place on the internet – they take place offline. Equipment is bought, vehicles are hired, potential target locations are visited and devices are assembled. Even threats to commit an attack do not become actual threats until they are coupled with the means and preparations necessary to carry it out.

Until the offline, societal roots of radicalisation are adequately understood and effectively addressed, the threat of violent extremism will continue to corrupt communities and society at large – whether online, offline or somewhere in between.

The views expressed in this Commentary are the author’s, and do not reflect those of RUSI or any other institution.


WRITTEN BY

Alexander Babuta
