
Tech Companies And Extremist Content

Date : 04/01/2019

Author Information

Lucien

Subject : Journalism

"He accessed the material and was using it to self-radicalise. Online played a major role in what happened." These were the words of Commander Dean Haydon, head of counterterrorism at the Metropolitan Police, when explaining what drove Darren Osborne to ram a van into a crowd of people outside a mosque in Finsbury Park last June.

Darren Osborne's conviction, and the insights into his radicalisation, are timely. It was only last week, during the World Economic Forum, that Theresa May called on tech companies to intensify and improve their efforts to counter extremist material online.

Although May's call to tech companies reaffirmed what multiple governments have already repeated (that tech companies have an inalienable responsibility to counter extremist material online), it made no allusion to the forms of extremist content that need countering.

The use of the virtual world by extremists, more precisely violent Islamist extremists such as the so-called Islamic State (IS), has been extensively covered in academia and the media, and has no doubt received significant attention at the state level. Searching "Islamist extremist use of the internet" on Google yields a plethora of academic and journalistic articles, books and governmental briefs covering the issue. Undeniably, the connection between Islamist extremism and the virtual world has gained prominence in the counterterrorism vernacular.

On the other hand, the growing presence of far right extremist groups online, and their use of platforms to disseminate propaganda and hate speech, has so far received less attention. Without claiming that it has been altogether absent from violent extremist literature, media attention and policy making (see the UK Home Affairs Select Committee's recent inquiry), it is fair to say that far right extremism online has thus far been a lower priority within the public discourse.

Considering the growing presence of far right extremists online, and the radicalisation of individuals through far right extremist content, two questions need addressing: how are tech companies trying to deal with extremist materials online, and are different forms of extremism treated equally?

In January 2016, a group of top intelligence officials travelled to Silicon Valley to discuss the role of tech companies in countering extremist material circulated by IS and its acolytes. As a senior official in the Obama administration stated at the time, "countering the vile ideology of ISIL and similar groups in the digital sphere is a priority for both government and private sector". This meeting appeared to set the tone for the future: tech companies' top priority when countering online extremist content would be IS and similar groups.

Although the meeting did not mark a watershed moment, it did spur tech companies to mobilise greater resources towards the issue. In February 2016, Twitter announced that it had suspended 125,000 accounts associated with the so-called Islamic State. Although this wasn't the first time Twitter had suspended accounts, it was the first action of this scale disclosed to the media. Silicon Valley had joined governments in their fight against IS.

Momentum encouraged other big tech firms to rise to the challenge. In April 2016, Facebook hired Brian Fishman, an expert in the online strategies used by the so-called Islamic State and al-Qaeda, to head its counterterrorism department. In June 2016, Google announced new software it had been developing, combining Google's search advertising algorithms and YouTube's platform, to weed out potential recruits for the so-called Islamic State.

Ending the year with a bang, Google, Facebook, Twitter and Microsoft joined forces to create a shared database of unique digital fingerprints, otherwise known as "hashes", for content promoting terrorism. While there is no clear indication that this collaboration focuses exclusively on violent Islamist extremism, the comments of Hany Farid (the computer scientist responsible for the software) offer a good idea of who its main targets are: "what we want is to eliminate this global megaphone that social media gives to groups like ISIS".

Research from the Institute for Strategic Dialogue (ISD) and from J.M. Berger, senior researcher at George Washington University's Program on Extremism, represents some of the few pieces of work documenting the growth of far-right extremism online. Berger's study found that white supremacist numbers on Twitter have grown by 600% over the last four years, beating Islamist extremists in tweets and followers. Speaking on BBC Radio 4's Today programme, Rebecca Skellett, Senior Programme Manager at the ISD, explained that there has been a huge proliferation of the far right's use of the online space, "and I would say from our research we can see a disproportionate availability of far right content much more than we can see on the Islamist extremist spectrum because of the speed at which that Islamist content is taken down".

The ISD's report "The Fringe Insurgency", amongst other things, explains that "strategic, tactical and operational convergence has allowed the extreme right to translate large-scale online mobilisation into real-world impact". The significance of this is notable: we are no longer talking about trolls posting discriminatory content online, but rather groups utilising the virtual world to organise grassroots activities, influence elections and intimidate minorities.

The translation of online activities into real-life outcomes is especially key: it represents the convergence of the virtual and real worlds, where content turns to violence. Darren Osborne's attack is an obvious example; the attack in Charlottesville following the white supremacist rally on the 11th of August 2017 is another. White supremacists used Facebook to organise and recruit for the rally. Beyond the violence that occurred at the rally itself, the online organisation, directly or indirectly, led to the terrorist attack on the 12th of August, killing one person and injuring many more.

So, how are tech companies integrating the rise of the far right online into their approaches to countering extremism? Admittedly, there is very little information available. Having sought comment from Google, an employee told me that "Google rarely goes on record about its operations without going through official channels". Unfortunately, there isn't much on official channels either.

A YouTube blog post from August 2017 mentions that the company is expanding its team of experts to include the Anti-Defamation League, the No Hate Speech Movement and the ISD. One might assume that this has translated into an expanded approach, treating far right extremism with the same sense of urgency as Islamist extremism. Yet the fact that big tech companies have yet to come out publicly against far right extremists, as they did against Islamist extremists, may be an indication that the bulk of their efforts is still directed towards the latter.

This notion is reinforced by further comments made by Rebecca Skellett in her interview with the BBC. Underlining the slower response to the removal, and the perceived danger, of far right extremist content online, Skellett claims there is "a huge discrepancy between how Internet platforms view the need to respond to the far right space".

Having said that, the UK Home Affairs Select Committee's inquiry into hate crime and its violent consequences offers the most comprehensive review of how tech companies are failing to deal with far-right extremist material online, highlighting inadequacies in removing supremacist content. One finding, amongst a dozen similar ones, states: "The weakness and delays in Google's response to our reports of illegal Neo-Nazi propaganda on YouTube were dreadful." Although the report does not ignore the issue of Islamist extremism online, it serves as a critical reminder to tech companies that more must be done on other fronts.

Ultimately, it appears that tech companies have adopted counter-extremism models akin to those of states, focussing in the main on the Islamist extremist threat. However, considering the rise of far right extremism online, and the high levels of extreme violence far right groups commit, tech companies need to recalibrate their efforts to be in line with real threats and not perceived ones.
