As terrorist organisations continue to exploit social media to spread extremist content, social media giants face an uphill battle balancing free expression against propaganda spread by extremist ideologues.
Terror groups globally are exploiting Facebook, Twitter, Instagram, Google, and other social media platforms to spread ideology and potentially attract new recruits.
The late Osama Bin Laden, one of the founders of Al Qaeda, predicted the extent of the role social media could play in a letter written in 2010, which was later discovered by US Special Operations forces in his hideout in Pakistan. He wrote, “The wide-scale spread of jihadist ideology, especially on the Internet, and the tremendous number of young people who frequent the Jihadist Web sites [are] a major achievement for jihad”.
According to security experts, the terrorist organisation Daesh uses social media platforms like no other militant group, successfully enlisting many recruits and raising funds through hashtag campaigns and high-production-value propaganda videos targeted at disenfranchised youths. Daesh uses big data and analytics to identify people with particular interests, then unleashes a stream of targeted multimedia content at them. It has been reported that Daesh runs its own video production, editing and distribution agencies, and that it published more than 90,000 posts on social media platforms such as Facebook, Twitter, Google, YouTube and others in 2015 alone.
As per the Counter Extremism Project (CEP), a global network of counter-extremism and security experts that includes former world leaders, the leader behind the London Bridge attack in June 2017, Khuram Shazad Butt, was reportedly radicalised by online lectures from the self-appointed cleric Ahmad Musa Jibril.
According to CEP, a search for “Ahmad Musa Jibril” yielded 14,700 hits, and his 130 YouTube videos have amassed more than 1.5 million views. CEP added that Anwar al-Awlaki, an American-Yemeni senior recruiter for Al Qaeda, reportedly inspired Dzhokhar Tsarnaev of the Boston Marathon bombings in 2013, Syed Rizwan Farook of the San Bernardino massacre in 2015, and the Orlando shooter Omar Mateen in 2016.
According to the United Nations’ Counter-Terrorism Committee, Daesh has recruited at least 30,000 foreign fighters from around 100 countries, including Germany, Belgium, France, the UK, Tunisia and Morocco. Much of this success can be attributed to its propaganda on Twitter. IntelCenter, a private counter-terrorism intelligence service operating since 1989, highlighted that 31 groups have already pledged allegiance to Daesh.
With the rise of domestic terrorism in Europe and the US, governments are seeking cooperation from tech giants to help break encryption, allow backdoor access to their services, and engage freely with law enforcement agencies in the event of a terrorist attack.
Social media firms also face pressure from advertisers whose products have appeared alongside extremist content, resulting in falling ad revenue.
The UK and European governments have taken a tough stand on social media firms’ handling of terrorist content on their sites. Germany’s Parliament recently approved legislation to fine social media platforms up to €50 million if they fail to remove fake news or hate speech from their platforms.
In the wake of the terrorist attacks in London earlier this year, UK Prime Minister Theresa May proposed an industry-wide levy on tech firms to raise funds for policing the internet. The UK Government also called a meeting with Google, Facebook, and Twitter to discuss their counter-terrorism strategy, while Home Secretary Amber Rudd asked tech firms to adopt a more proactive approach to stopping terrorist activity.
UK intelligence agency GCHQ has also claimed that social media firms have become the “command-and-control networks of choice for terrorists and criminals”.
The Australian Government is also considering legislation to compel social media giants to provide access to suspected terrorists’ encrypted messages.
After the 2015 terror attacks in France, the French government did not mince words in admonishing US-based social media firms for failing to monitor their platforms more closely and to curb the proliferation of extremist content.
YouTube, Facebook, Twitter and Microsoft all claim zero-tolerance policies on terrorism, and have invariably invoked existing internet laws in the countries concerned to resist any suggestion of large-scale censoring of content on their sites.
In the past, social media firms have resorted to large-scale suspension of accounts found to be violating their user policies on the promotion of terrorism. In 2016, the big four even voluntarily pledged in Europe to work towards removing terror content within 24 hours.
In June 2017, Facebook, Microsoft, Twitter and YouTube took a step towards closer collaboration in tackling extremist content on their platforms, announcing the formation of the Global Internet Forum to Counter Terrorism. In December 2016, the tech giants had entered into a partnership to identify videos and photos deemed extremist through digital hashes, with the aim of flagging such content and swiftly removing it. The Global Internet Forum to Counter Terrorism is expected to formalise and structure this cooperation not only among the big four, but also with smaller tech companies, government bodies, non-profit international bodies, civil society groups and academics, as well as institutions such as the United Nations and the European Union.
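The hash-sharing partnership described above can be sketched in a few lines. This is a simplified illustration, not the platforms' actual implementation: production systems use perceptual hashing (such as Microsoft's PhotoDNA), which tolerates re-encoding and cropping, whereas the cryptographic hash used here matches only byte-identical files. All names in the sketch (`shared_hash_db`, `fingerprint`, `flag_known_content`) are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of known extremist media,
# analogous to the industry hash-sharing database described above.
shared_hash_db = set()

def fingerprint(media_bytes: bytes) -> str:
    """Compute a digital fingerprint ("hash") of an uploaded file.

    Real systems use perceptual hashes that survive re-encoding;
    SHA-256 is used here purely for illustration.
    """
    return hashlib.sha256(media_bytes).hexdigest()

def flag_known_content(media_bytes: bytes) -> bool:
    """Return True if an upload matches previously flagged content."""
    return fingerprint(media_bytes) in shared_hash_db

# One platform flags a piece of media and contributes its hash,
# so every participating platform can detect a re-upload instantly.
flagged_video = b"...raw bytes of a flagged video..."
shared_hash_db.add(fingerprint(flagged_video))

print(flag_known_content(flagged_video))        # True: re-upload detected
print(flag_known_content(b"unrelated upload"))  # False: passes through
```

The key design point is that only hashes, not the media itself, need to be exchanged between companies, which is what makes cross-platform cooperation practical.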
Facebook and YouTube also recently updated their user policies. Facebook announced that it would employ new technologies, including artificial intelligence, to monitor content on its platform.
However, online terrorism watchdogs such as the Counter Extremism Project argue that Facebook appears unwilling to share complete data with public or private authorities. They argue that if Facebook can mine data to customise the user experience and sell ads, it should equally be able to extract specific information related to extremist content and share it with policymakers and investigative agencies.
The CEP has also questioned Facebook’s content-review efforts, saying that the firm’s moderation teams are inadequately staffed to sift through the daily posts of nearly two billion users, which appear in 80 languages.
Like Facebook, Google outlined renewed efforts to monitor content in June 2017. Google pledged to increase the use of technology to remove content that violates its policies. The company also announced that it would expand YouTube’s Trusted Flagger Programme, introduce warning labels on inflammatory content, and widen its role in counter-radicalisation efforts by using the re-targeting technology employed in online advertising. However, the tech giant’s anti-extremism policies still stop short of completely removing extremist propaganda, which has arguably radicalised thousands of people. For example, the late Al Qaeda recruiter Anwar al-Awlaki continues to inspire viewers even after his death: YouTube searches for him rose from around 60,000 in December 2015 to more than 80,000 in June 2017.
Google argues that its warning labels will help counter radicalisation by redirecting users to content that offers counter-narratives.
Counter Extremism Project has identified three major categories where YouTube has failed to meet its own goals for the swift removal of known terrorist content: bomb-making videos, violent videos, and extremist influencers. The CEP argues that the ease of access to this content poses a continual risk to public safety.