In the late summer of 2014 a crushing offensive was launched against ISIS in Iraq and Syria. Early assaults swept away hundreds of unknown warriors in moments. Special ISIS teams, the mujtahidun ("industrious ones"), hurled thousands of faceless "volunteers" into the fray to replace them, to no avail.
Targeted strikes of a kind hitherto unseen in the Middle East degraded the capacities of ISIS in a way few predicted and many still work hard to understand. They have left the movement more isolated than it has been in months.
What’s that? You missed it? Possibly. The battle was fought on Twitter, in the company’s virtual counterstrike against key extremists who use its network to message their troops, maintain logistics, draw in people vulnerable to radicalisation and propagandise against the enemies of ISIS.
The scale of the fight and its results are studied in a new paper for the Brookings Institution. Unusually for this subject, it tries to put real data behind the popular interpretation of ISIS's much-vaunted social media skills.
Commissioned, curiously, by Google Ideas and conducted within Twitter’s rules on independent research, but without its support, The ISIS Twitter Census was released at the end of March by Brookings’ Center for Middle East Policy, authored by academic J.M. Berger and data scientist Jonathon Morgan.
They wandered into this particular battle last autumn just as it peaked, as "lucky" war correspondents often do. But like the better war reporters, they understood what was happening around them better than most.
Some 3,000 organised mujtahidun tweeters and their retweeting bots were pushing Twitter's anti-spam rules to their limits by mid-2014. Short but focused bursts let ISIS dominate certain hash tags, triggering third-party aggregation and appearances in search results, and project its messages beyond its own social network, to intimidate outsiders and tempt potential recruits.
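To make the mechanics concrete, the sketch below shows one way an analyst might detect that kind of coordinated burst: bucket tweets into short time windows and flag hash tags whose volume spikes while being driven by a small pool of accounts. It is a minimal illustration in Python; the window size, thresholds and tweet record layout are assumptions made for the example, not details drawn from the report or from Twitter's own systems.

```python
from collections import defaultdict
from datetime import datetime, timezone

def find_coordinated_bursts(tweets, window_minutes=10,
                            spike_threshold=500, max_accounts=100):
    """Bucket tweets into fixed time windows and flag hash tags whose
    volume spikes while being driven by a small pool of accounts.
    `tweets` is an iterable of (timestamp, account_id, [hashtags]);
    all thresholds are illustrative guesses, not values from the report."""
    window = window_minutes * 60
    buckets = defaultdict(list)  # (window_start_epoch, hashtag) -> account ids
    for ts, account, hashtags in tweets:          # ts is a datetime
        start = int(ts.timestamp()) // window * window
        for tag in hashtags:
            buckets[(start, tag)].append(account)

    flagged = []
    for (start, tag), accounts in buckets.items():
        # A large volume produced by relatively few accounts suggests
        # organised tweeters and bots rather than organic interest.
        if len(accounts) >= spike_threshold and len(set(accounts)) <= max_accounts:
            flagged.append((datetime.fromtimestamp(start, tz=timezone.utc),
                            tag, len(accounts), len(set(accounts))))
    return flagged
```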
The virtual counterattack coincided with real-world tactical shifts. On June 29, shortly after seizing Mosul, Iraq's second city, ISIS declared a caliphate to lead the world's 1.6 billion Muslims. By August, the ISIS assault on the Iraqi-Kurdish city of Erbil had finally driven Barack Obama to authorise airstrikes against the group.
Twitter began suspending targeted ISIS accounts on June 13, 2014, barring around 1,000, including ISIS's main information page. Accounts that tweeted most often and had the most followers were the most likely to be suspended. The ISIS response was predictable.
The report underlines the complexity of the process and the resources Twitter must have thrown at the problem. Most of it is spent detailing Berger & Morgan's extensive attempts, far less well resourced than Twitter's, to corral and sort the data, separating real ISIS supporters from jihadi wannabes.
This went well beyond spotting tweets that break The Twitter Rules, the company's terms and conditions, which would still not necessarily get an ISIS supporter's tweets banned anyway. It is this, aside from the sheer volume of material to read through, that has made past volunteer spotter efforts, like those of Anonymous, so fruitless.
Data war
Twitter had to fight a data war with data tools; Berger & Morgan had to report the war the same way.
Berger & Morgan worked on a sample of 20,000 selected ISIS supporters' Twitter accounts, typically based in Iraq or Syria, with one in five tweeting in English and 75 per cent in Arabic, active between September and December 2014.
They conservatively estimated that 46,000 Twitter accounts were used by core ISIS supporters, allowing for deceptive location claims and the use of bots, and tracked users on criteria such as the number of messages, the number of followers, hash tags used, the timing of messages and, in some cases, the geo-location of the senders.
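As a rough illustration of that kind of account-level filtering, a sketch under assumed field names and thresholds rather than the authors' actual method or code, an analyst might combine several of the listed criteria into a simple support score:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Account:
    """Hypothetical account record; the fields mirror the kinds of
    criteria the report lists, but the names are invented here."""
    handle: str
    tweet_count: int
    follower_count: int
    hashtags: List[str] = field(default_factory=list)
    self_reported_location: str = ""
    geo_country: Optional[str] = None   # from geo-tagged tweets, if any

SUPPORT_HASHTAGS = {"#placeholder_tag"}  # stand-in for known supporter hash tags
CONFLICT_ZONES = {"IQ", "SY"}            # Iraq and Syria country codes

def support_score(acct: Account) -> int:
    """Crude additive score over volume, audience, hash tag use and location.
    Thresholds are illustrative guesses, not figures from the census."""
    score = 0
    if acct.tweet_count > 1000:
        score += 1
    if acct.follower_count > 500:
        score += 1
    if SUPPORT_HASHTAGS & set(acct.hashtags):
        score += 2
    if acct.geo_country in CONFLICT_ZONES:   # geo-tagged, harder to fake
        score += 2
    elif acct.self_reported_location:        # self-reported, easily faked
        score += 1
    return score

def likely_supporters(accounts: List[Account], threshold: int = 3) -> List[Account]:
    return [a for a in accounts if support_score(a) >= threshold]
```

The census's actual methodology was considerably more involved than this; the point is only that criteria of this kind lend themselves to automated scoring at the scale the authors faced.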
While Twitter’s selective suspensions did have “concrete effects in limiting the reach and scope of ISIS activities on social media” it did not eliminate them. “Total interdiction” was clearly not Twitter’s goal, though the company kept its actual tactical objectives secret.
Nevertheless, Berger & Morgan seem to regard the offensive as a Twitter victory. The primary ISIS hash tag (its name in Arabic) went from 40,000 or more tweets per day in September 2014 to under 5,000 a day in February 2015.
Twitter suspended ISIS supporters (and their bots) as quickly as they reappeared under new names. The test of success became ISIS's gradual disappearance from Twitter's feeds. "Perhaps most important is what we didn't see," the paper noted. "We did not see images of beheaded hostages flooding unrelated hash tags or turning up in unrelated search results. We also did not see ISIS hash tags trend or aggregate widely."
The report also noted: “The data we collected also suggests that the current rate of suspensions has also limited the ISIS network’s ability to grow and spread, a consideration almost universally ignored by critics of suspension tactics. The consequences of neglecting to weed a garden are obvious, even though weeds will always return.”
Interestingly, they thought that Twitter could do still more communications damage to ISIS, but advised against this for several reasons. The targeted suspensions so far have created obstacles to supporters joining ISIS’s social network, but have also isolated ISIS supporters online.
Would-be Twitter jihadis
Preaching to a smaller, already converted crowd could increase the speed and intensity of radicalisation for those who remain inside the network. It also cuts off access to Muslim counter-narratives and de-radicalisation "exit ramps" for wavering would-be jihadis.
On the other hand, the censorship could reduce ISIS's reach among potential "lone wolf" attackers, people only marginally engaged with its ideology, including those already prone to violence or mentally ill. Research suggests an association between mental illness and lone-actor terrorism.
Berger & Morgan urged further study to evaluate “the unintended consequences” of suspension strategies. “Fundamentally, tampering with social networks is a form of social engineering, and acknowledging this fact raises many new, difficult questions,” the two concluded.
They reminded all that US companies, and the US government, share constitutionally bound obligations to resist restrictions on free expression, recognising (if not wholeheartedly endorsing) the view that it was unethical to suppress political speech, “even when such speech is repugnant”.
Berger, some say, fails to question the popular assumption that there is a direct correlation between ISIS propaganda and jihadi recruitment, especially when that assumption is used to justify censorship of ISIS online. Certainly the closure of the ISIS "information ministry" @Nnewsi account ended a flow of current, credited material for journalists and analysts.
It also cut off access to so-called “open-source intelligence” for the counter-terrorism community. Free expression is not an obvious winner here. Account suspensions, Berger & Morgan warn, may also disproportionately impact certain genders, races, nationalities, sexual orientations, or religions.
Neutering ISIS’s social media army
Nevertheless, for Berger & Morgan, all in all, it's a win: "Specifically, neutering ISIS's ability to use Twitter to broadcast its message outside of its core audience has numerous potential benefits in reducing the organisation's ability to manipulate public opinion and attract new recruits."
It does shed light on the formidable power of what might be called, only slightly ironically, Twitter's "Weapons Division". After all, by Berger & Morgan's measure, Twitter took down ISIS in a fair fight on common ground. ISIS has plenty of other ways to communicate, everything from teenagers' Snapchat to hackers' Pastebin, but none has the vital mass reach and immediacy of Twitter.
What might be its next target? To speculate: the tools could be used to target pro- (or anti-) gun control tweets after a US school shooting, or opinions judged to be "hate speech" in a region-focused sweep like Berger & Morgan's ISIS sample, which was so specific in that regard that it counted just one primary ISIS tweeter in the UK during the survey.
Targeting terrorist accounts
Who can say? Twitter doesn’t comment. As the Electronic Frontier Foundation’s expert Jillian C. York and others report, Twitter is generally transparent about its conventional content removals, reporting legal takedown requests to the Chilling Effects archive. But it keeps silent about targeting terrorist accounts.
Berger & Morgan found Twitter “discloses literally no information about the accounts it suspends, yet this activity takes place every day.” In fact, they add, the legal vacuum that surrounds these issues concedes near-absolute authority to Twitter, as it does to all the online giants.
“This point needs to be crystal clear: social media companies can and do control speech on their platforms. No user of a mainstream social media service enjoys an environment of complete freedom.”
What are you doing when you have developed a “weapon” that provides total control – at speed – over the communication capacities of a targeted group?
As Berger & Morgan warn, companies would be well advised to consider proactive measures and clarify their rules of engagement in “an area where government oversight may eventually come into play”.