It was September 2014, and machine gun and artillery fire had erupted in small villages just west of Kobani, Syria. The Islamic State had laid siege to the border town, strategically entering through nearby villages. That same week, researchers at the University of Miami who were tracking online extremist groups saw a flurry of activity.
“It was like the spreading of a fever,” said Neil Johnson, the study’s lead researcher. Johnson and his team saw an uptick in the creation of pro-Islamic State groups on the Russian social media platform VKontakte.
Measuring the rate of creation of these online groups is essential to a new study’s claims that a mathematical model can help predict terrorist attacks. The research attempts to scientifically determine the relationships between online extremist groups and real-world attacks.
The study, published last week in the journal Science, identifies hardcore pro-Islamic State groups on social media by searching for keywords, such as mentions of beheadings, and zeroing in on specific community pages and groups. These groups trade operational information, such as which drone is being used in an attack or how to avoid detection, as well as fundraising posts and extremist ideology.
An uptick in the creation of these groups correlated with terrorist attacks, the study found. When the team applied their model to the data, it correctly predicted the 2014 assault on Kobani they had observed.
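The signal the researchers track is the rate at which new groups appear. As a rough illustration only, and not the authors’ actual model, the sketch below flags days when the group-creation rate jumps well above its recent average; the dates, window size and threshold are hypothetical assumptions.

```python
# Hypothetical sketch: flag sudden upticks in the daily creation rate of
# online extremist groups, the kind of signal the study correlates with
# attacks. The data, window and threshold are illustrative, not the
# published model.
from collections import Counter
from datetime import date, timedelta

def flag_upticks(creation_dates, window=14, threshold=3.0):
    """Return days whose group-creation count exceeds `threshold` times
    the average over the preceding `window` days."""
    counts = Counter(creation_dates)
    start, end = min(counts), max(counts)
    flagged = []
    day = start + timedelta(days=window)
    while day <= end:
        baseline = sum(counts[day - timedelta(days=k)]
                       for k in range(1, window + 1)) / window
        if baseline > 0 and counts[day] > threshold * baseline:
            flagged.append(day)
        day += timedelta(days=1)
    return flagged

# Made-up example: a steady trickle of new groups, then a burst in mid-September 2014.
dates = [date(2014, 9, 1) + timedelta(days=d) for d in range(10)] + \
        [date(2014, 9, 15)] * 8
print(flag_upticks(dates))  # flags 2014-09-15
```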
Such groups are often shut down by social media platforms such as Facebook or VKontakte once their extremist content is identified by website moderators. Members of these groups then disperse and multiply, Johnson said, often forming other, smaller groups. It’s this pattern that makes the identification of these aggregates, and the individuals who interact with them, so useful.
“The data suggests that there’s no such thing as a ‘lone wolf’ in that sense,” Johnson said. “If an individual looks alone, the chance is that they will at some stage . . . have been in an aggregate. And if you look long enough, they will be in another aggregate soon.”
Does this mean that this algorithm can not only show when an attack is likely to occur but also point to individuals who might be ripe for radicalization? Johnson is hesitant to say that the model is truly predictive right now. He’s most enthusiastic about nailing down a pattern of behavior that could help clarify how radical groups, terrorist attacks and online activity interact.
“It always seems to me to be that we know that the Internet has an influence on this behavior, but how? I can see things online, but I’m not gonna go and do something,” he said. “So it puts a science, a starting point, of what’s going on.”
The study is not without its skeptics. “This is a potentially valuable approach, and more research should be done on the approach,” J.M. Berger, a fellow in George Washington University’s Program on Extremism, told the New York Times. “But to jump ahead to the utility of it, I think, takes more work.”
Andrew Gelman, a professor of statistics and political science at Columbia University, also questioned in a blog post how widely the model can be applied.
Research suggesting that social media monitoring be used in counterterrorism also raises concerns about privacy and the potential for abuse.
“It’s hard to pick apart the privacy component when you’re talking about something as loaded as radical extremism,” said Megan Price, executive director of Human Rights Data Analysis Group. “But I think we have to extrapolate out from that what are the other ways that this might be implemented? It’s not necessarily clear what the definition of those groups is.”
Price cited undercover monitoring of radical activist groups in the 1960s as an example of domestic surveillance designed to quash political freedom. “Historically, the government has never needed much encouragement to expand their surveillance activities,” she said.
In terms of digital privacy, Johnson sees his model as less invasive than other surveillance tactics. “Our approach pushes the focus from individuals to groups that fed the fire within the individual. So that is arguably good since instead of potentially having to track one percent of Facebook or VKontakte users being on some watch list because of what they have Googled,” he said. “The focus is on the few hundred aggregates that lie out there.”
But Price isn’t sure that this study is a step in the right direction. “It’s just splitting hairs,” she said. “What they are flagging is the growth of these aggregate groups, but the way they are defining growth is by tracking individuals. I don’t know if operationally it makes a difference in terms of privacy.”
Special to The Washington Post · Karen Turner