Anti-Terror Algorithms Raise Ethics, Effectiveness Questions


With a renewed focus on countering ISIS following the Orlando massacre, new tactics are being suggested for defeating the terror network and those it inspires. Among the suggestions are technologies that purport to predict terrorist attacks and to automatically censor social media content, though neither method is fully developed.

The idea of predicting terrorist attacks using an algorithm was put forth by a team led by University of Miami physicist Neil Johnson, which published a study in the journal Science on Thursday. The abstract of the study claims that “we uncovered an ecology evolving on a daily time scale that drives online support, and we provide a mathematical theory that describes it.”

Johnson and colleagues say that the key to predicting attacks, or at least some kinds of attacks, is to closely monitor social connections between individuals and the formation of small groups, rather than focusing on specific Twitter hashtags or extremist social media posts of specific individuals. The study has received mixed reviews in the media.
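To make the group-monitoring idea concrete, the sketch below fits a simple escalation relationship to the timestamps at which new pro-ISIS “aggregates” are first detected: if new groups are appearing at an accelerating pace, the estimated exponent grows, which the study’s framing would read as conditions ripening for a real-world event. The progress-curve form, the function names, and the sample numbers are illustrative assumptions, not the authors’ published model or code.

```python
# Illustrative sketch only: fit an escalation ("progress-curve") relationship,
# interval_n ~ interval_1 * n**(-b), to the gaps between successive detections
# of new pro-ISIS aggregates. A larger positive b means groups are forming at
# an accelerating rate. The functional form and the sample data are assumptions
# for demonstration, not the study's actual model or code.
import numpy as np

def escalation_exponent(creation_times):
    """Estimate the escalation exponent b from sorted aggregate-creation times."""
    times = np.asarray(creation_times, dtype=float)
    intervals = np.diff(times)            # gaps between successive new groups
    n = np.arange(1, len(intervals) + 1)  # event index 1, 2, 3, ...
    # Linear fit in log-log space: log(interval) = log(interval_1) - b * log(n)
    slope, _ = np.polyfit(np.log(n), np.log(intervals), 1)
    return -slope

if __name__ == "__main__":
    # Hypothetical daily timestamps of newly detected aggregates; the gaps
    # between them shrink over time, so b comes out clearly positive.
    sample_times = [0, 10, 18, 24, 28, 31, 33, 34.5, 35.5, 36.2]
    print(f"estimated escalation exponent b = {escalation_exponent(sample_times):.2f}")
```

Nothing in such a fit distinguishes a nascent attack campaign from a wave of protest organizing, a point the article returns to below.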

“Experts who study terrorism and online communication said that the new research was informative, and that they appreciated that the authors would make their data available to other researchers,” the New York Times reported. “But they cautioned that the actions of terrorist groups are extremely difficult to anticipate and said more information was needed, especially to substantiate any predictive potential of the team’s equation.”

The Times also notes that the only attack within the researchers’ time frame that their model would have predicted was the September 2014 attack by ISIS on the Syrian town of Kobani, which more closely resembled a military operation than a terrorist attack.

“Unfortunately, because social media is a big factor, the algorithm is incapable of weeding out lone attacks like that in Orlando and San Bernardino,” iTech Post reports.

The New York Times briefly mentions that for purposes of comparison with ISIS, “the researchers tracked groups promoting civil unrest in Latin America,” although it neglects to point out that the research actually started by looking at trends of civil protest in Brazil and Venezuela.

Indeed, while the study as published purports to focus on the “online ecology” of “ISIS and beyond,” many of its findings seem consistent with a goal of finding ways to monitor civil protest rather than terrorism. The research, for example, “suggests that the online proliferation of pro-ISIS or protest aggregates can indeed act as an indicator of conditions becoming right for the onset of a real-world attack campaign or mass protests, respectively.”

Another new algorithm, meanwhile, aimed not at predicting attacks from social media activity but at censoring social media content, was also unveiled this week.

“In short, an algorithm created by Hany Farid of Dartmouth analyzes an image, video, or audio file and creates what is known as a unique ‘hash’ for that file,” writes Elias Groll of Foreign Policy. “By creating a database of known extremist content, social media and tech companies can run content through Farid’s algorithm, and if that content matches a hash identified as ‘extremist,’ the company could automatically flag it or remove it altogether.”
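A minimal sketch of that hash-and-match workflow might look like the following, where a toy average hash stands in for Farid’s robust hashing (which has not been published in this form) and the match threshold is an arbitrary assumption; the Pillow imaging library handles the image I/O.

```python
# Minimal sketch of the workflow Groll describes: fingerprint an uploaded
# image and compare it against a database of hashes already labeled as
# extremist. The average hash below is a simple stand-in, NOT Farid's
# algorithm, and max_distance is an assumed threshold.
from PIL import Image

def average_hash(path, size=8):
    """Downscale to a size x size grayscale thumbnail and hash each pixel
    against the mean brightness, yielding a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return int("".join("1" if p > mean else "0" for p in pixels), 2)

def hamming(a, b):
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def flag_if_known(upload_path, known_extremist_hashes, max_distance=5):
    """Return True if the upload is within max_distance bits of any hash
    the database has already flagged as extremist."""
    h = average_hash(upload_path)
    return any(hamming(h, known) <= max_distance for known in known_extremist_hashes)
```

The hard question the rest of the article turns on is not the matching step, which is routine, but who decides which hashes go into the “extremist” database in the first place.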

Groll also notes, though, that “if the project is ever to get off the ground it will have to overcome serious concern that using algorithms to police speech doesn’t end up as Orwellian as it sounds.”

Farid, the researcher behind the censorship algorithm, is working with an organization called the Counter Extremism Project, whose CEO Mark Wallace is described in Groll’s article as a “longtime GOP operative” and a “player in Washington.” Farid’s research has also received funding and support from Microsoft, which recently amended its terms of use “which already prohibit hate speech and advocacy of violence against others – to specifically prohibit the posting of terrorist content on our hosted consumer services.”

Microsoft acknowledges that it is entering a gray area in imposing restrictions on so-called terrorist content. “There is no universally accepted definition of terrorist content,” the company’s corporate blog post notes. “For purposes of our services, we will consider terrorist content to be material posted by or in support of organizations included on the Consolidated United Nations Security Council Sanctions List that depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups.” The list in question includes nearly 400 groups.

While Microsoft may be on board with Farid’s work, other tech industry players have so far been hesitant to embrace his algorithm.

“According to an executive at a Silicon Valley social media company who spoke on condition of anonymity to describe industry discussions, several major tech companies convened for a conference call on April 29 that was organized by Facebook to discuss Wallace and Farid’s proposal. During that call, the companies questioned the effectiveness of the concept and whether its organizers could come up with a sufficiently neutral definition of what constitutes ‘extremist’ content,” according to Groll.

“When Wallace and Farid unveiled their technology Friday they said that they have had extensive discussions with Silicon Valley, but their announcement notably did not contain any commitments by companies to use the algorithm.”

Like the algorithm for predicting attacks, the plan to automatically censor “terrorist content” raises difficult questions to which its backers have yet to offer convincing answers.

“If defining what constitutes a terrorist is a famously tricky problem, nailing down what counts as terrorist rhetoric is doubly hard,” Groll writes. “Farid himself acknowledges that his algorithm could be turned toward nefarious ends. ‘You could also envision repressive regimes using this to stifle speech,’ he said.”

Indeed, the famous ambiguity of defining terrorism may eventually find itself eclipsed by the ethical ambiguity of the tools being developed to fight it.
