Counter-terror chatbot plan would automate entrapment trolling


In a post last week on Lawfare, a blog published in cooperation with the Brookings Institution, former National Counterterrorism Center official Walter Haydock argues that “Artificial Intelligence Targeting Personas” — an obscure term that does not appear to have been used publicly before, if at all — could be harnessed in the fight against online extremism. His position illustrates some of the more self-defeating trends in present military strategic thinking.

Haydock’s term, Artificial Intelligence Targeting Personas, or AITPs, does not appear in the latest Department of Defense Dictionary of Military and Associated Terms, nor does it return much in the way of Google results. What he is describing, however, is nothing particularly high-tech: Haydock’s AITPs are, essentially, chatbots. “In this proposed model,” he writes, “computer programs would replace some of the government employees whose job it is to pose as terrorist sympathizers and facilitators online.”
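
To see how modest the underlying technology is, consider that the simplest version of such a program is a rule-based responder. The Python sketch below is a deliberately generic illustration of that pattern; the rules and replies are invented for this example and come from nothing in Haydock’s post.

```python
# A deliberately generic rule-based responder: the simplest form of the
# kind of "computer program" at issue. Keywords and replies are invented
# for illustration and carry no operational content.

RULES = [
    ("hello", "Hello. How did you find this channel?"),
    ("meet", "Where are you based?"),
]
DEFAULT_REPLY = "Tell me more."

def respond(message: str) -> str:
    """Return the canned reply for the first keyword that matches."""
    lowered = message.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT_REPLY

print(respond("hello there"))    # Hello. How did you find this channel?
print(respond("anything else"))  # Tell me more.
```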

It has been known for some time that US psychological operations personnel have been crafting false “personas” to engage in various kinds of propaganda and influence operations against Islamic State sympathizers and other would-be jihadists online. The FBI’s “Online Covert Employees” are allowed to troll the internet impersonating journalists. Nor is the use of automation in fighting online extremism an entirely new development: efforts to use algorithms both to predict terrorist attacks and to censor social media have already been in the works.

Yet key ethical and strategic questions raised by those practices have yet to be answered, and Haydock’s scheme adds practical puzzles of its own. He notes, for example, that “both automated and human de-confliction would be necessary to ensure that AITPs do not accidentally target each other.”
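
It is worth pausing on what that de-confliction might involve. Here is a minimal, hypothetical sketch, assuming a shared registry of government-run handles that a persona checks before engaging anyone; the registry, handles, and functions are invented for illustration, not drawn from any actual system.

```python
# Hypothetical sketch of "de-confliction": before an automated persona
# engages a target, check whether that target is itself a government-run
# persona. Registry contents and names are invented for illustration.

# In reality this would presumably be a shared, access-controlled
# database spanning agencies; here it is just a set of handles.
GOVERNMENT_PERSONAS = {"persona_alpha", "persona_bravo"}

def should_engage(target_handle: str, human_review_queue: set) -> bool:
    """Return True only when no conflict is found for the target handle."""
    if target_handle in GOVERNMENT_PERSONAS:
        return False  # automated de-confliction: never target our own bots
    if target_handle in human_review_queue:
        return False  # hold off until a human resolves the ambiguity
    return True

# Example: accounts an analyst has flagged as possibly friendly.
pending_review = {"persona_charlie"}
print(should_engage("persona_alpha", pending_review))    # False: known persona
print(should_engage("unknown_user_42", pending_review))  # True: no conflict found
```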

And while ISIS fighter wannabes might not be the sharpest tools in the shed, Haydock acknowledges that his proposed anti-terror chatbot army would at least have to be somewhat believable — which would mean spending time generating hits for jihadi websites and online buzz for terrorist publicity materials. “For an AITP to appear genuine,” he writes, “it would itself have to exhibit some characteristics of its targets, such as viewing extremist propaganda and communicating with known terrorists.”

Haydock’s proposal, however, runs into the same issues of entrapment, and of radicalizing people who would not otherwise have become terrorists, that are associated with undercover agents in conventional, human-run anti-terrorism sting operations; the automation involved would only amplify them.

“If the target swears off violence or exhibits no signs of operational planning for an extended period of time,” Haydock writes, “then AITP operations would cease and the FBI would follow existing policy for continuing or terminating the assessment. Obviously, if the target never engages the AITP or says anything alerting, the FBI would have no grounds to continue investigating. But this is no different than in traditional HUMINT [human intelligence] operations. To be clear, I am not proposing relaxing any rules with regard to initiating assessments (although the bar is already admittedly low) or elevating them to full investigations. I am merely proposing automation of the initial stages of such efforts in an effort to save manpower and allow more frequent check-ins with subjects of potential interest.”

Whether “more frequent check-ins” by FBI chatbots with potential ISIS sympathizers will be an effective de-radicalization tool remains to be seen, though it hardly seems a sure bet.

“With regard to the Constitution, the use of AITPs would most likely draw scrutiny on First and Fourth Amendment grounds,” Haydock notes. To address the public’s First Amendment concerns, he suggests that “the FBI permanently delete all logs of communications derived from AITP-enabled assessments upon closing them. The only exception would be for any content that a human agent or analyst affirmatively identifies as evidence of investigative or national security value.”
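
To picture that retention rule concretely: Haydock is describing a policy, not an implementation, so the sketch below invents a simple in-memory log store and purges everything on closure except messages a human has flagged as evidence.

```python
# Hypothetical sketch of the retention rule: when an assessment closes,
# delete all communication logs except messages a human analyst has
# affirmatively flagged as evidence. Names and structures are invented.

from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    flagged_as_evidence: bool = False  # set only by a human agent or analyst

@dataclass
class Assessment:
    subject: str
    logs: list = field(default_factory=list)

def close_assessment(assessment: Assessment) -> None:
    """On closure, purge everything except human-flagged evidence."""
    assessment.logs = [m for m in assessment.logs if m.flagged_as_evidence]

a = Assessment("subject-001", [
    Message("small talk, nothing alerting"),
    Message("discusses operational planning", flagged_as_evidence=True),
])
close_assessment(a)
print([m.text for m in a.logs])  # ['discusses operational planning']
```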

That sounds fair enough — except that Haydock’s whole proposal is a case for “automating surveillance.” He says we should have artificially “intelligent” algorithms data mining private information gathered through state surveillance and social media platforms to identify “potential terrorists,” and then trying to lure them into committing terrorist violence. But don’t worry, your privacy will be protected.

Haydock notes that “no current federal law appears to prohibit the use of automated investigative personas” but that some FBI policies would require slight shifts to implement his plan because, among other reasons, “using ‘data mining’—targeting certain characteristics or patterns of behavior—to deploy AITPs more precisely and effectively would require additional approvals under current policy. The FBI’s Sensitive Operations Review Committee (SORC) would have to authorize these efforts, and notify Congress whenever they take place.”
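
What “data mining” to deploy AITPs “more precisely and effectively” could look like is, at its crudest, a weighted pattern match. The sketch below is hypothetical; the record fields, weights, and threshold are invented to show the shape of such targeting, not taken from Haydock or any FBI system.

```python
# Hypothetical sketch of pattern-based AITP deployment: score records
# of online behavior against weighted "characteristics or patterns,"
# and deploy a persona only above a threshold. All fields, weights,
# and the threshold are invented for illustration.

PATTERN_WEIGHTS = {"viewed_propaganda": 1.0, "contacts_of_interest": 5.0}
DEPLOY_THRESHOLD = 15.0

def targeting_score(record: dict) -> float:
    """Weighted sum of whichever pattern features appear in the record."""
    return sum(PATTERN_WEIGHTS.get(key, 0.0) * value for key, value in record.items())

def should_deploy_aitp(record: dict) -> bool:
    """Deploy only when the behavioral score clears the threshold."""
    return targeting_score(record) >= DEPLOY_THRESHOLD

print(should_deploy_aitp({"viewed_propaganda": 4, "contacts_of_interest": 1}))   # False (score 9.0)
print(should_deploy_aitp({"viewed_propaganda": 10, "contacts_of_interest": 2}))  # True (score 20.0)
```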

In wrapping up his pitch for FBI jihadist-impersonator chatbot trolls, Haydock writes that “the FBI could conduct its assessments in a strictly objective manner, judging AITP behavioral models by their success rates in leading to full investigations, arrests, and prosecutions. With such hard data, it would be very difficult for agents to justify the monitoring of individuals for any reason other than their disposition towards criminal behavior.”

Indeed, with such hard data, why should we retain any error-prone humans in the equation at all? The full investigations, arrests, and prosecutions can all be conducted by computers, which can also serve as judge, jury, and executioner while they’re at it. This is likely to be much more efficient than the old way of doing it, and will make things a lot easier on the overworked psychological operators of the US intelligence community.

Maybe one day soon, artificial intelligence will advance to the point where it can replace not only the FBI’s Online Covert Employees, but the rest of the workforce of the intelligence community along with the bureaucrats who put together its flawed and self-perpetuating policies. We might want to pull the plug on this whole operation while we still have the chance, though, before we allow our all-too-human, imperfect logic to convince us it’s an intelligent idea to create a fully automated and autonomous military-industrial complex.
