As the U.S. carries its “counter-terror” crusade against a wide range of non-state actors, both within its own borders and beyond them, into an era of ever-increasing information flows, the Pentagon is having trouble keeping up with its own intelligence collection capabilities.
“The Department of Defense is grappling with an overwhelming preponderance of data, so much so that it can’t process it all,” according to a recent article from the military tech news site C4ISRnet. (C4ISR is military jargon for command, control, communications, computers, intelligence, surveillance and reconnaissance.)
“The department needs to take a comprehensive approach at what is known as processing, exploitation and dissemination, or PED in DoD parlance, according to the director of Defense Intelligence (Warfighter Support), which falls under the purview of the Office of the Under Secretary of Defense for Intelligence,” the article continues.
The suggested solution for this “PED” problem? If you guessed “more automation,” you’d be correct.
“Describing industry as light-years ahead of DoD on this front, (Lt. Gen. Jack) Shanahan outlined a coordinated approach to surveying a vast array of sectors from industry to the research labs within DoD to Silicon Valley. While they’ve discovered a lot of one-off projects and ideas, they are not focused on the war-fighter requirements,” writes C4ISRnet’s Mark Pomerleau.
“The goals, under a working title of the ‘Go Big Project,’ are to immediately inject industry’s best technologies in artificial intelligence and deep learning. DoD can take care of the ‘processing’ and ‘dissemination’ portion of PED, Shanahan said. It’s the ‘E’ in ‘exploitation’ — it’s the analyst’s time of looking at a video for 12 hours a day to see if the white pickup truck left or entered the compound — that the department needs help with. A machine can watch for the pickup truck, he said.”
The idea that America should increase its use of A.I. and automation in fighting its Global War on Terrorism (or in overcoming “gray zone challenges,” or whatever the preferred terminology is at the moment) is not entirely new. In November, former National Counterterrorism Center official Walter Haydock suggested in a post on the blog Lawfare that the Federal Bureau of Investigation should replace some of its counter-terror internet trolls with “Artificial Intelligence Targeting Personas,” or, in other words, chatbots. Other suggestions have included using algorithms to predict terrorist attacks before they happen and to censor social media.
Yet the suggestion of automating tasks such as monitoring video feeds signals a significant step: a broader embrace of automation across all areas of military and counter-terror intelligence collection, not just the realms of cyberspace and social media, which might reasonably be expected to pose particularly novel challenges to the officials tasked with tackling them, such as career FBI agents. Shanahan’s recent suggestion was apparently not tailored to a narrow set of circumstances; rather, it was reportedly put forth as a potential solution to the problem of “an overflow of data — everything from publicly available data to open-source intelligence to the most sensitive classified material.”
It makes sense that the Defense Department, as well as agencies like the FBI, would look into automating some of their more mundane intelligence-related tasks. Yet any embrace of further automation in this area should be considered carefully. It would certainly be unwise to put life-and-death decisions with potentially major ramifications for U.S. foreign policy under the control of artificial intelligence.