Despite recent comments from leading military technologists claiming that the Pentagon is not, and should not be, pursuing autonomous killer robot technology, research the department has simultaneously been funding seems to suggest otherwise.
“Without even realizing it, soldiers could soon be training robot sharpshooters to take their jobs,” writes Patrick Tucker, technology editor for Defense One, in a recent article on research partly funded by the U.S. Army into harnessing human brainwaves to teach machines how to recognize targets.
A team of researchers from the Army Research Laboratory and Alexandria, Virginia-based DCS Corp. “fed datasets of human brain waves into a neural network — a type of artificial intelligence — which learned to recognize when a human is making a targeting decision,” and published their results in a paper presented in March at the annual “Intelligent User Interface” conference in Cyprus. The project “branched out of a multi-year, multi-pronged program called the Cognition and Neuroergonomics Collaborative Technology Alliance,” Tucker reports.
“We know that there are signals in the brain that show up when you perceive something that’s salient,” one of the paper’s authors, Matthew Jaswa of DCS Corp., reportedly said. These signals, known as P300 responses, are “basically the brain’s answer to a quick-decision task,” Tucker writes.
“The researchers hope their new neural net will enable experiments in which a computer can easily understand when a soldier is evaluating targets in a virtual scenario, as opposed to having to spend lots of time teaching the system to understand how to structure different individuals’ data, eye movements, their P300 responses, etc.,” Tucker’s report continues. “The goal, one day, is a neural net that can learn instantaneously, continuously, and in real-time, by observing the brainwaves and eye movement of highly trained soldiers doing their jobs.”
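The core technique the researchers describe, training a classifier to recognize when a brainwave recording contains a P300 response, can be illustrated in miniature. The sketch below is not the Army/DCS Corp. system; it uses synthetic single-channel "EEG" traces and a bare-bones logistic-regression layer (the simplest possible neural net), purely to show the shape of the problem: label epochs by whether they contain a positive voltage deflection around 300 milliseconds, then learn to detect it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG epochs: each sample is a short voltage
# trace from one electrode. Real P300 work uses multi-channel EEG and
# far more careful preprocessing; this toy data is purely illustrative.
n_samples, n_timesteps = 400, 64
t = np.linspace(0, 1, n_timesteps)  # 1-second epoch, stimulus at t=0

def make_epoch(has_p300):
    trace = rng.normal(0.0, 1.0, n_timesteps)  # background noise
    if has_p300:
        # P300: a positive deflection roughly 300 ms after the stimulus
        trace += 2.0 * np.exp(-((t - 0.3) ** 2) / 0.005)
    return trace

y = rng.integers(0, 2, n_samples)            # 1 = epoch contains a P300
X = np.stack([make_epoch(label) for label in y])

# Minimal "neural net": one linear layer plus a sigmoid (logistic
# regression), trained with plain batch gradient descent.
w = np.zeros(n_timesteps)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P300 probability
    grad = p - y                             # gradient of cross-entropy loss
    w -= lr * (X.T @ grad) / n_samples
    b -= lr * grad.mean()

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

With a signal this clean the learned weights concentrate around the 300 ms mark, which is exactly the structure the real research exploits: the informative part of the trace is time-locked to the decision event, so a network can learn to flag it without hand-built rules.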
While current research may still be focused on accurately reading and translating the brainwaves of soldiers in combat, it is not hard to see how such work could be incorporated into autonomous killer robot technology.
“If you can improve this to the point where you can put it on guys in the field, you can get to the point where they’re just looking at things and doing their normal tasks,” Jaswa reportedly said. “All their years of experience that feed into that normal situational awareness. We’re peeking into what their brains are doing. If you can have enough guys in a squad looking at similar things, then we can say, ‘Three or four guys looked at this thing. It’s probably important.'”
When it comes to “a system that allows the weapon to take a given life without the intervention of a human,” Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, said last month, “I don’t think we need to go there.” Other top Pentagon technology strategists have recently made similar comments.
Yet judging from open sources alone, it is clear that even as officials pay lip service to caution and restraint in the area of artificial intelligence, the military continues to fund research into components that could eventually be combined into autonomous killer robots. Given that such machines quite understandably inspire widespread fear of their potential to wipe out humanity, it makes sense that officials would engage in such public relations posturing. But it also follows that what officials are telling the public about research in this field is quite likely just the tip of the iceberg.