Following the killing of the suspect in last week's Dallas police shootings by a bomb-armed robot, apparently an improvised setup the local police department's robot was never designed for, a public outcry has erupted over police access to high-tech weaponry.
“The Dallas Police Department’s unprecedented use of an explosive-laden robot to kill an armed suspect ushers in a new phase in the militarization of U.S. police departments,” reports the Los Angeles Times. The article goes on to point out that there have been similar uses of modified machines originally built as anti-bomb robots, particularly in the military. But Dallas is certainly the highest-profile domestic policing use so far, and experts say it could have lasting implications.
“If lethally equipped robots can be used in this situation, when else can they be used?” University of California, Davis law professor Elizabeth Joh told U.S. News & World Report. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.” Other close observers of law enforcement and robotics expressed similar views.
University of Washington law professor Ryan Calo said in an interview with The Verge that because the police in Dallas were justified in using lethal force, the use of a robot doesn’t set any major precedent. He expressed concern, however, regarding other kinds of police uses of robots employing non-lethal force.
“There’s a great danger that you’ll overuse non-lethal force delivered by a robot because you don’t have situational awareness,” Calo said. “It’s too convenient. You mistakenly believe that non-lethal force is not dangerous.”
Indeed, the tendency for organizations to develop technology for one purpose, only to later use it for purposes previously unforeseen, is a well-known aspect of law enforcement research and development programs. Think of it as technological mission creep.
“Use of remote-controlled devices by law enforcement raises a range of possible questions about when and where they are appropriate,” writes David Graham for Defense One. “The advent of new police technologies, from the firearm to the Taser, has often resulted in accusations of inappropriate use and recalibration in when police use them.”
When it comes to robots, there are many reasons to be wary of such “recalibration.” In the wake of Dallas, the discussion around police militarization has also returned once again to the Pentagon’s “1033 Program,” which gives out surplus military gear to police departments around the country. According to the L.A. Times, since the program’s inception nearly two decades ago, it has given away more than $6 billion worth of equipment to over 8,000 law enforcement agencies.
But the really scary part is not that the Pentagon is actively giving away old equipment, including bomb-disposal robots that can apparently be converted into bomb-delivery bots. What is truly frightening is that the military has been extensively funding competitions and research, through the Defense Advanced Research Projects Agency (DARPA), to develop far more advanced robots, both humanoid and quadrupedal.
“The goal of the competition” to develop humanoid robots, according to Computerworld, “was to spur researchers to build more autonomous, more balanced and generally more capable robots that one day can be sent into disaster situations to turn off disabled systems, hunt for victims and assess damage.”
Indeed, in describing its Robotics Challenge, DARPA claims the admirable goal of developing “robots capable of assisting humans in responding to natural and man-made disasters.” Well, that sounds nice. No mention there of any frightening words like “explosives,” “lethality,” or “domestic law enforcement.”
Of course, given the government’s love affair with ambiguity, the twisted genius of its lawyers, and its aforementioned, well-established tendency to find creative new uses for technology it developed with taxpayer dollars for far less controversial ostensible purposes, unsettling questions come to mind despite DARPA’s reassurances. One of the more obvious: how far will the agency eventually go in stretching the definition of “man-made disasters”? If the typical U.S. government approach is any guide here, and there’s no reason to think it won’t be, the answer will be: as far as possible.