It’s been a busy year in the world of robotics and the politics surrounding it. The latest development comes in the form of potentially conflicting comments from two Pentagon officials, hinting at what the future of an international killer robot ban (or lack thereof) might look like.
Not only has the job-killing potential of automation started to register with the public as companies like McDonald’s move to replace cashiers with touch screens, but the people-killing potential is beginning to as well. An international “Campaign to Stop Killer Robots” has been gaining traction, even as governments around the world continue to take troubling steps towards developing autonomous weapons systems.
In the United States, 2017 has seen the establishment of an “Algorithmic Warfare Cross-Functional Team” at the Pentagon to harness the power of the weaponized algorithm — a capability that Defense Department officials have called “as big as the introduction of nuclear weapons” and which they hope will allow them to “start to open Pandora’s box and go after all of these other challenges across the department.”
One of the biggest investments the U.S. government has made in robotics in recent years, meanwhile, does not necessarily seem to have paid off. Following publication of the results of an investigation I conducted in collaboration with the Boston Institute for Nonprofit Journalism, which appeared in the Massachusetts alternative newspaper Dig Boston last month, it was announced that Alphabet, the parent company of Google, would sell robotics company Boston Dynamics to the Japan-based multinational corporation SoftBank for an undisclosed sum.
That article, “Critical History: Lobotomass,” detailed some of the bizarre history of the military-industrial complex in Massachusetts. Among other things, it noted that Boston Dynamics, a spinoff of the Massachusetts Institute of Technology that makes some of the world’s most sophisticated four-legged, bipedal and wheeled robots, and that received over $100 million in Pentagon funding between 2010 and 2014, sits right next door to what was once a Central Intelligence Agency front company called the Scientific Engineering Institute (SEI), which also had ties to MIT.
The SEI was a truly strange facility, engaged in work ranging from attempts to harness the paranormal as part of the CIA’s “MK-Often” program to research in electrical stimulation of the brain. This latter work led SEI researchers down a dark path of attempting to create not only animals but humans who could be guided by remote control. Around the same time that this effort apparently ended in failure (fatally, for some), the SEI began going through a series of name changes and ownership rearrangements before its main building next to what is now Boston Dynamics was eventually demolished, though the government quietly continued its quest to engineer the “super soldier.”
Rumors had circulated for more than a year that Alphabet was looking to sell Boston Dynamics, and potential buyers were thought to include Amazon and Toyota, but the announcement of the sale to SoftBank came as a surprise, just over a week after publication of our article. SoftBank is an international tech conglomerate run by Masayoshi Son, Japan’s richest man, which, prior to buying both Boston Dynamics and the “secretive” robotics company Schaft from Alphabet, was already making a robot that can “read human emotion.”
U.S. taxpayers may feel some discomfort at the thought that more than $100 million of their money was diverted to high-tech robotics research that now, just a few years later, is in the hands of a foreign company.
Top Pentagon brass, however, may not be too concerned. Boston Dynamics’ machines are, in appearance, about the closest thing you could find to a “Terminator”-type robot in the U.S. today, at least as far as unclassified tech goes, and like Google, the military might be OK with ditching Boston Dynamics’ “nightmare-inducing” bots for public relations reasons. Luckily, our nation’s highest-ranking military minds have given at least some thought to the idea that creating fully autonomous killer robots could cause problems. They call it the “Terminator Conundrum,” and, until recently at least, they seemed to think that giving artificial intelligence a license to kill would be taking things a step too far.
Yet that, too, may be changing. The military has been funding research that harnesses human brainwaves in an effort to teach artificial intelligence to identify targets to shoot. Pentagon rhetoric surrounding autonomous weapons systems has long stressed the importance of having a human “in the loop” when it comes to life and death decisions. Comments earlier this month from Col. Drew Cukor of the Algorithmic Warfare Cross-Functional Team, however, seem to hint that the Pentagon’s thinking — or simply its public posturing — could be shifting.
“When we get to the point where we’re talking about decision-making, we’ll have another debate, but we’re not anywhere near that now,” Cukor reportedly said on the topic of “targeting and trigger-pulling” at the Defense One Tech Summit in Washington, D.C.
Meanwhile, Russia has reportedly been working on exactly the kind of autonomous weapons systems the U.S. says it doesn’t want to get involved with. (It also might be worth noting, though, that while nuclear-armed, Russia still has a military budget less than an eighth the size of the United States’ and can’t even afford to bring its single Soviet-era aircraft carrier up to date for the 21st century, which is likely why it often pursues its geopolitical goals using asymmetric tactics such as the kind of information operations that have received so much media attention in the past year.) China is also reportedly working on similar autonomous weapons systems.
Though the military sees a potential A.I. threat from America’s conventional nation-state adversaries, as I noted back in March, the bigger “general A.I.” threat could actually come from Silicon Valley, where tech companies have been excitedly hyping the potential of “brain-computer interfaces” this year. Ironically, though, when one of the most prominent pushers of this idea, Elon Musk, tried to warn people that killer robots should be banned by signing a letter saying as much (along with well-known A.I. and robotics researchers, tech executives and academics), he was accused of being an A.I. “alarmist” and of creating a distraction “from our real A.I. problems.”
Yet killer robots may be much closer to qualifying as a real problem than some A.I. enthusiasts writing code in Silicon Valley realize. Since 2012, the Pentagon has been guided by a policy document called Department of Defense Directive 3000.09 when it comes to the issue of autonomous weapons systems. “In effect, it constitutes the world’s first moratorium on lethal fully autonomous weapons,” according to Human Rights Watch. That directive is set to expire this year.
Thankfully, killer robots seem unlikely to get the green light — for now, at least — though we do have a notoriously unpredictable U.S. president, and military officials have undoubtedly been inching closer towards a broader embrace of autonomous weapons in recent years. Yet while Cukor may want to “have another debate,” top DoD officials aren’t so sure.
Gen. Paul Selva, Vice Chairman of the Joint Chiefs of Staff, made the case last week for “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”
Selva, who has repeatedly referenced the “Terminator” movies in discussions of A.I., said he doesn’t think “it’s reasonable for us to put robots in charge of whether or not we take a human life,” and predicted “there will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action.” He added that he was “an advocate for keeping that restriction,” but that “doesn’t mean that we don’t have to address the development of those kinds of technologies and potentially find their vulnerabilities and exploit those vulnerabilities.”
Regardless of whether the U.S. refrains from developing fully autonomous weapons systems or eventually uses its significantly weaker military rivals’ research and development programs as an excuse to create them, such technologies are likely to remain the subject of increasing news coverage and debate.
The arms race to create this potential doomsday technology may just now be picking up pace, but as automation and A.I. begin to seep into every aspect of a world already consumed by violence and war, it shows no signs of de-escalating without a significant public backlash. Organized opposition to autonomous weapons systems may not have achieved a high profile yet, but like so much else in the world of robotics, perhaps we’ll see that start to change in 2017.
