As autonomous weapons systems, sometimes called killer robots, move closer to becoming a logistically feasible option for governments around the world, they are also generating increasing controversy.
More than 2,400 researchers from 170 different organizations called this week for a worldwide ban on autonomous weapons. “Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI,” reads the new pledge from the Future of Life Institute. “In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.”
UPI reports that Massachusetts Institute of Technology professor and noted physicist Max Tegmark, president of the FLI, said he is “excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” adding that “AI has huge potential to help the world — if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way.”
With any luck, the defense establishment in the U.S. and elsewhere will eventually come to its senses and listen to the experts conducting ongoing research on this issue. At present, however, those driving decision-making at the Department of Defense continue to take a gung-ho approach to AI. Despite Google recently dropping its contract for the controversial Project Maven after a public relations backlash, for example, the military apparently has no plans to give up on its “algorithmic warfare” efforts to “open Pandora’s box” of AI.
Just this week, meanwhile, a key Pentagon official, Thomas Michelli, the DoD’s acting deputy chief information officer for cybersecurity, reportedly said that “you’ll see we’re moving dollars and resources to artificial intelligence,” while acknowledging that he was “tapdancing a little bit here, because we’re getting ready to make a big announcement, coming out in weeks, and I don’t get ahead of that.”
At the same time, the Defense Advanced Research Projects Agency has announced the launch of a new program with the cutesy acronym SHRIMP (for “SHort-Range Independent Microrobotic Platforms”) and related “Olympic-themed contests” with the aim of developing tiny, insect-like robots.
In reporting on the SHRIMP initiative, Defense One‘s Patrick Tucker notes that the military “has been studying ‘insect cyborgs’ since 2006.” This may be an understatement. As documented in such sources as Annie Jacobsen’s book on the history of DARPA, The Pentagon’s Brain, the development of so-called “biosystems,” “biohybrids,” and “biomimetic” technologies has undergone a closely linked and continuous, though largely classified, evolution over a period of decades.
In an article published last year for Massachusetts alternative newspaper Dig Boston and the Boston Institute for Nonprofit Journalism, I explored the connection between a defunct CIA front company called the Scientific Engineering Institute, where early experiments of this nature were planned or carried out not only on animals but on people, and the modern-day robotics company Boston Dynamics, which now sits next door to the SEI’s former location.
Those early experiments on animals such as dogs, crows and snakes largely centered on developing remote-controlled assassination animals or, as in the case of the famous “acoustic kitty,” engineering flesh-and-blood surveillance devices.
Now that scientists and commercial developers of these technologies are seemingly beginning to take the ethical dimensions of their work seriously, we are seeing some steps in the right direction. The latest pledge from the FLI is certainly a significant one, and if the U.S. government in coordination with other governments can come to an agreement on banning autonomous killing machines altogether, that would surely be another.
That being said, even the prospect of taxpayer dollars being spent to develop tiny surveillance robots indistinguishable from insects is a disheartening one. Like so much technology originally developed for the military, such robots would likely end up in the hands of local police in the U.S. within a few years of their battlefield debut abroad. It underscores the need to rein in programs, which “SHRIMP” seems more than likely to become, that blindly throw money at the most blatantly counterproductive forms of “innovation” the military-industrial complex can seemingly think up.
