In 2016, after giving the matter “thoughtful consideration,” tech executive Eric Schmidt told the public they should “stop freaking out” about potential pitfalls of artificial intelligence. Schmidt recently announced his resignation from the chairmanship of Google’s parent company Alphabet, though he is apparently continuing not only as an Alphabet board member but also as head of the quasi-governmental Defense Innovation Advisory Board.
Schmidt recently commented again on similar subjects, and although 2017 saw growing interest, including among many experts on the topic, in an international ban on fully autonomous weapons or “killer robots,” Schmidt apparently sees no urgency in such a measure. When it comes to “all the movie-inspired death scenarios” surrounding robotics and A.I., “I can confidently predict to you that they are one to two decades away,” he reportedly told attendees of a February conference in Munich, Germany. “So let’s worry about them, but let’s worry about them in a while.”
Schmidt reportedly said at the same conference that he wouldn’t put A.I. “in charge of command and control,” but also added that the U.S. needs to compete in further developing the technology with China, where, as has been previously discussed here and elsewhere, there is a “rapidly expanding system of algorithmic surveillance.”
Even more recently than those comments, however, it emerged this week that Google has been quietly working on a Defense Department pilot program to use A.I. to analyze footage from unmanned “drone” aircraft as part of Project Maven. The launch of Project Maven was first announced last year and involved the creation of an “Algorithmic Warfare Cross Functional Team” that one of the project’s main leaders, Air Force Lt. Gen. John N.T. “Jack” Shanahan, said last spring he hoped would allow the military to “open Pandora’s box” of A.I.
After information about Google’s involvement with Project Maven was posted on an internal mailing list last week, it reportedly became a hot topic of discussion within the company. Gizmodo, which first reported on Google’s involvement, noted that Schmidt discussed tech industry concerns about military use of A.I. last fall. “There’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly,” Schmidt said at the time.
“Some Google employees were outraged that the company would offer resources to the military for surveillance technology involved in drone operations, sources said, while others argued that the project raised important ethical questions about the development and use of machine learning,” according to Gizmodo’s report published this week. “While Google says its involvement in Project Maven is not related to combat uses, the issue has still sparked concern among employees, sources said.”
It may be the case that Google’s A.I. involvement with Project Maven is limited to watching for things like, to use an example borrowed from the military planners themselves, a given white pickup truck. And when it comes to making life-and-death decisions about whether to obliterate said truck and its occupants with a Hellfire missile, for now at least, there will still be a “human in the loop.”
In this context it may also be worth noting, however, that until last year Google still owned Boston Dynamics, the maker of bipedal, quadrupedal and wheeled robots that has received over $100 million in Pentagon funding. Google sold the controversial robotics company, some of whose products its own founder describes as “nightmare-inducing,” just over a week after I reported, in an article for the Boston Institute for Nonprofit Journalism and alternative newspaper Dig Boston, on the history of a Central Intelligence Agency front company called the Scientific Engineering Institute (SEI) that once operated right next door to Boston Dynamics’ present location.
Boston Dynamics has made videos cheekily claiming that “no robots were harmed in the making of this video,” even though the company frequently shoves, kicks, and hits its robots with sticks on camera to demonstrate the machines’ ability to withstand human efforts to stop them. While obviously meant as a joke, the snarky disclaimer is less funny in light of the fact that the SEI that once sat next door was the site of gruesome experiments aimed at turning animals and even people, including prisoners of war in Vietnam who were later killed, into remote-controlled automatons for use in CIA surveillance and assassination schemes by implanting electrodes in their brains.
As investigative journalist Nafeez Ahmed has reported in great detail, Google has long had ties to what Ahmed calls the “shadow intelligence community,” through its connections to the CIA’s venture capital investment firm In-Q-Tel and to an even more obscure organization known as the Highlands Forum.
In this context it is not entirely surprising that Google is developing A.I. for military drones; nor is it an incredible shock that Eric Schmidt, having resigned his chairmanship of Alphabet, is perhaps more enamored with his role as a liaison between Silicon Valley and the Pentagon, with its virtually bottomless pit of funding, than with his more widely known role as a tech executive. It is also predictable that Schmidt would use every opportunity to speak on the topic of A.I. to downplay its dangers and hype its potential benefits, as he has been doing.
It would also come as no surprise, however, to learn that the billionaire tech mogul and military-industrial complex go-between is being disingenuous about his understanding of the risks associated with A.I., and is simply betting, correctly or not, that his company and his network of connections to the “shadow intelligence community” can profit from it without turning the world into a nightmarish dystopia resembling something straight out of George Orwell’s 1984 or the Terminator movies. Or alternatively, that if they do exactly that, a person of Schmidt’s social standing will be on the commanding end of such a system, rather than under its control.