Elon Musk, the billionaire co-founder of PayPal, made headlines in August for joining a group of leading artificial intelligence experts in calling on the United Nations to ban fully autonomous weapons systems, sometimes referred to as killer robots. Yet a recent report on another Musk venture, Tesla, Inc., highlights the somewhat contradictory nature of his position.
Nearly a year and a half ago, 40-year-old Ohio resident Joshua Brown became the first person to die in a self-driving car-related accident when his Tesla Model S, driving in a semi-autonomous “autopilot” mode, collided with a tractor trailer on a highway in Florida. In January of this year, Tesla seemed vindicated to some degree when the National Highway Traffic Safety Administration (NHTSA) concluded that “Tesla didn’t cause Brown’s death,” as Wired succinctly summarized the findings at the time.
“But now there’s a new wrinkle to the story,” the publication noted in September, after a separate federal agency, the National Transportation Safety Board (NTSB), announced the conclusion of its own investigation into the incident and “declared that whether or not Brown paid heed to Tesla’s warnings, the carmaker bears some of the blame for selling a system that allowed that kind of misuse.” While the car warned Brown to retake manual control several times, the system had no built-in way to automatically stop the car or pull over.
“If automated vehicle control systems do not automatically restrict their own operation to those conditions for which they were designed and are appropriate, the risk of driver misuse remains,” the NTSB report states. Tesla has reportedly since fixed the seemingly minor, but in this case lethal, bug.
While the specific problem with Tesla’s Autopilot feature that the NTSB concluded was a contributing factor to Brown’s death may be resolved, the larger problem remains: self-driving cars are bound to kill more people. Over the past year or so, the debate over whom autonomous vehicles should kill when they are inevitably forced to decide between, say, saving their passenger or a pedestrian has been growing louder. This debate is in some ways remarkably similar to the one ongoing in high-level military circles over autonomous “killer robots.”
For many years now the United States has, of course, used remote-controlled lethal “drones,” known by their technical name as “unmanned aerial vehicles,” for targeted killing operations in various countries, primarily in the Middle East. So in a sense, we have been using “killer robots” for some time. Yet drones are ultimately human-controlled. The present debate within the military, therefore, revolves around the relative importance, or lack thereof, of keeping a human “in the loop” when it comes to life-and-death decisions. Tellingly, an expert quoted in Wired’s story on the NTSB report on the Tesla crash uses this exact same phrasing.
The report “highlights that with the introduction of ever smarter technologies, the companies developing such systems have to carefully consider how to keep the human in the loop if they want to rely on the human operator as a ‘backup,’” Bart Selman, an AI researcher at Cornell University, told the publication. (Following the earlier NHTSA report, which seemingly exonerated Tesla of blame, Selman told Wired that the conclusion was “very positive for Tesla” and “puts the whole issue of the Florida accident in the right context.”)
It’s not clear what great difference Musk sees between the autonomous killing machines that he thinks should be banned for use by the world’s militaries, and the ones that he says it would be “morally wrong” for his company not to try to sell you as soon as they have barely left beta testing.
Yet Musk is no stranger to contradiction, especially when it comes to artificial intelligence matters. He has, for example, called A.I. “our biggest existential threat,” one posing “vastly more risk than North Korea,” and which he has compared to “summoning the demon.” At the same time, however, he is pushing for people to not only make greater use of A.I. but to physically merge with it. Arguing that the “existential risk is too high not to,” Musk is trying to sell the public on a brain-computer interface being developed by his new company Neuralink.
Elon Musk enjoys not only billions of dollars in government subsidies for his companies but also seemingly incessant fawning media coverage. He is casually referred to as a “visionary” and a “genius” — yet judging by his actions and words, the label of hypocrite might be more fitting.