President Obama spoke at the “White House Frontiers Conference” in Pennsylvania on Thursday, discussing a wide range of technology-related issues with other panelists, from artificial intelligence to healthcare to cybersecurity. If you are reading this post, though, it may be because you maintain a healthy skepticism towards the “tech sector” or the “innovation space” or whatever it is going by these days, and the almost uniformly uncritical press that it gets.
As is often the case with these kinds of events, much of the discussion was framed in overwhelmingly positive rhetoric, but at a few points in the video, Obama’s words regarding technology take on a decidedly ominous tone. The entire conference video runs over an hour, but if you don’t have that kind of time to devote to it, here are a few key moments that stood out:
♦ About 7 minutes 10 seconds into the video, the president talks about how he gets “riled up” at those who “willfully ignore facts or stick their heads in the sand about scientific consensus.” He mentions climate change, then goes on: “They’re doing everything they can to gut funding for research and development. Failing to make the kinds of investments that brought us breakthroughs like GPS and MRIs and put Siri in our smart phones, and stonewalling even military plans that don’t adhere to ideology. That’s not who we are.”
It is not at all clear from the context of Obama’s speech who these science-denying military stonewallers are, but he adds that refusing to listen to science just because the results aren’t what you want, or don’t fit with your ideology, is “the path to ruin.”
♦ Later in the video (around 41:40), the president gives an interesting answer to a question from the panel moderator, surgeon and public health researcher Atul Gawande. Gawande mentions that he was notified that his background records on file with the government — from when he got a security clearance to work in the Clinton administration — were hacked.
“If you can hack all of my background records, now suppose you can hack my genetic information, all of my electronic records, my mental health information and more (…) How do we trust that this research is in the right hands?” Gawande says.
The president says, essentially, that cybersecurity is a serious, ongoing challenge and that his administration is working on getting better at it, but then makes some interesting comments toward the end of his answer. “Here’s the only thing I would say though,” President Obama says. “The opportunities to hack your information will be just as great or greater in a poorly-integrated, broken-down healthcare system as it will be in a highly-integrated, effective healthcare system. So I think it’s important for us not to overstate the dangers of — the very real dangers — of cybersecurity and ensuring the privacy of our health records. We don’t want to so overstate it that that ends up becoming a significant impediment to us making the system work better.”
Not exactly a reassuring answer to the question. It is hard to follow Obama’s reasoning here as to how taking the security of health data seriously could become an “impediment to us making the system work better.”
♦ At one more point in the video (58:55), towards the end of the panel discussion, President Obama once again seems to propose policies that many would find alarming.
“We’re going to have to rebuild, within this wild wild west of information flow, some sort of curating function that people agree to. You know I use the analogy in politics. It used to be there were three television stations and Walter Cronkite’s on there and not everybody agreed and there were always outliers who thought that it was all propaganda and we didn’t really land on the moon and Elvis is still alive and so forth, but generally that was in, you know, the papers you bought at the supermarket right as you were checking out, and generally people trusted a basic body of information. It wasn’t always as democratic as it should’ve been,” Obama said.
“(…) But there has to be I think some sort of way in which we can sort through information that passes some basic ‘truthiness’ tests and those that we have to discard because they just don’t have any basis in anything that’s actually happening in the world. And that’s hard to do, but I think it’s going to be necessary, it’s going to be possible. I think the answer is obviously not censorship, but it’s creating places where people can say ‘this is reliable.’ And I’m still able to argue safely about facts and what we should do about it, while still not just making stuff up,” the president continued.
Timed to coincide with its Frontiers Conference, the White House also released a report this week titled “Preparing for the Future of Artificial Intelligence.” While much of the report is enthusiastically supportive of A.I., it also notes potential problems.
“AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias. It is important that anyone using AI in the criminal justice context is aware of the limitations of current data,” it points out. Point taken. Also, the report notes that “one of the main factors limiting the deployment of AI in the real world is concern about safety and control. If practitioners cannot achieve justified confidence that a system is safe and controllable, so that deploying the system does not create an unacceptable risk of serious negative consequences, then the system cannot and should not be deployed.” Probably good advice there.
The report also features a section on “AI in Weapon Systems,” noting that the U.S. has incorporated autonomous components into some kinds of weapons systems for decades. “These technological improvements may allow for greater precision in the use of these weapon systems and safer, more humane military operations. Precision-guided munitions allow an operation to be completed with fewer weapons expended and with less collateral damage, and remotely-piloted vehicles can lessen the risk to military personnel by placing greater distance between them and danger. Nonetheless, moving away from direct human control of weapon systems involves some risks and can raise legal and ethical questions,” the report notes.
“Over the past several years, in particular, issues concerning the development of so-called ‘Lethal Autonomous Weapon Systems’ (LAWS) have been raised by technical experts, ethicists, and others in the international community (…) although it is clear that there is no common understanding of LAWS. Some States have conflated LAWS with remotely piloted aircraft (military ‘drones’), a position which the United States opposes, as remotely-piloted craft are, by definition, directly controlled by humans just as manned aircraft are. Other States have focused on artificial intelligence, robot armies, or whether ‘meaningful human control’ – an undefined term – is exercised over life-and-death decisions. The U.S. priority has been to reiterate that all weapon systems, autonomous or otherwise, must adhere to international humanitarian law, including the principles of distinction and proportionality.”
It is somewhat encouraging to see the U.S. government at least paying lip service to the idea of taking a cautious approach to the development of A.I. technology. It is simultaneously clear, however, that it has its own reasons for wanting to push ahead aggressively with A.I. research. It remains to be seen whether, in the case of artificial intelligence — as with so much that the U.S. government does — its actions will speak louder than its words.