CIA, DoD eager to tap AI capabilities they don’t understand


An artificial intelligence arms race seems to be officially underway following Russian President Vladimir Putin’s comment last week that whoever comes to dominate the field “will be the ruler of the world.”

This is one area, at least, where top U.S. policymakers can apparently agree with Putin. The military and the Central Intelligence Agency are pushing ahead with A.I. development at full speed — despite their admitted inability to understand how it works.

The technology has "got to be better integrated by the Department of Defense," James Mattis, the secretary of defense, said last month. Meanwhile, the CIA is currently working on 137 A.I.-related pilot projects or "experiments," according to Dawn Meyerriecks, the agency's deputy director for science and technology.

Despite Mattis's comments, the military has in fact been moving towards greater A.I. integration for some time, as I've previously noted. Earlier this year, for example, the Pentagon launched a new "Algorithmic Warfare Cross-Functional Team," whose leaders have since said, with notable enthusiasm, that A.I. "is as big as the introduction of nuclear weapons" and that the military wants to "open Pandora's box" when it comes to the technology.

Meanwhile, the Defense Department is actively funding research to harness human brainwaves for the purpose of teaching machines to identify targets to shoot, even as top military officials continue to claim that the U.S. will stay out of the fully automated killer robot game, although they've hinted that the policy could change.

Perhaps some of the mixed messages over what A.I. will be used for stem from a lack of clarity about what the term means. "AI is poorly understood in part because its definition is constantly evolving," writes Robert Button, a senior operations research analyst at the RAND Corporation (an influential think tank that advises the government on various military- and technology-related policies), in a recent Real Clear Defense article. "As computers master additional tasks previously thought only possible by humans, the bar for what is considered 'intelligent' rises higher."

Button notes that artificial intelligence has a wide range of potential military uses. “AI could be used in training systems,” he writes. “For example, it could provide unpredictable and adaptive adversaries for training fighter pilots. Computer vision, the ability of software to understand photos and videos, could greatly help in processing the mountains of data from surveillance systems or for ‘pattern-of-life’ surveillance. Facial recognition AIs are developing rapidly (including in China). Augmented reality can be used to close ‘skill gaps’ in complex maintenance; it is now being used by international airlines.”

Some of these, such as "facial recognition AIs" and "'pattern-of-life' surveillance," may sound creepy. But Button's predictions get even more dystopian.

“Other suggested applications might include: using AIs […] to automate combat in so-called manned-unmanned operations; to speed weapon development and optimization, and for identifying targets (as well as non-combatants),” he writes. Some of the research towards these applications, as I’ve previously noted, is already underway.

“However, there are also implications from AI adoption by the military,” Button writes, in summing up. “The military’s current verification and validation process is meant for frozen software and is not suited to AIs that learn. Tainted data, possibly from adversaries, might have fatal consequences,” he writes, adding that “data will be critical, since learning AI success depends critically on data.”

Button also notes, almost as an afterthought, that it “is also hard to trust a system that cannot be understood.”

This statement may reveal more than Button intended about the kind of thinking, fake or otherwise, prevalent in Washington policy-making circles when it comes to A.I., especially as it seems to echo the CIA's Meyerriecks, who pointed out that "you can't go to leadership and make a recommendation based on a process that no one understands."

It may be the case that no one at the Pentagon or Langley really knows what they're getting into with artificial intelligence. But if you ask the people making the decisions in those shadowy halls of power, they might also tell you that's beside the point. It's not important that we know exactly what Skynet will look like; what's important is that we invent it before the Russians or the Chinese do.

“I just want to go faster than they can keep up,” says a seemingly terrified Meyerriecks. “If there’s a bear in the woods, you just have to be faster than the slowest person.”
