Elon Musk Worried AI Will Lead to Real-life Terminator
Musk said there are potential dangers that could come from advances in artificial intelligence.
By Steve Crowe - Filed Jun 19, 2014

First it was renowned scientist Stephen Hawking; now Elon Musk is expressing concerns about the potential dangers of artificial intelligence (AI).

Musk, an entrepreneur and the founder of Tesla and SpaceX, has invested in multiple AI companies. During a recent appearance on CNBC, however, Musk said he is invested in those companies not for financial return, but to "keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there."

Musk even mentioned the 1984 classic The Terminator, saying it depicts the type of situation humans need to avoid.

"Yeah. I mean, I don’t think – in the movie The Terminator, they didn't create A.I. to – they didn't expect, you know some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish inquisition. It’s just – you know, but you have to be careful. Yeah, you want to make sure that –"

Here's the full discussion between Musk and CNBC's Kelly Evans and Julia Boorstin (via Seattle Pi):

JULIA BOORSTIN: Now, I have to ask you about a company that you invested in. As you said, you make almost no investments outside of SpaceX and Tesla.

ELON MUSK: Yeah, I’m not really an investor.

JB: You’re not an investor?

EM: Right. I don’t own any public securities apart from SolarCity and Tesla.

JB: That's amazing. But you did just invest in a company called Vicarious Artificial Intelligence. What is this company?

EM: Right. I was also an investor in DeepMind, before Google acquired it, and Vicarious. Mostly I sort of – it's not from the standpoint of actually trying to make any investment return. It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome there and we need to –

KELLY EVANS: Dangerous? How so?

EM: Potentially, yes. I mean, there have been movies about this, you know, like 'Terminator.'

KE: Well yes, but movies are – even if that is the case, what do you do about it? I mean, what dangers do you see that you can actually do something about?

EM: I don't know.

JB: Well why did you invest in Vicarious? What exactly does Vicarious do? What do you see it doing down the line?

EM: Well, I mean, Vicarious refers to it as recursive cortical networks. Essentially emulating the human brain. And so I think –

JB: So you want to make sure that technology is used for good and not Terminator-like evil?

EM: Yeah. I mean, I don’t think – in the movie The Terminator, they didn't create A.I. to – they didn't expect, you know, some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish Inquisition. It’s just – you know, but you have to be careful. Yeah, you want to make sure that –

KE: But here is the irony. I mean, the man who is responsible for some of the most advanced technology in this country is worried about the advances in technology that you aren't aware of.

EM: Yeah.

KE: I mean, I guess that is why I keep asking: so what can you do? In other words, this stuff is almost inexorable, isn’t it? How, if you see that there are these brain-like developments out there, can you really do anything to stop it?

EM: I don't know.

JB: But what should A.I. be used for? What's its best value?

EM: I don't know. But there are some scary outcomes. And we should try to make sure the outcomes are good, not bad. Yeah.

KE: Or escape to Mars if there is no other option.

EM: The A.I. will chase us there pretty quickly.

Hawking and three other scientists published an article saying AI could be "the biggest event in human history" and also "the last." Hawking cites several achievements in the field of AI, including self-driving cars, Siri and Watson, the IBM computer that won Jeopardy! However, they warn, "such achievements will probably pale against what the coming decades will bring."

The scientists continue, “The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

"Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation."

The scientists write that there may be nothing to prevent machines with superhuman intelligence from self-improving. "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
