Level 8, Unit Part 2: On Controlling AI
00:01 I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

00:25 I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, “Interesting. I like this TED Talk.”

01:09 Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:30 It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

02:08 (Laughter)

02:12 The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:32 So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I.J. Good called an “intelligence explosion,” that the process could get away from us.

02:58 Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.