Level 8 Unit, Part 2: On Controlling AI
00:01
I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.

00:25
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."

01:09
Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:30
It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States?

02:08
(Laughter)

02:12
The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:32
So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I.J. Good called an "intelligence explosion," that the process could get away from us.

02:58
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.