Stephen Hawking is known by many as the greatest thinker and physicist of our time. Though he recently made headlines for explaining how to escape from a black hole, he has been a staunch critic of the artificial intelligence movement, warning us of the possible dangers of hyper-intelligent machines. Back in July, Hawking, along with tech luminaries Elon Musk and Steve Wozniak, signed an open letter calling for a ban on offensive autonomous weapons. What's Hawking so afraid of, exactly? He took to Reddit to answer the public's burning questions on artificial intelligence and how it could spell the end of humanity.
2. Should We Be Afraid of a Terminator-like Future?
Q: Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call "The Terminator Conversation." My point in this conversation is that the dangers of AI are overblown by the media and by reporting that doesn't understand the field, and that the real danger is the same danger in any complex, less-than-fully-understood code: edge-case unpredictability. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson's Terminator-style "evil AI" is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?
Stephen Hawking: You're right: media often misrepresent what is actually said. The real risk with AI isn't malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.
(Question courtesy of Reddit user thisisjustsomewords)
Q: Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done?
Stephen Hawking: If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
(Question courtesy of Reddit user mixedmath)
4. Should We Prepare Early for an Inevitable Boom in A.I.?
Q: The idea of a "conscious" or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?
Stephen Hawking: The latter. There's no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don't trust anyone who claims to know for sure that it will happen in your lifetime or that it won't happen in your lifetime. When it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let's start researching this today rather than the night before the first strong AI is switched on.
(Question courtesy of Reddit user fat-chunk)
Q: One might think it impossible for a creature to ever acquire a higher intelligence than its creator. Do you agree? If yes, then how do you think artificial intelligence can ever pose a threat to the human race (their creators)?
Stephen Hawking: It's clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
(Question courtesy of Reddit user minlite)
Q: I've thought lately about biological organisms' will to survive and reproduce, and how that drive evolved over millions of generations. Would an AI have these basic drives, and if not, would it be a threat to humankind?
Stephen Hawking: An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.
(Question courtesy of Reddit user demented_vector)
Q: Your fear of A.I. appears to stem from the assumption that A.I. will act like a new biological species competing for the same resources or otherwise transforming the planet in ways incompatible with human (or other) life. But the reason that biological species compete like this is because they have undergone billions of years of selection for high reproduction. Essentially, biological organisms are optimized to 'take over' as much as they can. It's basically their 'purpose'. But I don't think this is necessarily true of an A.I. There is no reason to surmise that A.I. creatures would be 'interested' in reproducing at all. I don't know what they'd be 'interested' in doing. I am interested in what you think an A.I. would be 'interested' in doing, and why that is necessarily a threat to humankind that outweighs the benefits of creating a sort of benevolent God.
Stephen Hawking: You're right that we need to avoid the temptation to anthropomorphize and assume that AIs will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.
(Question courtesy of Reddit user ChesterChesterfield)