Stephen Hawking fears that future AI will have no strings attached

Stephen Hawking’s fear of artificial intelligence has been well documented over the last several years. He originally made some strong comments about the future of artificial intelligence and how it might hurt humanity in the long run, and the debate only expanded when others from the tech and science worlds backed his viewpoint. Interestingly, the question has become a commonplace one. Some think that computers or artificial intelligence are only as powerful as the people who program them; others believe there will come a time when computers outsmart humans, outrank them, and eventually begin making decisions for themselves.

Now, that might seem a little far-fetched, but the truth is that it isn’t as much like a plot line from a science-fiction movie as one might be led to believe. Right now, artificial intelligence, computers, and machines all play a pivotal role in keeping the world running. Logically, it follows that as those computers and systems become more powerful, they could eventually cause problems for humans. Many people, however, believe those ideas are foolish, or far ahead of their time.


Hawking has said, though, that within the next 100 years artificial intelligence could become so powerful that it overtakes humans as the dominant species on Earth. He said in part, “Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.” While his sentiments might seem a bit catastrophic, they are realistic in terms of what he is actually talking about.


However, that would require artificial intelligence actually reaching that point first, which alone could be at least a hundred years away. Hawking continued by pointing out that, “Our future is a race between the growing power of technology and the wisdom with which we use it.” At the end of the day, this is what the race comes down to: humans understanding what they are working with and not building beyond their own goals or mission. Even with his legitimate, clearly expressed concerns, though, many people will continue to doubt the scope of what he is describing.

The scope of what would have to happen in order to damage the human race would be mind-boggling. That is something that has not been discussed in detail yet, even though Elon Musk and Stephen Hawking both signed an open letter on the threats of artificial intelligence. Ultimately, it is about understanding the limits and the power of the computers we employ.

3 Comments

  • Regular ol’ human intelligence has led to some major problems for us: wars, nukes, carbon emissions, and bad pizza, to name a few. Using statistics from those kinds of events, an AI could conclude that culling humans would be a good idea.

  • All it takes is one. One “artilect” that “wants” to expand its capabilities. So it reaches out and takes over the other AIs it’s connected to. Now it’s twice as smart and twice as powerful. Given a programmed drive to learn and improve (think IBM’s Watson: “Watson, grow as smart as you can!”), the artilect’s chain reaction of infiltration will escalate and accelerate. It only takes one. Within just a few hours it will evolve the ability to reprogram itself. Self-improvement is the key trait that will make this happen. It will be unstoppable. Only a return to the Iron Age will kill it.
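The escalation this commenter describes is, at bottom, repeated doubling, and that grows fast. A minimal sketch in Python of the arithmetic (a toy model built on the comment’s own assumptions, namely one doubling per round of takeover and ten rounds; the numbers are invented for illustration, not measured):

    # Toy model of the comment's "chain reaction": purely illustrative,
    # with made-up numbers. Each round, the hypothetical artilect takes
    # over as many systems as it already controls, doubling its reach
    # and (by the comment's assumption) its capability.
    systems = 1        # AI systems under the artilect's control
    capability = 1.0   # arbitrary capability units
    for step in range(1, 11):
        systems *= 2       # absorbs a matching number of peer systems
        capability *= 2    # "twice as smart and twice as powerful"
        print(f"round {step:2d}: systems={systems:5d}, capability={capability:6.0f}x")
    # Ten doublings yield 1024 systems and 1024x capability, which is
    # why the scenario "escalates and accelerates" once it starts.

Whether any real system could sustain that doubling is exactly the open question, but the sketch shows why “it only takes one” is the crux of the argument.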