We all know Stephen Hawking is a genius, and what he says carries real weight in the world we live in.
In a recent interview with Wired, Hawking claimed,
“I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”
Listen, we’ve been telling you that AI is no joke. Did you get a chance to peep Sophia becoming the first robot citizen? Sophia receiving Saudi citizenship was just the first step into a void of “no return.”
It’s inevitable that AI robots will take over. Eventually, AI will be able to learn on its own, improving itself way faster than a human brain ever could.
Hawking has allies here, including Elon Musk, who thinks that implementing AI into human society is dangerous AF. Back in 2014, he went off in a speech at M.I.T.
Regarding AI being humanity’s “biggest existential threat,” Musk said,
“With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”
One of humanity’s greatest fears is not being in control. AI is moving in that direction. Geniuses like Hawking and Musk are terrified that we might create something destructive by accident.
Musk actually discussed his fear of an accidental AI mishap with Ashlee Vance, author of his biography, Elon Musk. His concerns about his homie Larry Page, co-founder of Google, scared the caca out of me.
Musk worries that Page, even with good intentions, could still fuck up mankind by building “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
I hope our top global thinkers know what they’re doing and take AI safety seriously. I really don’t want to have to shoot the hands with a metal robot. Especially one that learns faster than me.