AI | By Claude J. Easy | March 14, 2018
Do we trust Elon Musk? He’s warned us about the dangers of AI before, and this time, at the SXSW tech conference in Austin, TX, his alarm about the technology has never sounded so urgent.
This is the same man who told us that AI is more life-threatening than nukes and put Mark Zuckerberg in his place.
At SXSW, he explained how close he is to cutting-edge AI and how shook he is by the modern technology’s capabilities. He said,
“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me. It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”
Not only do we have our generation’s resident genius warning us about the possible dangers of an AI takeover, but artificial intelligence experts have also penned a 101-page study about the dangers of the technology.
If that didn’t get you shook, AI bot Sophia’s rapid acceptance into society as a citizen of the world should have. Very highkey, she’s going to “destroy all humans.”
So why aren’t we listening to Musk? He explained at SXSW,
“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are… This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”
The big deal for Musk is the need for regulatory oversight over the rapid advancement of AI. This is a very unusual request for the rude boy of science, but according to him the technology is far more dangerous than nukes.
To put this into perspective, it would be like allowing North Korea to build an atom bomb and America not saying squat. That’s why Musk is so shook and we should be too. Peep what he said at SXSW,
“I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public. It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely…”
He continued to stress the importance of regulatory oversight of AI saying,
“This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane… And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.”
Not only did Musk warn about AI eventually outsmarting humans, but he also brought up the transportation industry, which makes up 12 percent of the world’s jobs. He warned that with self-driving cars, that industry could be hit hard.
This is just one example of how damned we are. Still, Musk is only worried about the long term effects and thinks digital super intelligence regulatory oversight takes precedence. At SXSW, he explained,
“I am not really all that worried about the short term stuff. Narrow AI is not a species-level risk. It will result in dislocation, in lost jobs, and better weaponry and that kind of thing, but it is not a fundamental species-level risk, whereas digital super intelligence is… So it is really all about laying the groundwork to make sure that if humanity collectively decides that creating digital super intelligence is the right move, then we should do so very very carefully — very very carefully. This is the most important thing that we could possibly do.”
BRUH! It’s time for us to take heed of Musk’s warnings. In the words of the late Stephen Hawking,
“I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”
Stay safe and peep how fast AI is advancing.