
Artificial intelligence experts pen 101-page study about the dangers of AI

“We need to be super careful with AI. Potentially more dangerous than nukes,” Elon Musk tweeted four years ago.

It’s an artificial intelligence takeover, and if you haven’t caught wind of AI in the news recently, you’re asleep. AI is infiltrating our daily lives from every direction and advancing faster than predicted.

If we’re not careful, the power of superhuman artificial intelligence could fall into the wrong hands. It sounds like something out of a sci-fi horror flick, but it’s very real.

In fact, that’s the conclusion 26 leading AI researchers have come to in a 101-page report titled “The Malicious Use of Artificial Intelligence.”

The researchers, who came together at a conference in Oxford last February, have now formulated four high-level recommendations to combat the changing threat landscape.

One of those recommendations:

“Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.”

Advances in AI could leave automated systems vulnerable, and hackers could exploit those weak points to inflict physical and political harm on the public.

According to the Yale, Oxford, Cambridge, and OpenAI researchers, attackers who take advantage of AI could target things like self-driving cars and drones.

How? A self-driving car could misread something as simple as a stop sign, causing an accident; researchers have already shown that a few well-placed stickers can trick a vision system into seeing a stop sign as a different sign entirely (a sketch of the underlying trick follows the quote below). Drones, on the other hand, if controlled by a single AI system, could be hacked and used as weapons. The report warned,

“If multiple robots are controlled by a single AI system run on a centralized server, or if multiple robots are controlled by identical AI systems and presented with the same stimuli, then a single attack could also produce simultaneous failures on an otherwise implausible scale. A worst-case scenario in this category might be an attack on a server used to direct autonomous weapon systems, which could lead to large-scale friendly fire or civilian targeting…”
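
That stop-sign scenario rests on what researchers call “adversarial examples”: tiny, deliberately chosen pixel changes that flip a classifier’s answer while looking unchanged to a human eye. Here’s a minimal sketch of the classic fast gradient sign method (FGSM) on a toy, untrained PyTorch model; the model, the class labels, and the epsilon budget are illustrative stand-ins, not anything taken from the report.

```python
# Minimal adversarial-example sketch (FGSM). Toy untrained model purely
# for illustration; the labels and epsilon are hypothetical choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier: a 3x32x32 image -> 2 classes ("stop sign", "speed limit")
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for a photo
true_label = torch.tensor([0])                        # class 0 = "stop sign"

# FGSM: nudge every pixel in the direction that increases the loss.
loss = loss_fn(model(image), true_label)
loss.backward()
epsilon = 0.05  # small enough that the change is invisible to a person
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

On a toy untrained model the label flip isn’t guaranteed on every run, but this is the same mechanism real attacks scale up against road-sign classifiers, sometimes with nothing more than stickers on the physical sign.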

That’s not all we have to worry about. AI is predicted to become so advanced that highly realistic fake videos of state leaders could be fabricated.

This is made possible by advances in image and audio processing. Just imagine a fake video of Trump declaring war on North Korea. It would look so real that even Kim Jong-un might believe it.


Talk of “deepfakes” was just in the news, after fake pornographic videos surfaced in which people’s faces had been superimposed onto adult film actors’ bodies.

The report also warned about more targeted propaganda. Facebook already has AI that can pick up on statuses posted by people looking to harm others or themselves.

Now imagine what a bad actor could do with technology that can profile people’s behaviors, moods, and beliefs at scale. The report spelled it out:

“We also expect novel attacks that take advantage of an improved capacity to analyze human behaviors, moods, and beliefs on the basis of available data.”
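
To make that concrete, here’s a hedged sketch of the kind of capability involved: a tiny classifier that guesses a poster’s mood from a status update. The posts, labels, and scikit-learn pipeline below are invented for illustration; real systems train on vastly more data, and nothing here is code from Facebook or the report.

```python
# Toy mood classifier over short posts. All data and labels are made up
# to illustrate the general technique, not any real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "had a great day with friends",
    "everything is going so well lately",
    "i feel completely alone and hopeless",
    "nothing matters anymore",
]
moods = ["positive", "positive", "distressed", "distressed"]

# Bag-of-words features + logistic regression: the simplest version of
# "analyzing moods on the basis of available data."
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, moods)

print(clf.predict(["i can't take this anymore"]))  # likely ["distressed"]
```

The unsettling part isn’t this toy; it’s the same idea pointed at millions of profiles to decide who gets which piece of propaganda.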

Another development worth noting: Elon Musk just took a backseat at OpenAI, the research lab he co-founded to make AI technology safer. We’ve always known Musk to be ahead of the curve. Hopefully he will stay influential as he continues to donate to and advise the organization.

OG Stephen Hawking has also warned us that AI could outpace our slow, biological human intelligence and take over. Listen, this is no game.

In the end, all we can do is try to understand this technology as deeply as possible. Those in power need to heed the warnings and recognize that, in the wrong hands, AI can be more destructive than innovative. The report concluded,

“Though the specific risks of malicious use across the digital, physical, and political domains are myriad, we believe that understanding the commonalities across this landscape, including the role of AI in enabling larger-scale and more numerous attacks, is helpful in illuminating the world ahead and informing better prevention and mitigation efforts. We urge readers to consider ways in which they might be able to advance the collective understanding of the AI-security nexus, and to join the dialogue about ensuring that the rapid development of AI proceeds not just safely and fairly but also securely.”

The malicious use of AI is a conversation we must have. Stay woke, sheeple!