Scientists Warn AI Super-Intelligence on Verge of ‘Destroying Civilization’
By David Rivers, Senior Reporter
10:50, 26 Feb 2020 | Updated 14:35, 26 Feb 2020
Professor Oren Etzioni, CEO of the Allen Institute for AI, has outlined a series of warning signs that would alert us that "super-intelligence" is around the corner.
Humans must be ready for signs of robotic super-intelligence but should have enough time to address them, a top computer scientist has warned. Oren Etzioni, CEO of the Allen Institute for AI, penned a recent paper titled "How to know if artificial intelligence is about to destroy civilization". He wrote: "Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences?
"Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent super-intelligence is an existential risk for humanity. "But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that super-intelligence is indeed around the corner?"
He likened these warning signs to canaries in coal mines, which were used to detect carbon monoxide because they would collapse before the gas reached dangerous levels for miners. Prof Etzioni argued that such warning signs appear when AI programs develop a new capability.
He continued, writing for MIT Technology Review: "Could the famous Turing test serve as a canary? The test, invented by Alan Turing in 1950, posits that human-level AI will be achieved when a person can’t distinguish conversing with a human from conversing with a computer.
"It’s an important test, but it’s not a canary; it is, rather, the sign that human-level AI has already arrived. "Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones."
He warned that the "automatic formulation of learning problems" would be the first canary, followed by self-driving cars. He welcomed "limited self-driving cars", but said they would become a canary once "human-level driving" is achieved, because driving requires "real-time decisions based on the unpredictable physical world and interaction with human drivers".
Prof Etzioni then cited AI doctors as the third canary, because a human-level AI doctor would need to understand people, language and medicine as a human doctor does. And finally, he named AI's potential ability to understand "people and their motivations" as a fourth canary.
He added: "I said to Alexa, 'My trophy doesn’t fit into my carry-on because it is too large. What should I do?' Alexa’s answer was 'I don’t know that one'.
"Since Alexa can’t reason about sizes of objects, it can’t decide whether 'it' refers to the trophy or to the carry-on. When AI can’t understand the meaning of 'it', it’s hard to believe it is poised to take over the world.
"If Alexa were able to have a substantive dialogue on a rich topic, that would be a fourth canary." Luckily, he believes his list demonstrates how far away we are from super-intelligence, and that we will have a comfortable amount of time to deploy "off-switches".
W. O. Belfield, Jr.
December 3, 2019
I can appreciate the attitude of the author here, personally though I don't think he's terrified enough. We're doomed and that ain't a joke. "Time to deploy off switches". Yeah, good luck with that. I fully expect that humanity will be completely leveled to nothing with a self-replicating AI weapon that some evil scientist builds. Inevitable.