(News Article): Scientists Warn AI Super-Intelligence on Verge of ‘Destroying Civilization’

February 26, 2020
12:00 pm
Richard Daystrom PhD
Livermore, CA.
Member Since: December 19, 2018
DAILY STAR

Scientists Warn AI Super-Intelligence on Verge of ‘Destroying Civilization’

By David Rivers, Senior Reporter

10:50, 26 Feb 2020, updated 14:35, 26 Feb 2020

Professor Oren Etzioni, CEO of the Allen Institute for AI, has set out a series of warning signs that would alert us that "super-intelligence" is around the corner.

Humans must be ready for signs of robotic super-intelligence but should have enough time to address them, a top computer scientist has warned. Oren Etzioni, CEO of the Allen Institute for AI, penned a recent paper titled "How to know if artificial intelligence is about to destroy civilization". He wrote: "Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences?

"Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent super-intelligence is an existential risk for humanity. "But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that super-intelligence is indeed around the corner?"

He likened the warning signs to canaries in coal mines: miners used the birds to detect carbon monoxide because a canary would collapse at concentrations not yet dangerous to people. Prof Etzioni argued that these warning signs appear when AI programs develop a new capability.

He continued, writing in MIT Technology Review: "Could the famous Turing test serve as a canary? The test, invented by Alan Turing in 1950, posits that human-level AI will be achieved when a person can’t distinguish conversing with a human from conversing with a computer.

"It’s an important test, but it’s not a canary; it is, rather, the sign that human-level AI has already arrived. "Many computer scientists believe that if that moment does arrive, superintelligence will quickly follow. We need more intermediate milestones."

But he did warn that the "automatic formulation of learning problems" would be the first canary, followed by self-driving cars. He encouraged "limited self-driving cars", but said they would only become a canary once "human-level driving" is achieved, because driving requires "real-time decisions based on the unpredictable physical world and interaction with human drivers".

Prof Etzioni then named AI doctors as the third canary, since a true AI doctor would have to understand people, language and medicine the way a human doctor does. And finally, he named the potential ability of AI to understand "people and their motivations" as a fourth canary.

He added: "I said to Alexa 'my trophy doesn’t fit into my carry-on because it is too large. What should I do?' Alexa’s answer was 'I don’t know that one'.

"Since Alexa can’t reason about sizes of objects, it can’t decide whether 'it' refers to the trophy or to the carry-on. When AI can’t understand the meaning of 'it', it’s hard to believe it is poised to take over the world.

"If Alexa were able to have a substantive dialogue on a rich topic, that would be a fourth canary." Luckily, he believes his list demonstrates how far we are away from super-intelligence, and that we will have a comfortable amount of time to deploy "off-switches".

 


W. O. Belfield, Jr.

March 3, 2020
5:13 pm
Straight Zeke the Geek
Member Since: December 3, 2019

I can appreciate the attitude of the author here; personally, though, I don't think he's terrified enough. We're doomed, and that ain't a joke. "Time to deploy off switches." Yeah, good luck with that. I fully expect that humanity will be completely leveled by a self-replicating AI weapon that some evil scientist builds. Inevitable.

Aliens, traitors and mortal enemies would oppose me and attack me but not my loved ones, my family, my countrymen or my allies.

May 11, 2020
7:22 pm
Guy
Member Since: May 3, 2020

Thanks for posting this article, which would normally be interesting to me, BUT...

This is a very unenlightened CEO, a stupid article, and an extremely inept institute. I had dealings with that institute before and I won't have anything to do with them again. Everything was about as backwards as it could be, from the secretaries, to the misspelled instructions, to the false assumptions, to the selection process, to the criteria, to the contact methods, and more. Paul G. Allen ultimately deserved to die of cancer because he set up an institute so foolish and so inept that it could never recognize a good idea for artificial general intelligence (AGI) even if one came its way. That meant no promising AGI discovery would ever be supported by his institute in time to cure his cancer, which was why he created the institute in the first place.

This article is a great example of the consistently bad quality the institute supports and publishes. I don't even know where to begin my criticisms of it. Self-driving cars are a ridiculous attempt with the wrong architecture (digital computers), so they should not even be mentioned. The Turing test is so outdated and so naively ridiculous that it shouldn't be mentioned either. The AI doctors comment suggests the CEO has no concept of the difference between human reasoning and shallow empirical reasoning. The CEO seems completely ignorant of how to test intelligence in a machine, and obviously didn't do any serious research on that topic or even think about it to any great extent. What is he doing heading a famous institute when he clearly doesn't understand the field in which he works? Jeez, no wonder his institute is so backward throughout.

(p. 79) Perhaps the absurdity of trying to make computers that can "think" is best demonstrated by reviewing a series of attempts to do just that--by aiming explicitly to pass Turing's test. In 1991, a New Jersey businessman named Hugh Loebner founded and subsidized an annual competition, the Loebner Prize Competition in Artificial Intelligence, to identify and reward the computer program that best approximates artificial intelligence [AI] as Turing defined it. The first few Competitions were held in Boston under the auspices of the Cambridge Center for Behavioral Studies; since then they have been held in a variety of academic and semi-academic locations. But only the first, held in 1991, was well documented and widely reported on in the press, making that inaugural event our best case study.

Practical Problems

The officials presiding over the competition had to settle a number of details ignored in Turing's paper, such as how often the judges must guess that a computer is human before we accept their results as significant, and how long a judge may interact with a hidden entity before he has to decide. For the original competition, the host center settled such questions with arbitrary decisions--including the number of judges, the method of selecting them, and the instructions they were given.

Beyond these practical concerns, there are deeper questions about how to interpret the range of possible outcomes: What conclusions are we justified in reaching if the judges are generally successful in identifying humans as humans and (p. 80) computers as computers? Is there some point at which we may conclude that Turing was wrong, or do we simply keep trying until the results support his thesis? And what if judges mistake humans for computers--the very opposite of what Turing expected? (This last possibility is not merely hypothetical; three competition judges made this mistake, as discussed below.)

Halpern, Mark. 2011. "The Turing Test Cannot Prove Artificial Intelligence." In Artificial Intelligence, ed. Noah Berlatsky. Farmington Hills, MI: Greenhaven Press.
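
A side note on the excerpt's first "practical problem", how often the judges must guess that a computer is human before we accept the result as significant: that is an ordinary one-sided binomial test. A minimal sketch (my framing, not Halpern's), assuming the null hypothesis is a judge deciding at chance, p = 0.5:

from math import comb

def binomial_p_value(successes, trials, p=0.5):
    # One-sided P(X >= successes) for X ~ Binomial(trials, p):
    # the chance of at least this many "that one is human" votes
    # for the computer if every judge were merely guessing.
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# e.g. 8 of 10 judges mistaking the computer for a human:
print(binomial_p_value(8, 10))  # about 0.055, borderline at the 5% level

On numbers that small, even a seemingly lopsided vote barely clears the usual significance bar, which is one reason the competition's arbitrary choice of how many judges to use mattered.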
