Everybody is talking about the defeat of the reigning world GO champion Lee Se-Dol by the program AlphaGo - the score stands at 1-3 in the best-of-five contest. GO is an ancient Chinese board game (458 BC) which is far more complex than Chess. In 1997, the supercomputer Deep Blue cleared the chess hurdle by defeating the then world champion Garry Kasparov. Then, in 2011, IBM's supercomputer Watson defeated the Jeopardy world champions.
Chess, and to some extent Jeopardy, may be played by a computer with enough crunching power - a brute-force approach - and these computer victories suggest that artificial narrow intelligence (ANI), task-specific intelligence, has been achieved by computers like Watson.
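To make the brute-force idea concrete, here is a minimal sketch of the kind of exhaustive game-tree search such programs build on - plain minimax with alpha-beta pruning. The Position class and its methods are hypothetical placeholders for illustration, not Deep Blue's actual interfaces.

# Minimal sketch of brute-force game-tree search (minimax with alpha-beta
# pruning). The Position object and its is_terminal()/evaluate()/legal_moves()/
# play() methods are hypothetical placeholders, not any real engine's API.

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Return the best achievable score from `position`, searching `depth` plies."""
    if depth == 0 or position.is_terminal():
        return position.evaluate()              # static evaluation of the board
    if maximizing:
        best = float("-inf")
        for move in position.legal_moves():
            best = max(best, alphabeta(position.play(move), depth - 1,
                                       alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:                   # prune: opponent avoids this line
                break
        return best
    else:
        best = float("inf")
        for move in position.legal_moves():
            best = min(best, alphabeta(position.play(move), depth - 1,
                                       alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

The difficulty for GO, discussed below, is that the number of legal moves at each step is so large that this style of search alone cannot get very deep.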
Lee is one of the best GO players ever, and his defeat at the hands of AlphaGo is particularly remarkable. Playing GO at the highest level requires an ability to deal with abstract strategic concepts and an intuitive, human-like style of play - something the experts had predicted was perhaps ten years away.
Lee remarked: "I don't know what to say ... I kind of felt powerless."
AlphaGo's creator Demis Hassabis (CEO of Google DeepMind) said: "To be honest, we are a bit stunned and speechless."
The next two slides describe the challenges that computers face in playing GO by brute-force crunching power alone. (The slides may be skipped without breaking the continuity of the text.)
The victory of AlphaGo points to the first steps that are necessary in the development of Artificial General Intelligence (AGI). Sometimes referred to as Strong AI or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board - a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI. Professor Linda Gottfredson describes human intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI, when properly developed, would be able to do all of those things as easily as you can.
Consider the sheer complexity of a game of GO relative to Chess.
A game of chess can go 10¹²³ different ways. The number of atoms in the universe is less than 10⁸¹. A game of Go can go 10⁷⁰⁰ ways - a number so large that it is difficult to imagine what it means. That a computer can master such a complicated game gives us some idea of the pace at which AI is improving.
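These counts come from the rough game-tree estimate of branching factor raised to game length. A back-of-the-envelope sketch in Python - the branching factors and game lengths below are commonly quoted approximations, not exact values, and the Go exponent is very sensitive to the assumed game length:

import math

# Back-of-the-envelope game-tree sizes: branching_factor ** game_length.
# The figures below are rough, commonly quoted approximations.
def tree_size_exponent(branching_factor, game_length):
    """Return x such that the game tree has roughly 10**x possible games."""
    return game_length * math.log10(branching_factor)

print(f"chess ~ 10^{tree_size_exponent(35, 80):.0f}")    # ~10^124
print(f"go    ~ 10^{tree_size_exponent(250, 150):.0f}")  # ~10^360
print(f"go    ~ 10^{tree_size_exponent(250, 300):.0f}")  # ~10^719 for long games

With about 35 legal moves per turn over roughly 80 moves, chess lands near 10¹²³; Go's roughly 250 legal moves per turn push the exponent into the hundreds, and for long games towards the 10⁷⁰⁰ figure quoted above.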
Google DeepMind, AlphaGo's creator, has developed a system called the Deep Q-Network (DQN) that can quickly master a range of Atari 2600 games - Space Invaders, Breakout, etc. - based on nothing more than raw pixels and game scores. DQN differs from Deep Blue or AlphaGo in one crucial way: it is not limited to a single task. The games it has mastered have diverse rules, but DQN can solve them all with just one algorithm.
DeepMind says that its work is:
“... the first demonstration of a general-purpose agent that is able to continually adapt its behaviour without any human intervention, a major technical step forward in the quest for general AI.”
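To give a flavour of how such an agent works - this is a simplified sketch under my own assumptions, not DeepMind's actual code - the heart of DQN is a network that maps an observation to an estimated value for each action, trained on replayed experience towards the Q-learning target of reward plus discounted best future value. The toy environment and network sizes below are invented purely for illustration; the real DQN learns from raw Atari pixels with a convolutional network and a separate target network.

import random
import torch
import torch.nn as nn

# Minimal, illustrative DQN sketch (not DeepMind's implementation). The
# "environment" is a made-up toy: the state is a 4-dim vector, the agent picks
# one of 2 actions, and the reward depends on matching the sign of state[0].

class ToyEnv:
    def reset(self):
        self.state = torch.randn(4)
        return self.state
    def step(self, action):
        reward = 1.0 if (self.state[0] > 0) == (action == 1) else -1.0
        self.state = torch.randn(4)
        return self.state, reward

# Q-network: maps a state to one estimated value per action.
q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

env, replay, gamma, epsilon = ToyEnv(), [], 0.9, 0.1
state = env.reset()

for step in range(2000):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        with torch.no_grad():
            action = q_net(state).argmax().item()

    next_state, reward = env.step(action)
    replay.append((state, action, reward, next_state))   # store experience
    state = next_state

    if len(replay) >= 32:
        # Sample a minibatch of past transitions and regress towards the
        # Q-learning target: reward + gamma * max_a' Q(next_state, a').
        batch = random.sample(replay, 32)
        states = torch.stack([b[0] for b in batch])
        actions = torch.tensor([b[1] for b in batch])
        rewards = torch.tensor([b[2] for b in batch])
        next_states = torch.stack([b[3] for b in batch])

        q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            targets = rewards + gamma * q_net(next_states).max(dim=1).values
        loss = nn.functional.mse_loss(q_values, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The point of the quote above is that nothing in this loop is specific to one game: swap in a different environment and the same learning rule still applies.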
DeepMind's chief executive, Demis Hassabis, said that its AlphaGo software followed a three-stage process, which began with making it analyse 30 million moves from games played by humans. "It starts off by looking at professional games. It learns what patterns generally occur - what sort are good and what sort are bad. If you like, that's the part of the program that learns the intuitive part of GO. It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes."
Tested against rival Go-playing AIs, Google's system won 499 out of 500 matches.
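As a rough illustration of the self-play stage described above - a sketch under my own simplifying assumptions, not AlphaGo's actual pipeline, which combines policy and value networks with Monte Carlo tree search - the core idea is to have the current network play against itself and nudge it towards the moves that ended up on the winning side. The toy game below is invented for illustration:

import torch
import torch.nn as nn

# Toy self-play reinforcement-learning loop (illustrative only). The "game" is
# made up: two players alternate picking a number 0-2 for 4 turns each, and
# whoever picks the larger total wins. The policy should learn to pick high.

policy = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

def play_game():
    """Play one self-play game; return per-player log-probs and the winner."""
    totals = [0, 0]
    log_probs = [[], []]
    for turn in range(8):
        player = turn % 2
        # State: whose turn it is, own total, opponent total (scaled down).
        state = torch.tensor([float(player), totals[player] / 8.0,
                              totals[1 - player] / 8.0])
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs[player].append(dist.log_prob(action))
        totals[player] += action.item()
    if totals[0] == totals[1]:
        return log_probs, None                 # draw: no learning signal
    return log_probs, 0 if totals[0] > totals[1] else 1

for game in range(500):
    log_probs, winner = play_game()
    if winner is None:
        continue
    # REINFORCE-style update: reinforce the winner's moves and discourage
    # the loser's moves, using the game outcome as the only reward.
    loss = -torch.stack(log_probs[winner]).sum() \
           + torch.stack(log_probs[1 - winner]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Each iteration the network plays a slightly stronger copy of itself, which is the "incrementally better" loop Hassabis describes.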
Jason Millar, writing in the Observer, states that deep learning represents a paradigm shift in the relationship humans have with their technological creations. AI can display genuinely surprising and unpredictable behaviour (not expected from brute-crunching computers). Lee described being stunned by an unconventional move that he claimed no human would ever have made. The creators of AlphaGo were themselves very pleased with some of the surprising and beautiful moves that AlphaGo played.
Millar writes: "Possessing a more intuitive approach to problem-solving allows artificial intelligence to succeed in highly complex environments. AI is also increasingly able to manage complex, data-intensive tasks such as detecting cyber security threats, high-frequency stock trading, etc. Embodied as robots, deep-learning AI is poised to begin to move and work among humans - in the form of service, transportation, medical and military robots."
The speed of progress in AI is only going to accelerate - AlphaGo is excellent proof of that. The question is whether human society is capable of understanding and acting quickly enough to adjust to the new paradigm. We move far too slowly - the complexity of human civilisation dictates that - and this could be our undoing. I have no suggestions for how humans can manage the fantastic rate at which AI is developing. Some suggest that we should stop these developments; that, in my view, is a fool's delusion. The march of technology cannot be stopped - it is best to be flexible and try to change the social, legal, financial and military systems, and whatever else defines human society, to meet the new challenges.
Post Script: There’s a cute irony to DeepMind. The highly
intelligent developers of the technology are doing their best to make
themselves obsolete. It might not happen in their lifetime, but DeepMind and
companies like it will, sooner or later, make human intelligence
irrelevant. Hassabis says, “It’s quite possible there are unique things
about humans. But, in terms of intelligence, it doesn’t seem likely.”