
Towards 'singularity'

AlphaGo Zero's performance can be put to great use

Business Standard Editorial Comment
Last Updated : Oct 24 2017 | 11:08 PM IST
Artificial intelligence (AI) experts theorise about the "singularity", a point in time when computers will surpass their creators in general intelligence (whatever that is) and in broad-based problem-solving ability. Opinions differ on whether the singularity is around the corner, centuries away or attainable at all. Whatever the timeline, it has certainly edged closer. One key component of intelligence is auto-didacticism: the ability to teach oneself a skill. AI appears to be developing that attribute. Last year the game of Go, long considered the final frontier of computational complexity among board games, was conquered by AI. A program called AlphaGo, developed by DeepMind, a British subsidiary of Alphabet, convincingly beat two human Go world champions in succession. Last week a new version of the program thrashed its predecessor 89 games to 11.

Go, which has been played in China and Japan for millennia, has very simple rules. Two players alternately place black and white counters (called "stones") on a 19x19 grid, each seeking to surround territory and capture the other's stones. The number of possible variations is astronomical: 361 possible first moves, 360 possible replies, and so on. The total is many orders of magnitude larger than the number of particles in the observable universe, so large that the deep brute-force search that works so well in chess is hopeless here. Humans and computers alike have to "sample" possible moves, using rules of thumb and known patterns to extrapolate what the good moves might be. The early version of AlphaGo was taught to play by the normal method of supervised learning: it was fed the rules and tens of millions of positions from games played by expert humans, and thus learnt how strong players think. It ran on 48 specialised chips spread across many servers. The new version, AlphaGo Zero, runs on only four such chips and eschews human input altogether. It was given just the basic rules and set to play Go against itself, combining millions of self-play games with probabilistic analysis of the results to learn what worked well and what didn't. Within two days it was playing at human world-champion level. By Day 40 it was good enough to beat the earlier version comprehensively. By then it had played 4.9 million games against itself and discovered strategies and patterns that humans had never found.
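The arithmetic behind that comparison is easy to verify. The short Python sketch below is purely illustrative: it estimates the order of magnitude of 361 x 360 x ... x 1, i.e. 361 factorial, a rough upper bound on the number of distinct move sequences, and sets it against the roughly 10^80 particles in the observable universe.

```python
import math

# Rough upper bound on distinct Go move sequences: 361 x 360 x ... x 1 = 361!
# math.lgamma(n + 1) returns ln(n!), so dividing by ln(10) gives log10(361!).
log10_sequences = math.lgamma(362) / math.log(10)

print(f"361! is roughly 10^{log10_sequences:.0f}")     # prints ~10^768
print("particles in the observable universe: ~10^80")  # standard estimate
```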

This method of auto-didacticism is called reinforcement learning, and it has obvious potential across multiple disciplines. The same or similar algorithms could be used to attack problems that humans don't know how to solve. If reinforcement learning can work with a game as complex as Go, it could work with something like protein folding, which is similar in that it has (fairly) simple rules and a monstrous number of variations. It could also lead to much quicker and more accurate development of face-recognition, cancer-screening and biometric-matching programs, which have so far depended on a very data-intensive supervised learning process in which humans must feed in thousands of labelled images and tell the program which ones are matches and why. The philosophical implications of a computer that can teach itself to solve problems beyond the domain of human competence are both exhilarating and disturbing. Will such hyper-intelligent machines remain technological tools, or will they eventually supersede Homo sapiens as the dominant species?
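For readers who want to see self-teaching in miniature, here is a toy sketch of the same pattern applied to tic-tac-toe rather than Go. It is a minimal illustration under simplifying assumptions (a tabular Monte Carlo learner with made-up parameter values), not DeepMind's algorithm, which uses deep neural networks and tree search at vastly greater scale; but, as the editorial describes, the program is given only the rules, a way to play itself, and a statistical rule for reinforcing moves that led to wins.

```python
import random
from collections import defaultdict

# Illustrative self-play learner for tic-tac-toe: given only the rules,
# it improves by playing against itself. Not DeepMind's method; AlphaGo
# Zero replaces this lookup table with a deep network and tree search.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]          # 1 or 2: the winning player
    return 0                         # no winner (yet)

Q = defaultdict(float)               # value of (state, move) for the mover
EPSILON, ALPHA = 0.1, 0.1            # exploration rate, learning rate

def choose(board):
    moves = [i for i in range(9) if board[i] == 0]
    if random.random() < EPSILON:    # occasionally explore a random move
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])  # otherwise play greedily

def self_play_game():
    board, player, history = (0,) * 9, 1, []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + (player,) + board[move + 1:]
        w = winner(board)
        if w or 0 not in board:      # game over: a win or a draw
            for state, m, p in history:   # Monte Carlo value update
                reward = 0 if w == 0 else (1 if p == w else -1)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = 3 - player          # the other player moves next

for _ in range(50_000):              # learn from nothing but self-play
    self_play_game()

best_first = max(range(9), key=lambda m: Q[((0,) * 9, m)])
print(f"learned opening move: square {best_first}")  # usually the centre, 4
```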

