
Artificial intelligence still isn't a game changer

Machines can beat humans at some things but they remain one-trick ponies

Leonid Bershidsky | Bloomberg
Last Updated : Dec 05 2017 | 10:38 PM IST
Not much time passes these days between so-called major advancements in artificial intelligence. Yet researchers are not much closer than they were decades ago to the big goal: actually replicating human intelligence. That's the most surprising revelation from a team of eminent scholars who just released the first in what is meant to be a series of annual reports on the state of AI.
 
The report is a great opportunity to finally recognise that the methods we now know as AI and deep learning do not qualify as "intelligent". They are based on the "brute force" of computers and limited by the quantity and quality of available training data. Many experts agree.
 
The steering committee of "AI Index, November 2017" includes Stanford's Yoav Shoham and Massachusetts Institute of Technology's Erik Brynjolfsson, an eloquent writer who did much to promote the modern-day orthodoxy that machines will soon displace people in many professions. The team behind the effort tracked the activity around AI in recent years and found thousands of published papers (18,664 in 2016), hundreds of venture capital-backed firms (743 in July 2017) and tens of thousands of job postings. It's a vibrant academic field and an equally dynamic market (the number of US start-ups in it has increased by a factor of 14 since 2000).
 
All this concentrated effort cannot help but produce results. According to the AI Index, the best systems surpassed human performance in image detection in 2014 and are on their way to 100 per cent results. Error rates in labelling images (“this is a dog with a tennis ball”) have fallen to less than 2.5 per cent from 28.5 per cent in 2010. Machines have matched humans when it comes to recognising speech in a telephone conversation and are getting close to parsing the structure of sentences, finding answers to questions within a document and translating news stories from German into English. They have also learned to beat humans at poker and Pac-Man. But the authors of the index wrote: “Tasks for AI systems are often framed in narrow contexts for the sake of making progress on a specific problem or application. While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if the task is modified even slightly. For example, a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast, very different AI systems would be needed for each of these tasks.”
 
The AI systems are such one-trick ponies because they're designed to be trained on huge datasets tailored to a specific task. It could be argued that they still exist within philosopher John Searle's "Chinese Room". In that thought experiment, Searle, who doesn't speak Chinese, is alone in a room with a set of instructions, in English, on correlating sets of Chinese characters with other sets of Chinese characters. Chinese speakers slide notes in Chinese under the door, and Searle pushes his own notes back, following the instructions. The speakers can be fooled into thinking his replies are intelligent, but that's not really the case. Searle devised the "Chinese Room" argument, to which there have been dozens of replies and attempted rebuttals, in 1980. But modern AI still works in a way that fits his description.
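Searle's setup can be caricatured in a few lines of code: a lookup table maps incoming notes to outgoing notes, with no model of meaning anywhere in between. The rulebook entries below are hypothetical placeholders, and real systems learn statistical rules rather than hand-written ones, but the structural point is the same:

```python
# A toy "Chinese Room": the program follows substitution rules it does not
# "understand". These entries stand in for Searle's English instructions.
RULEBOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine"
    "你会说中文吗": "会",    # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(note: str) -> str:
    """Look the incoming symbols up and push back whatever the rules dictate.

    No meaning is attached to either side of the exchange; swapping the
    rulebook for a different task requires a whole new rulebook.
    """
    return RULEBOOK.get(note, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # 我很好
```

From the outside, the replies look competent; inside, there is only pattern matching, which is the sense in which the report's "narrow contexts" complaint echoes Searle.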
 
Machine translation is one example. Google Translate, which has improved drastically since it started using neural networks, trains those networks on billions of lines of parallel text in different languages, translated by humans. Where lots of these lines exist, Google Translate does OK, about 80 per cent as well as an expert human. Where the data are lacking, it produces hilarious results. I like putting in Russian text and telling Google Translate it's Hmong. The results, in English or Russian, will often be surprising, like the pronouncements found inside fortune cookies.
 
I doubt this is accidental. There are probably not many legitimate calls for translations from Hmong, so idle tricksters must have helped train Google’s translation machine to produce various kinds of exquisite nonsense.
© 2017 Bloomberg