
Squeezing more out of chips to cut the cost of artificial intelligence

John Markoff | Seattle
Last Updated : Nov 01 2016 | 12:09 AM IST
Ali Farhadi holds a puny $5 computer, called a Raspberry Pi, comfortably in his palm and exults that his team of researchers has managed to squeeze into it a powerful program that can recognise thousands of objects.

Dr. Farhadi, a computer scientist at the Allen Institute for Artificial Intelligence here, calls his advance "artificial intelligence at your fingertips." The experimental program could drastically lower the cost of artificial intelligence (AI) and improve privacy, because information would never need to be shared over the internet. But the system is emblematic of something even more significant for a microelectronics industry inching closer to the physical limits of silicon semiconductors: It uses 1/32 of the memory of rival programs and operates 58 times as fast.
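The memory figure is telling: storing each weight of a neural network as a single bit rather than a 32-bit floating-point number accounts exactly for a 1/32 footprint, the trick behind the binary networks (XNOR-Net) published by Dr. Farhadi's group. Below is a minimal sketch of that binarization idea in Python; the function and variable names are illustrative, not the group's actual code.

```python
import numpy as np

def binarize(weights):
    """Approximate a real-valued weight tensor W as alpha * B, where B
    holds only the signs of the weights (one bit of information each)
    and alpha is their mean absolute value - the scaling trick used in
    binary networks. Names here are illustrative."""
    alpha = np.abs(weights).mean()                    # per-tensor scale factor
    signs = np.where(weights >= 0, 1, -1).astype(np.int8)
    return alpha, signs

# A 32-bit floating-point weight matrix ...
w = np.random.randn(256, 256).astype(np.float32)
alpha, b = binarize(w)

# ... is reduced to one bit of information per weight plus one scalar.
# (int8 is used here for clarity; a real implementation packs eight
# signs per byte and swaps multiplications for XNOR/popcount operations,
# which is where the large speedup comes from.)
x = np.random.randn(256).astype(np.float32)
exact = w @ x                 # full-precision product
approx = alpha * (b @ x)      # binary approximation of the same product
print(np.corrcoef(exact, approx)[0, 1])   # the two are strongly correlated
```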

There is a growing sense of urgency feeding this sort of research into alternative computing methods. For decades, computer designers have been able to count on cheaper and faster chips every two years. As transistors have shrunk in size at a regular pace, computing has become both more powerful and cheaper at an accelerating rate - a concept known as Moore's Law.

Two years ago, with manufacturing costs exploding and technical challenges mounting, the cost of individual transistors stopped falling. That has ended - at least temporarily - the ability of computer makers to easily build new chips that are both faster and cheaper.

But if silicon has its limits, ingenuity may not. Better algorithms and new kinds of hardware circuits could help scientists continue to make computers that can do more and at a lower cost.

"It's been a fun ride," said Thomas M Conte, an electrical engineer at the Georgia Institute of Technology. "Today you're entering this patchwork world where you are going to find a better solution for a particular problem, and that's how we're going to advance in the future."

This summer, for example, Intel acquired Nervana Systems, a small maker of specialised hardware designed to run AI programs more efficiently.

Earlier this month, researchers at Argonne National Laboratory, Rice University and the University of Illinois at Urbana-Champaign published research demonstrating a programming technique that lets an Intel microprocessor accomplish the same work with significantly less power.

The new approach is significant, according to supercomputer designers, because the high energy requirements of the fastest computers have become the most daunting challenge as scientists try to move from today's petaflop - a quadrillion computations per second - machines to exaflop computers, which could perform a quintillion computations per second.

Such computers are considered necessary to solve fundamental scientific problems, like predicting the risk that climate change poses to the future of humanity.

Because of the slowdown in Moore's Law, the arrival of exascale computing has repeatedly been pushed back. Originally expected in 2018, the next generation of machines is now projected for as late as 2023.

The Argonne paper notes that a future supercomputer capable of an exaflop will multiply energy costs by a factor of a thousand - the natural consequence of performing a thousand times as many computations per second at today's energy cost per operation.

To reduce those energy demands, the researchers demonstrated how they used a conventional Intel chip and turned off half of its circuitry devoted to what engineers call mathematical precision. Then they "reinvested" the savings to improve the quality of the computed result.

"Mathematical precision is like a knob you can turn," said Krishna V Palem, a Rice University computer scientist. "The question is what you do with the saved energy."

The researchers experimented with using the various modes of the microprocessor in a manner similar to a gearshift in a car, automatically shifting from higher to lower precision and back as needed to solve a problem.
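What that gearshift might look like in practice is suggested by a textbook mixed-precision pattern called iterative refinement - offered here as an illustration in the same spirit, not as the researchers' actual method. The bulk of the arithmetic runs in cheap 32-bit precision, and occasional 64-bit steps spend the savings on restoring accuracy.

```python
import numpy as np

def gearshift_solve(a, rhs, tol=1e-12, max_shifts=10):
    """Illustrative mixed-precision iterative refinement: solve a
    linear system mostly in cheap float32, then 'reinvest' the savings
    in brief float64 steps that recover full accuracy. A standard
    numerical pattern, not the Argonne group's code."""
    a32 = a.astype(np.float32)                         # low-precision copy
    x = np.linalg.solve(a32, rhs.astype(np.float32)).astype(np.float64)
    for _ in range(max_shifts):
        r = rhs - a @ x                                # residual in high precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(rhs):
            break                                      # accurate enough: stop shifting
        d = np.linalg.solve(a32, r.astype(np.float32)) # cheap correction solve
        x += d.astype(np.float64)
    return x

a = np.random.randn(200, 200) + 200 * np.eye(200)      # well-conditioned system
rhs = np.random.randn(200)
x = gearshift_solve(a, rhs)
print(np.linalg.norm(a @ x - rhs))                     # near double-precision accuracy
```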

"There is a lot to be done by thinking more carefully on how you can save energy," said Marc Snir, a veteran supercomputer designer and University of Chicago computer scientist.

The Argonne researchers are exploring ideas put forward by Dr. Palem, who in 2003 first proposed an idea he described as "inexact computing."

He suggested trading off precision to make dramatic gains in computing efficiency. Originally, he explored the idea of inexactness as a way to make use of imperfect chips where portions of the transistors were not working because of manufacturing flaws.

More recently, he has turned to using his ideas to gain significant energy savings from today's common processors.

Dr. Palem said that the group was planning to extend the Argonne research to more efficiently run mathematical models that relate to climate change.
©2016 The New York Times News Service

First Published: Nov 01 2016 | 12:05 AM IST
