Devangshu Datta: At the dawn of the AI age

As AIs surpass humans and continuously improve themselves, the gap in functional intelligence will widen, leaving humans ever further behind

Devangshu Datta | New Delhi
Last Updated: Nov 27 2015 | 10:01 PM IST
What happens to society when computers - artificial intelligences - become more intelligent than their creators? The question is a staple of science fiction. Science fiction writers and Artificial Intelligence (AI) researchers - there is a lot of overlap between the two groups - have been speculating about it for decades. The moment is referred to as "The Singularity", and there is even an annual Singularity Conference.

The word "singularity" is borrowed from physics - it is the point within a black hole where the normal laws of physics cease to operate. In the so-called Standard Model, the Big Bang was preceded by a singularity.

In physics it is, by definition, impossible to know what happens within a singularity, since the normal laws cannot be extrapolated into it. In social terms, a similarly impossible-to-extrapolate situation might arise as and when AI surpasses natural intelligence. The normal laws of social science are derived from situations in which human beings are the dominant species by virtue of their collective intelligence and tool-using capacity. What happens if that basic condition is altered?

Before getting into this debate, a willing suspension of disbelief may be required: one must assume that the singularity will actually occur. There are no guarantees of this, even if most members of the AI community believe it is inevitable. But given the pace of advances in machine intelligence, the assumption is not illogical.

Some put a date on it. Ray Kurzweil, the computer scientist who pioneered optical character recognition, believes the singularity will arrive by 2045, and that the Turing Test (in which a computer cannot be distinguished from a human in conversation) will be passed earlier, by 2029.

As and when the singularity occurs, it may trigger a population explosion of super-intelligent AIs, since an AI can be replicated very quickly simply by copying its code to a new machine. By definition, superior intelligences would also be able to improve themselves: they would be self-aware and capable of designing their own learning programs.

AIs may or may not work in tandem, and their goals may differ sharply from those of their creators. Perhaps they could solve problems human beings find intractable - cures for diseases such as AIDS or cancer, say, or efficient responses to global warming. Or they might emulate Skynet and try to eliminate humans. They could also design increasingly efficient weapons of mass destruction. The laws of economics might break down. The conventions of political systems and of international relations might be radically altered, or simply become obsolete.

Nobody has answers, and some of the brightest, best-informed people around have publicly expressed disquiet at the possibilities. Stephen Hawking, Elon Musk and Bill Gates, to name three, have all discussed the dystopian aspects of the singularity.

Dr Hawking feels it could be a direct threat to the existence of human beings as a species, and Mr Musk concurs. This is scarcely irrational, given that a great deal of AI research is specifically aimed at developing better weapons systems, such as drones and other weapons with autonomous capability.

Another set of intriguing questions arises for theologians and ethicists. As AIs surpass humans and continuously improve themselves, the gap in functional intelligence would widen, leaving humans ever further behind.

Would they necessarily share their insights with human beings and continue to behave like devoted servants, in a sort of Jeeves-and-Wooster relationship? Or would they treat humans like favoured pets - intelligent enough to be housebroken and taught a few commands? Would a human being have the right to switch off, or permanently format, an AI that could out-think him, or would this be rated a crime equivalent to murder?

Finally, consider a situation in which AIs have not only achieved the singularity but have gone on improving themselves for millennia. An apocryphal scenario related by Dr Hawking may then arise. A super-computer is asked, "Is there a God?" It responds, "There is now", even as it induces a short circuit that ensures it can never be switched off.

Twitter: @devangshudatta

Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper

First Published: Nov 27 2015 | 9:46 PM IST