In 1950, Alan Turing dismissed the question "Can machines think?" as meaningless. In a paper titled "Computing Machinery and Intelligence", he proposed instead that machines (specifically, digital computers) should be rated on their ability to play the "imitation game".
The imitation game, as Turing defined it, tests a computer's ability to produce human-seeming responses in natural language. His original test conditions involved typewritten conversational exchanges between an interrogator and two physically separated participants, "A" and "B", either of whom might be a computer. If the computer's responses cannot be distinguished from a plausibly human set of responses, it passes what is now called the Turing test.
Many versions of this test have been proposed, and it has generated its share of controversy. Even now, increasingly autonomous and sophisticated machine intelligences fail to pass popular versions of the Turing test. Critics also assert that the imitation concept may be flawed in that it encourages trickery (programming a machine to pretend to be human).
Charlie has an Indian connection: the chief designer of the 20-cm-tall robot is an IIT alumnus, Rajiv Khosla. Charlie is designed specifically as an aid for people suffering from autism, dementia, brain damage and related conditions.
So far, the robot has been programmed to speak 11 languages, and it can be personalised to "decode" language, culture and lifestyle. The South Indian nod of the head, for instance, can mean something entirely different from the North Indian nod, and Charlie can be "trained" to recognise such differences. The robot can be addressed through a variety of interfaces, including voice, touch pads and so on.
The robot could, for example, remind autistic children to wash their hands, or take their medicines on cue. It could also help elderly people to stay in touch with friends and relatives by helping them to access the internet and send emails. As of now, the robot is estimated to cost about Rs 90,000. If it catches on, economies of scale should lead to reduced costs.
Of course, robots and other autonomous thinking machines are finding many other uses as well. There has been an enormous amount of research into these devices, which vary widely in utility and design.
The earliest uses were in structured factory environments, including hazardous ones such as the insides of nuclear power plants and underground mines. Increasingly, robots are used in disaster relief and in military applications. They are also being programmed for entirely mundane tasks such as housework, as with the Roomba, which has already sold millions of units. There have also been enormous advances in driverless vehicle technology, including cars, helicopters and drones.
One broad, overarching issue is that robots struggle to deal with unstructured environments. Biology allows living beings to evolve and adapt, and to recognise changes in their environments. Over very long periods of time, this lets biological creatures "optimise their programming" to handle natural environments. A smart hunter, for instance, propagates its genes better over multiple generations.
Artificial intelligence has to find ways to adapt to changing environments in much shorter time frames. It can also run into odd problems. For example, a persistent issue that has hindered driverless car technology is the need to programme for reflections. Humans recognise puddles and identify reflections without thinking about them. Driverless car cameras must be programmed to recognise and allow for the reflections caused by puddles on roads, and this is surprisingly difficult to do.
However, as these technical problems are progressively solved, robots could become more common in everyday life. As that happens, other ethical and legal questions will arise. Take a driverless car, for instance. At the most fundamental level, what are the liabilities in an accident caused by a robot, where there is loss of human life, medical costs or property damage? Or, if a robot is used to operate riot-control gear such as intelligent water cannons or tear-gas shells, what degree of autonomy may it be given, and what would count as excessive force?
How will courts and insurers treat incidents like these, which are bound to arise sooner or later? The lack of laws covering such situations could be a major barrier to letting robots loose in natural, unstructured environments, even in non-military, non-emergency scenarios.
There are also subtler issues, such as privacy. If a robot performs housework, or acts as a companion or monitor for an individual, it will gather large quantities of personal data about that person. Who has legal access to that data, and under what circumstances? Could it be admissible in a divorce or paternity case, or in a criminal investigation? If the robot is used like a "seeing-eye" dog, helping an individual negotiate public spaces, it will also gather data about other individuals. Again, what privacy protections apply to such data?
None of these may be insurmountable philosophical questions, but they require thought. As in many other fields, technological capacity may well run far ahead of case law. Until the legal and ethical issues are understood and a framework is created for dealing with them, deploying robots could be tricky. Intriguingly, if artificial intelligence progresses to the point where machines routinely pass variants of the Turing test, there could eventually be questions about safeguarding their "human rights". That might come up not too far into the future.
Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper