
From challenges of sensitising AI to issues of security, privacy and ethics

As AI becomes an increasingly important component in every sphere of life, machine ethics is also becoming an increasingly important field of research.

Devangshu Datta
Last Updated: Jun 02 2018 | 2:35 AM IST

Suppose you’re driving down a steep, narrow mountain road. As you turn an S-bend, you discover half a dozen children directly in your path. There’s no braking distance. You can drive into the kids. Or, you can slam your car into the mountainside, risking your own life. What do you do? What if you have an important passenger in the car, or your family is with you?

Philosophers and behavioural scientists call this the Trolley Problem (the original version involves a runaway trolley on a railway line with a shunt, and people at risk on both branch lines). It was dreamt up as a thought experiment in ethics. Today, it is a real, live situation with huge sums riding on finding reasonable answers.

You see, it might be an autonomous Artificial Intelligence driving the car, and that is a situation autonomous vehicle designers are discussing at length. In that case, it is the AI that has to make an instantaneous decision and do whatever is likely to cause the least harm.

That is a complicated question with many ramifications: whose life should be put at the least risk? Car insurance companies would really like to know how an AI is likely to respond in this and other high-risk situations. Until acceptable solutions to such ethical dilemmas can be programmed reliably, this could be a major barrier to the widespread use of autonomous vehicles.

Training a robot to take such crucial ethical decisions on the fly is incredibly hard. Even training a robot to develop any “sense of ethics” at all is difficult. But as AI becomes an increasingly important component of every sphere of life, machine ethics is becoming a correspondingly important field of research. If AI is to integrate harmoniously with society, it must be programmed to maximise safety and to fit in with human ethics. This is a task that is pulling in philosophers and logicians, as well as computer programmers.

Apart from autonomous vehicles, AI is extensively deployed by defence forces and law-enforcement agencies. Back in 1942, writer and biochemistry professor Isaac Asimov introduced the “Laws of Robotics” in a science fiction short story, “Runaround”. According to Asimov’s First Law, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

In reality, AI is bound by no such law. Drones that can deal mass death and destruction are part and parcel of modern warfare. How can such devices be trained not to bomb hospitals? Smart gun systems can autonomously target moving objects in their field of fire. South Korea deploys such smart guns along the Demilitarised Zone with North Korea. How is such a gun “trained” to issue warnings before shooting?


Less obvious situations arise in many other instances where AIs are taking over tasks that involve intimate interactions with humans. For example, somebody with a chronic condition, say a diabetic or a bipolar individual, may be monitored by an intelligent agent such as Amazon Alexa and reminded to take insulin shots or other medication as required. If the patient ignores the reminders, should the AI inform somebody else? That would be a breach of privacy, but it could save the patient’s life.
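
To make the dilemma concrete, here is a minimal sketch of how such an escalation policy might be written, assuming a consent flag and a missed-dose threshold; the names and numbers are invented for illustration and are not features of any real assistant.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MedicationEvent:
    """One scheduled dose and whether the patient acknowledged the reminder."""
    dose_name: str
    acknowledged: bool

def should_escalate(events: List[MedicationEvent],
                    consent_to_share: bool,
                    missed_dose_limit: int = 2) -> bool:
    """Decide whether the agent may alert a caregiver.

    Illustrative policy: escalate only if the patient has consented to
    sharing AND the latest run of consecutive missed doses crosses a limit.
    """
    missed = 0
    for event in reversed(events):   # walk back from the most recent dose
        if event.acknowledged:
            break
        missed += 1
    return consent_to_share and missed >= missed_dose_limit

# Two missed insulin doses in a row, with prior consent: escalate.
history = [MedicationEvent("insulin", True),
           MedicationEvent("insulin", False),
           MedicationEvent("insulin", False)]
print(should_escalate(history, consent_to_share=True))   # True
```

Even this toy version makes the trade-off visible: every threshold and consent rule is a value judgement that somebody has to encode in advance.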

Privacy breaches in general are another massive area of concern. Internet of Things (IoT) sensors are built into all sorts of modern gadgetry; they can be installed for something as mundane as keeping the plumbing in good order. IoT is designed to monitor everything, and monitoring can easily become “snooping”.

A smartphone can also listen to, and record video of, conversations around it without the owner even knowing this is happening. If somebody moves around with a smartphone within modern “smart” environments, it is possible to know 24x7 where they are, what they are doing (including intimate details of bathroom visits), who they are speaking to and what they eat. So how can such devices be programmed to keep certain things private?

If an IoT network or a smartphone monitors criminal activity, should it inform law enforcement autonomously? That would be a huge breach of human rights, but again it might save lives. Arguably, an AI should call “100” if somebody is committing a physical assault or making bombs. But it could equally be programmed to call the police if somebody is consuming beef, or making up jokes about politicians.

At a trivial level, we all set our mobiles to Do Not Disturb (DND) mode sometimes. We set up automatic text replies: “I’m sorry I can’t take your call right now”, and so on. A recent demonstration by Google showed how a virtual assistant, Duplex, can make interactive voice calls to set up appointments.

Could we train the next generation of virtual assistants to interactively lie on our behalf in such situations? Could “Duplex Version N” accept an incoming call and say, “This is Devangshu’s colleague. He’s left his phone behind in the office and can’t be contacted, I’m afraid.” If a virtual assistant can be trained to tell one sort of lie, can it be trained to tell other kinds of lies?

Philosopher Susan Anderson and her husband, computer scientist Michael Anderson, tried one approach at the University of Connecticut. They tackled the problem of a robot, “Nao”, that reminds people to take their medicines, and of what Nao “should” do if a patient skips a dose. They gave the robot cases in which such situations had been resolved, and machine learning algorithms were used to find patterns that could guide its decisions. This is a little like teaching a child the difference between right and wrong: the child might understand, but you don’t know for sure how it will interpret new situations and act as they arise.
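
The general shape of that “learn from resolved cases” approach can be sketched with a simple classifier; the features, the hand-labelled verdicts and the cases below are invented for illustration and are not the Andersons’ actual formulation.

```python
# Learning an escalation rule from resolved cases, in the spirit of
# "teach by example". Features, verdicts and cases are invented.
from sklearn.tree import DecisionTreeClassifier

# Each case: [harm_if_ignored (0-10), value_of_patient_autonomy (0-10), consent_to_report (0/1)]
cases = [
    [9, 2, 1],
    [1, 8, 0],
    [7, 3, 1],
    [2, 9, 0],
    [8, 4, 0],
]
# Verdicts supplied by human ethicists: 1 = notify a caregiver, 0 = do nothing
verdicts = [1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(cases, verdicts)

# A new, unseen situation: the learned pattern decides, for better or worse.
print(model.predict([[8, 5, 0]]))
```

The uncertainty lies in that last line: the pattern the machine extracts from old cases is what gets applied to new ones, whether or not a human would agree with it.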

In another experiment, at the University of Bristol, Alan Winfield, Christian Blum and Wenguo Liu trained a robot to rescue other robots. The Asimov Robot, as they called it, had to prevent other robots (all small machines a few centimetres across) from falling into holes. Asimov did fine when just one robot was in danger. However, it had problems whenever more than one robot was in danger: it sometimes saved the nearest one and sometimes it simply froze. A rule-based system, “save the one that’s nearest”, could work in this case. But what if one robot stood in for a child, and another for an adult?
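
The rule itself is trivial to write down, which is part of the point. A sketch of the “save the one that’s nearest” tie-break might look like this; the coordinates are an assumption for illustration, not the Bristol team’s actual controller.

```python
import math

def nearest_in_danger(rescuer, in_danger):
    """Rule-based tie-break: head for whichever endangered robot is closest.

    rescuer and each element of in_danger are (x, y) positions in metres.
    This is only the selection rule, not the experiment's control code.
    """
    return min(in_danger, key=lambda pos: math.dist(rescuer, pos))

# Two robots heading for holes: the rescuer picks the nearer one and lets the other fall.
print(nearest_in_danger((0.0, 0.0), [(0.3, 0.4), (1.0, 1.0)]))   # (0.3, 0.4)
```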

One way around this is to train the robot to consider counterfactuals across two possible futures: if it saves A, B falls, and vice versa, and it should pick whichever action is likely to cause the least harm. Applied to the driving scenario, the program has to compute two different futures: in one it potentially kills many children; in the other it potentially kills its own passenger(s). Which action causes the least harm? Is the harm a side-effect of doing something good? As with the trolley problem, it is hard to agree on the answers.
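
The comparison itself is easy to program; agreeing on the harm scores is the hard part. A minimal sketch, with entirely made-up outcomes and weights, might look like this:

```python
# Counterfactual "least harm" selection. The predicted outcomes, and the idea
# that every person at risk counts equally, are assumptions, and precisely
# the contested part.
ACTIONS = {
    "continue_straight": {"children_at_risk": 6, "passengers_at_risk": 0},
    "swerve_into_mountainside": {"children_at_risk": 0, "passengers_at_risk": 2},
}

def predicted_harm(outcome):
    """Crude proxy for harm in one simulated future: count the people put at risk."""
    return sum(outcome.values())

def least_harm_action(actions):
    """Compute each candidate future and pick the action with the lowest harm score."""
    return min(actions, key=lambda name: predicted_harm(actions[name]))

print(least_harm_action(ACTIONS))   # 'swerve_into_mountainside' under these numbers
```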

Military systems do need hard-and-fast rules, but those rules could be programmed differently for different situations. In some situations, the AI could be programmed to minimise risks to itself and its own soldiers and act defensively. In others, it could be programmed to act aggressively and cause as much damage as possible. The US defence department has conceptualised an “ethical governor” that would stop a drone from striking a target if civilians are put at risk.
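
Conceptually, such a governor sits between target selection and weapon release and vetoes any strike that breaks a hard constraint. The sketch below assumes an invented request format and a deliberately simple veto rule; it is not the actual system.

```python
from dataclasses import dataclass

@dataclass
class StrikeRequest:
    target_id: str
    civilians_at_risk: int      # estimate from some upstream assessment
    military_necessity: float   # 0.0 to 1.0

def governor_permits(request: StrikeRequest) -> bool:
    """Hard constraint applied after targeting and before weapon release.

    The rule here (veto whenever any civilians are estimated to be at risk)
    is a deliberately simple stand-in, not the published design.
    """
    if request.civilians_at_risk > 0:
        return False            # veto, regardless of military necessity
    return request.military_necessity > 0.5

print(governor_permits(StrikeRequest("T-17", civilians_at_risk=3, military_necessity=0.9)))  # False
```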

Different cultures impose different rules on citizens. Some cultures are rigidly rule-based — no chewing gum in public; no walking around with hair exposed; no alcohol; no gambling; no casual sex; no criticism of government, and so on. Other cultures work by delineating broader concepts like fundamental rights and allowing citizens more agency to do what they please so long as those rights are not violated. Both these approaches are being tried when it comes to programming machine ethics. We might eventually get more insights into managing human behaviours as we figure out how to make ethical machines.
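
In programming terms, the two styles roughly correspond to checking an action against an explicit list of prohibitions versus checking that it violates none of a set of declared rights. A toy sketch, with invented prohibitions and rights:

```python
# Two toy styles of machine-ethics check; the prohibitions and rights are invented examples.

PROHIBITED = {"record_private_conversation", "share_location_without_consent"}

def rule_based_permits(action: str) -> bool:
    """Rigidly rule-based: anything on the prohibition list is out, everything else is fine."""
    return action not in PROHIBITED

RIGHTS = [
    ("privacy", lambda a: "private" not in a and "without_consent" not in a),
    ("due_process", lambda a: a != "report_to_police_autonomously"),
]

def rights_based_permits(action: str) -> bool:
    """Rights-based: the agent may do anything that violates none of the listed rights."""
    return all(respects(action) for _, respects in RIGHTS)

for action in ("set_reminder", "record_private_conversation"):
    print(action, rule_based_permits(action), rights_based_permits(action))
```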