The creators of artificially intelligent machines are often depicted in popular fiction as myopic Dr Frankensteins who are oblivious to the apocalyptic technologies they unleash upon the world. In real life, they tend to wring their hands over the big questions: good versus evil and the impact the coming wave of robots and machine brains will have on human workers.
Scientists, recognising that their work is breaking out of the research lab and into the real world, grappled during a daylong summit on December 10 in Montreal with such ethical issues as how to prevent computers that are smarter than humans from putting people out of work, complicating legal proceedings, or, even worse, seeking to harm society. Today's AI can learn to play video games, help automate e-mail responses, and drive cars under certain conditions. That has already provoked concerns about the effect it may have on workers.
"I think the biggest challenge is the challenge to employment," said Andrew Ng, the chief scientist for Chinese search engine Baidu Inc, which announced last week that one of its cars had driven itself on a 30 km route around Beijing with no human required. The speed with which AI advances may change the workplace means "huge numbers of people in their 20s and 40s and 50s" would need to be retrained in a way that's never happened before, he said.
"There's no doubt that there are classes of jobs that can be automated today that could not be automated before," said Erik Brynjolfsson, an economist at the Massachusetts Institute of Technology, citing workers such as junior lawyers tasked with e-discovery or people manning the checkout aisles in self-checkout supermarkets.
"You hope that there are some new jobs needed in this economy," he said. "Entrepreneurs and managers haven't been as creative in inventing the new jobs as they have been in automating some of the existing jobs."
Yann LeCun, Facebook's director of AI research, isn't as worried, saying that society has adapted to change in the past. "It's another stage in the progress of technology," LeCun said. "It's not going to be easy, but we'll have to deal with it."
There are other potential quandaries, like how the legal landscape will change as AI starts making more decisions independent of any human operator. "It would be very difficult in some cases to bring an algorithm to the fore in the context of a legal proceeding," said Ian Kerr, the Canada Research Chair in Ethics, Law & Technology at the University of Ottawa Faculty of Law.
Others are looking further ahead, trying to analyse the effects of AI that exceeds human capabilities. Last year, Google acquired DeepMind, an AI company focusing on fundamental research with the goal of developing machines that are smarter than people. Demis Hassabis, one of the company's founders, described it as an Apollo programme for the creation of artificial intelligence.
"I don't want to claim we know when we'll do it," said Shane Legg, another founder of the company. "Being prepared ahead of time is better than being prepared after."
While Legg and others think the chance that a malicious super-intelligence will emerge is small, they have set out to study its potential effects because of the profound threat it could pose.
"I don't think the end stage is the world we now have with waiter robots who bring you your food on a tray," said Nick Bostrom, whose book, Superintelligence: Paths, Dangers, Strategies, has informed the discussion about the implications of intelligent machines. "The end result might be something that looks very different from what we are familiar with."
Shahar Avin, a researcher at the University of Cambridge's Centre for the Study of Existential Risk, said AI research is still too young to offer a good way to study how to prevent a malignant AI from forming.
"We want an agent that cannot or will not modify its own value system," Avin said. It's an open question how to do this, he said. A combination of more funding and more public debate should bring more researchers into the field to study how to make AI safe.
As part of the effort, Elon Musk, founder of Tesla Motors Inc and Space Exploration Technologies Corp, and other tech luminaries announced on December 11 the creation of OpenAI, a nonprofit research group dedicated to developing powerful new AI technologies in as open a manner as possible.
If super-intelligence is inevitable, it's best to build it in the open and encourage people to think about its consequences, Musk said.