We all take Mr Hinton’s comments seriously because he is known in scientific circles as the Godfather of AI. He is the brain behind the “neural network”, a mathematical technique that makes it possible to extract patterns from very large datasets, including those of human language. Thus, you could say that things like ChatGPT would not have come about without the “neural network” and Mr Hinton’s contributions (more than 200 peer-reviewed research papers) to making it usable.
Geoffrey Hinton’s words and actions may have the same impact as if Homi Bhabha had quit the Atomic Energy Commission in the 1950s, saying, “I left so that I could talk about the dangers of atomic energy without considering how this impacts India’s nuclear efforts”. Had that happened, India might have abandoned its grand dreams about atomic energy.
Or, imagine if EMS Namboodiripad had quit the CPI (Marxist), saying, “I left so that I could talk about the dangers of Marxism”, and so on.
Normally, any breakthrough in technology is greeted by some who cry that jobs will be threatened by the new technology; the mystery this time is that a co-creator of the technology is himself raising the alarm. So it may be important that we listen.
Technology left unattended or unsupervised can create havoc. Take, for instance, the Bhopal gas tragedy of 1984 at the plant of Union Carbide (the maker of Eveready batteries). On December 3, 1984, about 45 tonnes of the dangerous gas methyl isocyanate escaped from the insecticide plant, killing 15,000-odd people and leaving half a million survivors to suffer from blindness and other illnesses caused by exposure to the toxic gas. The consensus was (as reported by the Encyclopaedia Britannica) that “substandard operating and safety procedures at the understaffed plant had led to the catastrophe”. In other words, it was the outcome of technology used without adequate supervision.
One of the key worries Mr Hinton has publicly expressed is that “bad actors” may use AI technology to do “bad things” … things that may hurt innocent citizens. The example he gives is of authoritarian leaders using artificially created speeches and writing to “manipulate their electorates”. While that may be hard to visualise, there are simpler examples: a driverless bus swerving into an adjacent lane with oncoming traffic, or a military drone firing into an innocent crowd.
What can we do at the policy level, beyond merely worrying and joining the ranting?
In recent weeks, a spate of researchers and think tanks, largely based in Silicon Valley, have been appealing to people worldwide to sign an open letter calling for a six-month pause on all AI experiments more powerful than GPT-4 (the model created by the makers of ChatGPT). One such letter, from the Future of Life Institute, has to date attracted about 28,000 signatories. Prominent among them are Elon Musk (owner of Twitter and founder of Tesla), Steve Wozniak (co-founder of Apple), and Yoshua Bengio (a pioneer of deep learning and a Turing Award winner).
Some of their recommendations: develop methods to spot AI-generated content, establish liability for AI-caused harm, and mandate third-party auditing and certification of AI systems (see the full FLI report, “Policymaking in the Pause”).
The perplexing situation we find ourselves in with AI reminds me of the early days of the internet (the late 1990s), when similar alarm bells rang: “bad actors” would post hate or erotic messages, or steal private information. There were cries that internet innovation was running amok and was bound to destroy humanity. We resolved those worries, which threatened to stop all innovation on the internet/world wide web, by defining through legislation the concept of an “intermediary”: a tech platform that merely provides a place where people can create content and commentators can post comments. This freed the intermediary/platform from legal liability for mischievous behaviour by creators and commentators, allowing innovation to continue. That is to say, we brought in legislation that separated the responsibilities of the various types of players in the field.
While we think through all this, is it possible that the problem starts with researchers, investors, and entrepreneurs in this field using the expression “artificial intelligence” to hype up their work? Should such technology be more appropriately named “machine learning”? By using the hyped-up expression “artificial intelligence”, they imply that the algo or gadget they are building has moral wisdom, or the ability to do moral reasoning, thereby igniting the panic.
So, shall we start by banning the expression “artificial intelligence” and mandating by law that all such work be labelled “machine learning”?
The writer is an internet entrepreneur (ajitb@rediffmail.com)