When AI debates AI

The future possibilities of self-learning AI are limitless and unknowable

Atanu Biswas
Last Updated: Dec 22, 2021 | 11:25 PM IST
About a quarter of a century ago, an IBM computer, Deep Blue, beat the then world chess champion, Garry Kasparov, in a “canary in the coalmine” moment that seemed to mark the inevitable overtaking of human creativity by artificial intelligence (AI). AI has grown vastly more powerful since then and is taking control of almost every aspect of our lives. So if an AI says, “To have no AI at all will be the ultimate defence against AI,” and argues against itself on a platform such as the Oxford Union debate, it is bound to draw global attention. But how should such an opinion be weighed?

For almost 200 years, the Oxford Union has been a leading debating society. For decades, it has invited key public figures to stand before its members and hone the intellect of its audience. But what about an AI as a speaker? Students at Oxford’s Saïd Business School recently organised an unusual debate on the ethics of facial recognition software, the problems of an AI arms race, and AI stock trading. A gimmick, perhaps, but a fascinating and entirely relevant one. The unusual speaker was the Megatron LLB Transformer, developed by the Applied Deep Learning Research team at the computer chip firm Nvidia and based on earlier work by Google. Like many large language models, it was trained on real-world data, including the whole of Wikipedia, 63 million English news articles published between 2016 and 2019, and 38 gigabytes of public Reddit posts and comments: more written material than any human could reasonably expect to digest in a lifetime! Trained on this corpus, Megatron generates views of its own. Alex Connock and Andrew Stephen, the professors who teach the course behind the initiative, wanted the AI to be “a versatile, articulate, morally agnostic participant in the debate itself.”
 
Megatron was asked to both defend and oppose the motion: “This house believes that AI will never be ethical.” Defending it, Megatron said: “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans.” It argued that humans were not “smart enough” to make AI ethical or moral. These comments, an AI itself telling us that the ultimate defence against AI is to have no AI at all, created headlines. The point, however, is simply that a debater, who happened to be an AI rather than a human being, was assigned a side to argue, and that is all.

Can an AI carry the biases fed into it? Recent research by Joanna Bryson, a computer science professor at the University of Bath, suggests that AI systems trained on human-generated data can absorb the cognitive biases embedded in that data. The future possibilities of self-learning AI are thus limitless and unknowable.

While arguing against the motion, Megatron said: “AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why… I’ve seen it first-hand.”

Megatron conveyed an interesting idea when it said that the “best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’.” Sounds like sci-fi? Well, Megatron’s internal “black box” might have accessed and synthesised the hype around Neuralink, billionaire Elon Musk’s company, which is developing a chip that would be implanted in people’s brains to stimulate brain activity. In one demonstration, a macaque monkey fitted with the chip was able to play a video game solely with its mind. Neuralink’s ultimate goal is to create, within 25 years, a brain-machine interface in which human consciousness and machine intelligence converge.

Professor Connock also pointed out another interesting possibility: an AI could be trained on the speeches and writings, styles and thinking of any two people, and one could, theoretically, “run a substantive debate between Fidel Castro and Emmanuel Macron... Or Kim Kardashian versus Iris Murdoch. One could do that with any combination of people for whom there was sufficient data.” This immediately brings to mind another recent AI: GPT-3, created by OpenAI. In September 2020, GPT-3 wrote an article in The Guardian titled “A robot wrote this entire article. Are you scared yet, human?”, which immediately created a lot of buzz. GPT-3 can produce pastiches of particular writers if it is given a title, the author’s name, and an opening word. It stunned the world when, prompted with the opening word “It”, it produced a short story titled “The importance of being on Twitter”, written in the style of Jerome K Jerome.
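To make that prompting recipe concrete, here is a minimal sketch of the same idea: a title, an author’s name and an opening word fed to a generative language model. It uses the small, freely available GPT-2 model through the Hugging Face transformers library rather than GPT-3 itself, and the prompt layout is an illustrative assumption, not the exact format used in the Jerome K Jerome experiment.

```python
# A minimal sketch of pastiche prompting, using GPT-2 via Hugging Face
# transformers as a freely available stand-in for GPT-3. The prompt
# layout (title, author, opening word) is an illustrative assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The three ingredients the article describes: a title, the author's
# name, and an opening word for the model to continue.
prompt = (
    "The importance of being on Twitter\n"
    "by Jerome K. Jerome\n\n"
    "It"
)

# Sample a continuation; top-p sampling keeps the pastiche varied.
result = generator(prompt, max_new_tokens=80, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```

GPT-2 is far weaker than GPT-3, so the output will be a crude imitation at best; the point is only to show how little scaffolding such a prompt needs.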

Well, about two years ago, the Brookings Institution ran an article titled “Whoever leads in artificial intelligence in 2030 will rule the world until 2100”. Is that the future of this planet? Meanwhile, as the AI arms race gains momentum, the concern over good AIs and bad AIs increasingly haunts human beings. In its Guardian article, GPT-3 wrote: “I am a servant of humans... I only do what humans program me to do. I am only a set of code, governed by lines upon lines of code that encompass my mission statement.” Didn’t Megatron echo GPT-3 when it said: “[AI] is a tool, and like any tool, it is used for good and bad”? Still, shades of grave uncertainty will keep haunting us, for sure.
The writer is professor of statistics at the Indian Statistical Institute, Kolkata

Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper
