
Risks of AI and the challenge of taming this useful digital 'devil'

As Musk and others flag its threats, lessons can be drawn from research into recombinant DNA

Devangshu Datta, New Delhi
Last Updated: Apr 07 2023 | 11:58 AM IST
In late March, the Future of Life Institute (FLI), a non-profit that focuses on emerging technology and its impact on society, released an open letter calling for a six-month moratorium on training large language models (LLMs) more powerful than GPT-4. The letter has been signed by over 50,000 people, including a number of AI experts, computer scientists and other academics, as well as some 1,800 CEOs and entrepreneurs, among them business magnate Elon Musk and Apple co-founder Steve Wozniak.

The letter claimed “AI labs are locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

It referred to the Asilomar AI Principles, a suggested set of directive principles for the governance and regulation of AI, developed at the Beneficial AI 2017 conference at Asilomar, California.

The logic is that hitting pause would allow time for safety protocols to be debated, shared and audited by experts, regulators and policymakers. The letter was triggered by the release of ChatGPT in November 2022, the explosive growth of its user base, and its integration into Microsoft's Bing. Google has also released its own LLM-based chatbot, Bard.

While these programs were developed to understand natural language, which in itself has huge implications, it quickly became apparent they could do much that went beyond that brief.

For example, GPT-4 can write computer code and solve mathematics problems.

US President Joe Biden also sounded a note of caution at a meeting of the President's Council of Advisors on Science and Technology. While Biden acknowledged AI can help deal with challenges such as disease and climate change, he also said it poses "potential risks to our society, to our economy, to our national security".

In October 2022, the White House published non-binding guidelines called the "Blueprint for an AI Bill of Rights", designed to protect Americans from potential harm by AI. However, ChatGPT arrived the following month, Bing Chat in February 2023, and the more capable GPT-4 in March, an indication of how quickly draft regulations can be superseded by tech advances. Other researchers are working on AI codes of ethics, trying to establish guidelines around acceptable research and how to manage broader impacts on society.

The dark side

The potential dark side of LLMs is now becoming apparent. Italy has temporarily banned ChatGPT over privacy concerns, and other European Union nations are examining the privacy aspects. Privacy breaches, coupled with the ability of other AI programs to mimic voices well enough to fool voice recognition systems, could enable sophisticated identity theft.

There are other potential risks. In 2022, researchers showed that a model meant to generate therapeutic drugs could also generate new biochemical weapons. OpenAI has documented how GPT-4 got past a Captcha verification by persuading a human worker to solve it.

In late March, a Belgian man with mental health issues, "Pierre" (a pseudonym used to protect his family), died by suicide after six weeks of "conversations" with a GPT clone. His widow has accused the AI of triggering the tragedy.

Apart from such issues, it is clear that ChatGPT, Bing Chat, Bard and the like provide seductively logical and precise answers to questions. But those answers may be incorrect or biased, and may include fake news and "information" the AI has simply made up.

Policymakers must also worry about the possibility that LLMs could soon start replacing white-collar functions.

Is a six-month pause enough?

Would a six-month moratorium serve much purpose? Opinions differ. The FLI letter refers to a crucial 1975 conference on recombinant DNA, also held at Asilomar. A voluntary moratorium on recombinant DNA research, followed by that conference, allowed experts to set rules under which research could proceed safely.

By analogy, a moratorium and conferences may work with AI. But it would take draconian measures to ensure that R&D into LLMs actually stops, even though relatively few organisations have the capacity to push the boundaries beyond GPT-4.

Second, it is only by using AI that its problems can be diagnosed. AI has undeniably already been of benefit in areas like healthcare, education and agriculture, and a moratorium would slow adoption.

Hence, some experts feel, responsible-AI principles should evolve side by side with R&D. Regulatory authorities are already drafting laws to complement directive principles like Asilomar and the AI Bill of Rights. An Algorithmic Accountability Act has been proposed in the US Congress, and there are similar initiatives in the EU and Canada.

Such regulations could set boundaries on what data can and cannot be used to train AI, address copyright and licensing issues, and require researchers to release more details, such as the provenance of training data, code and safety features.

In addition, some experts say AI models must allow for external audits of how they "think". One of the most common fears is that AI often runs as a black box: even the programmers do not fully understand how a trained model arrives at its output.

As AI is deployed in more areas, including education, medicine and mental health, it is affecting more people and, arguably, already shaping societal attitudes and opinions.

Building artificial intelligence

Consider the following sentences: "The batter scored a century." "Green chillies can be used to spice dosa batter." "He used a sledgehammer to batter down the door." In each case, "batter" has a different meaning, determined by its relationships with the surrounding words.

The transformer model of deep learning pays attention to those relationships and uses them to decode natural language. In ChatGPT, the GPT stands for Generative Pre-trained Transformer. Each iteration of ChatGPT was trained on massive data sets consisting of billions of words.

GPT-4's parameter count has not been disclosed but is believed to run into the hundreds of billions. The transformer architecture has led to huge advances in AI; it is used in drug research, for instance, since it can "pay attention" to active molecules and their relationships with each other.
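A minimal sketch can make the idea concrete. The short Python snippet below is a toy illustration, not how ChatGPT is actually implemented: the word embeddings and weight matrices here are random, whereas a real model learns them from data. It implements single-head scaled dot-product attention, the core operation of a transformer, and shows how the same word "batter" acquires different representations in different sentences:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over one sentence.
    # X has shape (sequence_length, embedding_dim).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of every word to every other word
    return softmax(scores) @ V               # each word's output mixes in its context

rng = np.random.default_rng(42)
d = 8  # toy embedding size; real models use thousands of dimensions

# Invented static word embeddings (a real model learns these during training).
words = ["the", "batter", "scored", "a", "century",
         "he", "used", "sledgehammer", "to", "down", "door"]
emb = {w: rng.normal(size=d) for w in words}

# The same word "batter" appears in two different contexts.
s1 = ["the", "batter", "scored", "a", "century"]
s2 = ["he", "used", "a", "sledgehammer", "to", "batter", "down", "the", "door"]

# Random projection matrices stand in for learned attention weights.
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

out1 = self_attention(np.stack([emb[w] for w in s1]), Wq, Wk, Wv)
out2 = self_attention(np.stack([emb[w] for w in s2]), Wq, Wk, Wv)

# "batter" starts from the identical embedding in both sentences, but its
# attention output differs because the surrounding words differ.
i1, i2 = s1.index("batter"), s2.index("batter")
print(np.allclose(out1[i1], out2[i2]))  # False: the representation is now contextual
```

The point of the sketch is that attention lets every word's representation absorb information from its neighbours, which is how a transformer distinguishes a cricket batter from a dosa batter.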

The suggested moratorium would halt the training of models on even larger data sets, with even larger numbers of parameters. Such models might surface as-yet-unknown capabilities.

ChatGPT had over 100 million users within two months of its launch. OpenAI, which was valued at $29 billion in the Microsoft deal, expects ChatGPT (users can pay a monthly fee of $20 for priority access) to generate revenues of over $1 billion in calendar 2024.

India is looking to build an educational tool using ChatGPT and the Bing integration. Multiple similar LLMs are also under development. These programs can not only write poetry and computer code and solve mathematics problems; they could also serve as a first filter in healthcare diagnosis and as customer-service chatbots in many industries.

As drones and autonomous vehicles become more common, they may be directed by natural language. LLMs can also be misused for identity theft and phishing, to cheat in exams and to plagiarise texts. Figuring out how these models work, and where they can safely be deployed, is already important and will become more so as their use broadens.

