World leaders gathering at intergovernmental political forums in recent months have inevitably deliberated on a new policy question: the right approach to regulating artificial intelligence (AI). Though the discussions are still at a preliminary stage, policymakers in India have recently been tilting towards risk-based regulation.
With its unprecedented capabilities for learning and generating new forms of creative content, generative AI has prompted governments across the world to find ways to ensure user safety without dampening the pace of innovation. Risks such as privacy violations, algorithmic bias, automation-driven job losses, misinformation, weaponisation and uncontrollable self-aware AI models are at the forefront of their concerns.
Prime Minister Narendra Modi, while speaking at the B20 Summit, the official G20 dialogue forum with the global business community, stressed the need for global cooperation to ensure the growth of ethical AI. "Concerns about the effects (of AI) on skilling and reskilling and algorithmic biases are rising. We will need to detect the potential disruptions in various sectors. The scale of disruptions is becoming severe," Modi said.
The Prime Minister's remarks came days after the telecommunications regulator, Telecom Regulatory Authority of India (Trai), proposed that the Centre should set up a domestic statutory authority to regulate AI in India through the lens of a "risk-based framework".
"The regulatory framework should ensure that specific AI use cases are regulated on a risk-based framework where high-risk use cases that directly impact humans are regulated through legally binding obligations," Trai had said.
The first example of this regulatory approach is the European Parliament's AI Act. The act prescribes tighter restrictions on the use of AI where the risk is higher, introducing different rules for AI applications classified by level of risk. The four categories are minimal or no risk, limited risk, high risk, and unacceptable risk.
It also requires AI companies to evaluate risks continuously and iteratively, to train only on error-free datasets, and to establish audit trails for transparency.
The EU has made clear what is not allowed at all and what will face stricter scrutiny. Companies such as OpenAI, the creator of ChatGPT, will have to disclose the data used to train their systems. There will be a European AI Board and a penalty regime more stringent than that of the General Data Protection Regulation (GDPR).
Leaders of the Group of Seven (G7) at a summit in May also called for the development and adoption of international technical standards for trustworthy AI, with a focus on risk-based regulation.
On the other hand, there is the US's Blueprint for an AI Bill of Rights. It proposes principles to encourage agencies to provide "fairness, non-discrimination, openness, transparency, safety, and security" in all AI developments. The blueprint was seen as a characteristically laissez-faire stance by the US to avoid hampering AI innovation and growth.
The blueprint calls for users to be protected from unsafe and ineffective systems, for AI systems not to discriminate, and for steps to address privacy concerns around notice and user autonomy. However, it does not explicitly stipulate what AI companies must do. Moreover, it is non-binding and does not carry the force of legislation.
Experts differ on the right way for India to regulate AI.
Kamesh Shekar, programme manager at The Dialogue, a tech-policy think-tank, has previously said that while the EU's risk-based approach to regulating AI has both positives and negatives, India must consider laying out enabling principles to support home-grown AI innovations that can serve the world.
"Though the AI developers take high-risk management measures if the users misuse it and the impacted population is unaware, it falls through the cracks. We need a principle-based intervention that maps responsibilities and principles for various players within the AI ecosystem," he explained.
The Ministry of Electronics and Information Technology has initiated the drafting of the Digital India Bill to regulate the use cases of AI. The law is expected to lay out broad principles to ensure trust, openness, and safety of digital platforms. The ministry has also followed a principle-based approach in the recently enacted Digital Personal Data Protection Act of 2023.