The Telecom Regulatory Authority of India (Trai) recently made recommendations on regulating artificial intelligence (AI). This comes in the context of the rapid yet unpredictable growth trajectory of generative AI (GAI), which has unleashed a barrage of questions and concerns, from job displacement to cyber-attacks, from ethical grey zones to copyright disputes and creative autonomy. It also comes at a time when the Ministry of Electronics and Information Technology is expected to bring in similar norms in its upcoming Digital India Bill. The centrepiece of Trai’s recommendations is the proposed establishment of an Artificial Intelligence and Data Authority of India (AIDAI), an independent statutory authority, which will act both as a regulator and an advisory body for all AI-related domains. Pointing out that the formation of too many statutory bodies might create confusion for the sector, the telecom regulator has also advised that the “work of the AIDAI should be entrusted to Trai, with suitable modifications in the Trai Act”.
Trai’s recommendations are notable for their acknowledgement that AI systems are still evolving and their citation of various international practices as ready references for India’s AI regulatory framework. In a nod to the European Union’s recent AI Act, the regulator noted that it was important to regulate specific AI use cases that might have a direct impact on humans within a risk-based framework. However, experts have cautioned against a centralised regulatory body, and argued for guardrails instead of rigid rules. Excessive regulatory strictness can discourage young tech firms from entering the AI market, leading to a dominance of established tech giants. A broad consensus has also emerged that AI needs human guidance and that regulations must make space for a clear framework for human-AI collaboration, as well as for the goals and limits of such collaboration.
Here, Trai’s reference to the EU’s risk-based framework, which establishes obligations for both providers and users of certain AI-related activities, is crucial. Under the EU’s AI Act of 2023, AI-driven systems classified as posing an “unacceptable risk” will be considered a threat to people and banned. These include the cognitive behavioural manipulation of people or specific vulnerable groups, the social scoring of people based on behaviour, socioeconomic status or personal characteristics, and real-time and remote biometric identification systems, such as facial recognition. AI systems deemed “high risk” will be divided into two categories. The first will include AI systems used in products falling under the EU’s product safety legislation, such as toys, aviation, and cars. The second will include AI systems deployed in eight specific areas, such as biometric identification, the management and operation of critical infrastructure, and law enforcement, which will have to be registered in an EU database. The EU parliament lists similar preliminary precautionary measures for the “GAI” and “limited risk” classifications as well.
Such a classification helps clear the clutter of an expanding basket of AI-driven tools and services. As AI innovators discover new ways of harnessing GAI or large language model-based tools, most such systems can be reliably classified into these “risk-based” subheads. For India, therefore, more crucial than establishing an AIDAI is the implementation of such a preliminary cataloguing, which can lay down the groundwork for most future AI regulations, no matter how rapidly the ecosystem evolves.