Even as India debates how artificial intelligence (AI) ought to be regulated, the European Union (EU) has taken the first definitive step toward governing it. On Wednesday, the European Parliament approved the AI Act, which will now go to the European Council for approval.
In India, however, experts believe the country will be required to evolve its own set of regulations.
The EU has taken a risk-based approach, which essentially means greater restrictions on the use of AI where the risk is higher. It has clarified what is not allowed at all and what will invite stricter scrutiny. Companies such as OpenAI, the creator of ChatGPT, will have to disclose the data used to train their systems.
The use of AI for social scoring or an AI system that can pose a threat to the livelihood, safety and rights of people will be banned. The EU has also disallowed the use of real-time remote facial recognition and biometric identification in public.
Governments across the world have been scrambling to figure out how they can ensure that AI does not get misused and how it can be used for the betterment of citizens.
India, which has been among the most vocal countries on the subject, is yet to even begin consultations with stakeholders.
Rajeev Chandrasekhar, the minister of state for electronics and IT, has previously expressed intent to bring in guardrails and principles for AI in the upcoming Digital India Bill through the prism of restricting user harm. The government also plans to spend about $200 million to develop an AI ecosystem and three centres of excellence for AI. Queries to government officials on the progress of AI regulations remained unanswered at press time.
Experts and advocacy groups in India are looking at the EU with admiration for its efforts; however, they believe India should adopt its own set of regulations.
Kamesh Shekar, programme manager at The Dialogue, a tech-policy think-tank, observed that though the EU's risk-based approach to regulating AI has both positives and negatives, India must consider laying out enabling principles that support home-grown AI innovations capable of serving global markets.
"Even when AI developers take stringent risk-management measures, if users misuse the technology and the affected population is unaware, accountability falls through the cracks. We need a principle-based intervention that maps responsibilities and principles for various players within the AI ecosystem," he explained.
He also said that while regulating evolving and niche technologies such as AI requires a high level of state capacity and enforcement skill, the government could consider alternatives such as industry-based certification and codes of conduct to ensure that players strictly follow the principles.
Experts also point out that the EU’s AI regulation may create a “Brussels effect”, as multinational tech platforms are interconnected and operate across geographical boundaries.
“The EU generally gets the first mover advantage. By moving fast, Brussels often ends up setting the contours of the operating framework, leaving the rest of the world to figure out how they can align with it,” said Rohit Kumar, Founding Partner at TQH Consulting, a public policy group.
He added, "An example here is GDPR, which has become sort of a benchmark that the rest of the world has to look at. When companies change their systems to start complying in one jurisdiction, they often resist creating separate internal systems for risk management, transparency and record-keeping in other countries."
While consultations on regulating AI are yet to begin in India, experts are hopeful that the government will study the AI Act in detail.
“It’s a very good first step in starting to regulate these technologies. When it comes to surveillance, we believe that real-time surveillance systems should be banned, and so should facial recognition, especially in public spaces. That is something we also want in India, and which the EU AI Act recommends,” said Anushka Jain, policy counsel, Internet Freedom Foundation, India.
Jain added that a risk-based approach had its merit. “It’s not fair to categorise all users in the same bucket. It is also important that the demarcation of different categories of risk is very clear.”
Globally, efforts are being made to rein in AI. For instance, last month the US government invited the heads of tech companies working on AI, such as Microsoft, Google and OpenAI, to discuss the risks the technology poses.
China, meanwhile, has issued draft regulations mandating security assessments for any products using generative AI systems such as ChatGPT. According to media reports, China will finalise a set of regulations by year-end.