The European Parliament’s approval of the world’s first rules to govern artificial intelligence (AI) sets a clear regulatory framework, and Indian companies conducting business or catering to clientele in the European Union (EU) will need to adhere to it, experts said.
The Artificial Intelligence Act was passed by lawmakers on Wednesday. It lays down rules and guidelines for specific risks associated with the use of AI in areas such as biometric authentication, facial recognition, and deepfakes, as well as in high-risk domains such as healthcare.
Experts believe the comprehensive framework will increase the cost and compliance burden for these companies.
“The regulation will require Indian companies to adjust their AI systems to meet the prescribed standards, undergo conformity assessments, and implement risk management measures if they are in the higher risk categorisation. The compliance costs and regulatory burden could be significant, especially for smaller firms,” said Somshubhro Pal Choudhary, co-founder of Bharat Innovation Fund (BIF), a deep tech-focused venture capital firm.
Though the Act will require companies to assess their AI models to determine risk classification, it also allows sufficient time for compliance, said Jameela Sahiba, senior programme manager of AI vertical, The Dialogue.
“The Act allows time for compliance, as it will come into force twenty days after its publication in the official journal and will be fully applicable 24 months thereafter,” she said. Its support for innovation through regulatory sandboxes can be leveraged by Indian startups to develop and test responsible AI solutions before market entry, she added.
Experts are of the opinion that while the risk-based approach may be suited to the EU, each country will frame regulations around its own requirements.
“While it will definitely offer lessons to India, it is important to note that India's diverse socio-economic context, technological infrastructure, and regulatory framework differ significantly from that of the EU. In conversations around potential AI regulation so far, the Indian government has stressed a ‘user-harms perspective’ to AI regulation,” Sahiba said in response to a question on India.
“This emphasis on risk categorisation establishes a clear regulatory framework. High-risk AI systems are set to face stringent regulations, including rigorous risk assessments, human oversight, and explainability requirements to ensure user trust,” she said.
The regulation defines high-risk systems as those that can potentially harm health, safety, fundamental rights, the environment, democracy, and the rule of law.
“The EU AI Act is landmark legislation since it is the first real regulation brought out in AI; so far, countries have only been talking of it. It extends the GDPR risk framework, and puts the onus of obligations on the providers or developers of high-risk systems, irrespective of where these providers are located,” said Jaspreet Bindra, Founder, TechWhisperer.
Experts also said that the Indian government has emphasised that regulations should not stifle innovation.
The Indian government has also been looking at ways to regulate the risks of AI. It has issued advisories under the IT rules to curb deepfakes and the biases arising from under-tested AI models.
“While the risk-based approach targets a more proportional approach, avoiding broad and stifling regulations, categorising the risk levels can be challenging and subjective, potentially leading to disputes,” said Choudhary of BIF.