
A unified global approach to AI regulation: India's leadership role

For regulation to minimise risks while optimising uptake and advancement of AI, it must be collaborative, inclusive and agile

The artificial intelligence (AI) market in India is expected to clock a compound annual growth rate of 25-35 per cent by 2027, matching a global trend of the technology's expansion. The Indian market is worth $7-10 billion now.
Pallavi Bajaj
5 min read Last Updated : Oct 23 2024 | 10:58 PM IST
Artificial Intelligence (AI) technologies, built on predictive and deterministic analysis of large data sets and patterns, transcend geographies. AI is inherently collaborative, inclusive, agile, and continuously evolving. Naturally, its regulation would need to be, too. Isolated regulatory sandboxes cannot ensure that an inter-connected, fast-evolving AI benefits all, with minimal risks, globally. Cooperation is essential.
 
India, with 1.4 billion people and exponentially increasing digital adoption, will play a key role in this discussion. The benefits of AI extend across domains — economic, social, health, logistics, education, security, and environment. Evidently, the pace and manner of its adoption into everyday activities is unprecedented. A significant dampener to this adoption is inadequate regulation, which encourages misuse and undermines user confidence. Risks to security — human, national, economic, social — posed by unregulated advancement and accessibility of AI are evolving with the technology, impacting all stakeholders.
 
Stakeholders generate data — the spine of AI. The larger, more diverse and more inclusive the dataset, the smaller the opportunity for unintended bias, and the more accurate the assessment and outcome. This makes AI assessment inherently collaborative, inclusive, and cross-jurisdictional, with user confidence critical to its enhancement, and associated risks ubiquitous. The need for collaborative regulation to minimise risks while optimising uptake and enhancement follows naturally.
 
The current global scenario where AI regulation is fragmented in individualistic sandboxes is sub-optimal. The race for regulatory “leadership” is counter-intuitive. There cannot be a legitimate first-mover advantage when effective regulation naturally necessitates cross-jurisdictional collaboration.
 
This raises two questions. One, what is the optimal approach for effective AI regulation? Two, where is India positioned in this discussion? Optimal, effective AI regulation bears two critical caveats. One, regulation must address risks to all stakeholders effectively, across geographies, demographics, society and economics. Two, regulation must not suffocate the enhancement or adoption of productive AI technologies by interfering with their inherently collaborative, inclusive, agile fabric.
 
Both require inclusive, comprehensive stakeholder mapping, engagement, and education (ensuring all stakeholders have the access, information, skills, and capacity to participate effectively, in an informed manner). Critically, there can be no gaps globally in AI regulation. Where each transaction, data point, and determination is inherently cross-jurisdictional, building exponentially on a continuum, any regulatory weak link will immediately and irreversibly render any attempt at regulation, even at the level of individual economies, moot. Therefore, the regulatory ecosystem must be developed globally, while implementation and enforcement remain national.
 
Leadership, evidently, is a matter of taking everyone together, establishing effective regulatory cooperation, and spearheading such cooperation with a focus on capacity building — guided by the principle of Vasudhaiva Kutumbakam — One Earth. One Family. One Future. One AI Regulatory Ecosystem.
 
Having already utilised its G20 presidency to drive collaborative, inclusive solutions to global concerns, harnessing the diversity of experiences, challenges and solutions, India is in a prime position to assume the kind of leadership global regulation of AI requires — taking everyone forward together to develop an effective global ecosystem for AI regulation. The approach to such cooperation should be modular.
 
One, collective, comprehensive identification and mapping of all stakeholder groups. Two, identification of and agreement on universally acceptable definitions, data structures and risk-mapping models, leveraging work currently ongoing in silos in the private sector and academia. Three, mapping the roles, responsibilities and risks associated with each stakeholder group, including suppliers and consumers, with due regard to differing capacities, access to technology (the digital divide), and sensitivities — economic, social, cultural. This would offer a better-defined problem statement. Four, establishing platforms for continuous, collaborative, inclusive stakeholder engagement with built-in feedback loops that ensure the agility and effectiveness of the resulting regulatory frameworks. This must be a process, not an exercise.
 
Five, establishing secure mechanisms for government-to-government discussions with two-way information exchange, feedback loops, and structured action plans on enhancing security of AI-generated solutions.
 
Six, leveraging this base to develop an ecosystem of trust and accountability for collaborative global regulatory guidelines and frameworks for AI, factoring in the sensitivities of individual countries and employing best practices with maximum effective impact, while catering to the lowest common denominator (of access and capacity). Frameworks must ensure accountability for, inter alia, responsible, ethical development and use, including for informed omission. This requires inclusive, close, considered cooperation across countries and stakeholders, public and private. Inputs from national experiences and deliberations must inform the global regulatory ecosystem, and vice-versa. Seven, building substantive proposals with quantifiable outcomes on the necessary capacity building for all stakeholders into the foundation of this ecosystem. Digital (platform) technologies can be leveraged for reach and scale. The collaborative frameworks can then be implemented in an agile manner at the national level, in accordance with respective domestic laws and administrative structures.
 
National implementation would also be continuous, inclusive, collaborative, and built on a "whole-of-government, all-stakeholders-on-board" approach, with built-in capacity building across stakeholders — government, business, academia, consumers. Inter-ministerial advisory bodies must be established to continuously monitor and guide developments in AI, ensuring regulation keeps pace with technology. They would actively engage with industry, academia and civil society in a structured, consistent manner, leveraging digital technologies, for a two-fold outcome.
 
One, effective enforcement of regulatory guidelines at the national level, customised to domestic processes, capacity and sensitivities. Two, continuous feedback of evolving experiences, challenges and successes into the collaborative global ecosystem for agile regulatory cooperation. Static regulatory frameworks will not only be ineffective but will impede the enhancement of AI and the optimal distribution of its benefits. The problem statement is agile. Regulation would necessarily have to be, too.
 
The author is senior legal consultant with the Government of India. The views are personal
 

Topics: BS Opinion, Artificial Intelligence