The passage of the European Union's Artificial Intelligence (AI) Act, the first such law regulating AI anywhere in the world, provides a model for oversight and regulation. It may become a benchmark for a global consensus on AI regulation, which India is trying to orchestrate through a declaration document at the ongoing Global Partnership on Artificial Intelligence (GPAI) Summit.
AI has enabled more efficient manufacturing in many sectors, faster drug discovery, and breakthroughs in materials science research. It could be transformational across sectors ranging from scientific research to autonomous transport, health care and diagnostics, smart power grids, financial systems, and telecom networks, as well as the easier provision of a multitude of public and private services.
But AI can also enable criminal activities. It puts more power in the hands of authoritarian regimes through real-time face recognition, widespread surveillance tools, and discriminatory social scoring systems. Dangers will also arise from its many military applications, which may lead to autonomous weapons where humans are no longer in charge of "pulling the trigger". This is quite apart from the science-fiction possibility of self-aware AI that understands its own nature and possesses traits such as curiosity and an instinct for self-preservation. Such concerns must be dealt with holistically, with a consensus on regulation across advanced economies, since AI proliferates instantaneously across jurisdictions. The ideal is oversight that controls and mitigates the possibility of harm without crippling research and the rollout of beneficial AI.
The EU regulations strive to establish a technology-neutral, uniform definition of AI that will apply to future systems. This is vital, given how rapidly the technology is evolving. The conceptual framework classifies AI systems according to the risk they pose: the higher the risk, the more stringent the oversight and the greater the obligations imposed on providers and users.
Limited-risk systems must comply with transparency requirements that allow users to make informed decisions in accordance with the AI Act. Users should be made aware when they are interacting with AI, for example in systems that generate or manipulate image, audio, or video content, such as deepfakes. Transparency requirements include disclosing that content is AI-generated, designing models to prevent the generation of illegal content, and publishing summaries of the copyrighted data used in training.
The EU legislation considers AI systems that affect safety or fundamental rights to be high-risk, and divides them into two categories. One is AI used in products such as toys, aviation, cars, medical devices, and lifts. The other is AI used in specific areas, which must be registered in an EU database: biometric identification, critical infrastructure, education and vocational training, and AI-managed access to essential private and public services. All such high-risk AI systems must be assessed before rollout and reviewed throughout their life cycles.
Some systems pose unacceptable risks and are banned outright under the AI Act. These include cognitive behavioural manipulation of people or of specific vulnerable groups (for example, toys that encourage dangerous behaviour) and social scoring that classifies people based on behaviour, socio-economic status, or personal characteristics. Real-time, remote biometric identification systems such as facial recognition may be used only with court approval, to identify and apprehend criminals after a serious crime has been committed.

While the framework may require tweaking in specific areas, and it does not cover military research and development, it is a reasonable baseline for reaching a consensus on global regulation. The GPAI Summit will probably look at adopting some version of it as a declaration document, and India, for its part, needs to create domestic legislation along these lines.