A committee of the European Parliament has cleared a draft of the artificial intelligence (AI) law to be discussed and put to the vote at a forthcoming session. The final details may change before it is passed. The proposed Act would not only set standards for how AI is deployed within the European Union (EU) but, like the EU’s General Data Protection Regulation (GDPR), also apply to any entity that serves EU residents. As legislation serving the world’s largest economic bloc, the AI Act, like the GDPR, would set benchmarks for legislation elsewhere. Several classes of AI tools that breach privacy or enable discrimination are banned outright. In general, AI is to be classified according to risk — from minimal to limited, high, and unacceptable. While many high-risk tools will not be banned, entities using them must be transparent and subject to stringent oversight and auditing.
The committee said it aimed to ensure AI systems were overseen by people, and were safe, transparent, traceable, non-discriminatory, and environment-friendly. It wants a uniform, technology-neutral definition for AI, such that it can be applied to AI systems of the future. The banned systems include real-time remote biometric identification used in public spaces (such as facial-recognition systems used by the London police to identify anti-monarchist activists during the recent coronation of King Charles III). The Act also bans remote biometric ID systems for “post-real-time” use (for example, to identify members of a crowd from CCTV footage), with the exception of use by law-enforcement agencies in cases of serious crimes, and only after judicial authorisation. The other bans include biometric categorisation systems using sensitive markers like gender, race, ethnicity, citizenship status, religion, and political orientation, as these can enable discrimination. Similarly, “predictive policing” systems based on profiling, location or past criminal behaviour are also banned.
The Act also bans indiscriminate scraping of biometric data from social media or CCTV footage to create databases, as this violates the right to privacy. It also bans so-called “emotion recognition” systems used in law enforcement, border management, workplaces, and educational institutions to identify people whose facial expressions or body language indicate discomfort. The committee defines high-risk as AI that could harm people’s health, safety, fundamental rights, or the environment. It classifies AI systems used to influence voters in political campaigns, and recommendation systems used by social-media platforms with over 45 million users, as “high-risk”. These may continue to be used, but oversight will be tighter and deployment must be transparent. Generative models, like the generative pre-trained transformer, or GPT, would also have to comply with strong transparency requirements, including disclosing that a piece of content was generated by AI, designing models to prevent the generation of illegal content, and publishing summaries or citations of copyrighted data used for training.
Exemptions will be made for research activities and for AI provided under open-source licences, and the Act allows regulatory sandboxes — controlled environments established by public authorities — to test AI before deployment. The Act also proposes legislation to make it easier for citizens to file complaints and seek explanations of decisions based on AI systems that affect them. The EU AI Office and the relevant national authorities will have to boost their capacity considerably to ensure these rules are implemented. But setting such bounds can be seen as a beginning, given that the technology has far outstripped the law.