Regulating AI

World leaders have a long road ahead

Business Standard Editorial Comment Mumbai
Last Updated: Nov 06 2023 | 9:58 PM IST
The first international summit on artificial intelligence (AI) safety, held symbolically at Bletchley Park, UK, can be seen as a big first step towards addressing the technology's potential risks and safety concerns. The summit had three major outcomes: a multilateral agreement by tech companies to collaborate with governments on testing advanced AI models, an international declaration addressing the risks associated with AI, and the United Nations confirming its support for the creation of an expert AI panel akin to the Intergovernmental Panel on Climate Change. While the agreements made good headlines, a lot will depend on how things move from here. Founders and chief executives of large tech companies, for instance, were unable to reach a consensus on the severity of the long-term risks posed by AI. Most of them, though, agreed on the immediate short-term risks, such as the malicious use of generative AI to influence electoral outcomes. Fears of algorithmic bias and a glut of misinformation have become overwhelming concerns across countries.

It is thus not surprising that countries are scrambling to regulate AI. The US has made it amply clear that it is the hard power in AI, given its commercial and political strength in the matter, and has announced the establishment of an AI safety institute. The European Union has solidified its position as the leading authority in introducing regulatory mechanisms for AI. But it may be too early to arrive at appropriate regulatory measures that can rein in AI, partly because the field itself is developing at an astonishingly rapid pace. Besides, even if regulations are imposed on Big Tech companies, it may not be possible to limit individual software developers and their use of AI for purposes that are not well known at the moment. Countless programmers are using freely available AI software. Moreover, given the immense international spillovers that the use of AI can generate, it is not clear how the world can reach a consensus on the matter. There were 28 signatories to the Bletchley Declaration, but that does not say much about where the rest of the world stands on the issue. Countries, regardless of their size, will have equal rights to develop and use AI. It is also worth noting that the world has not been able to effectively curb the ills of social media, which is dominated by a handful of tech giants.

However, despite these challenges, it is encouraging that a global discussion on these issues has begun, and there is hope that some of the potential ill effects of AI will be contained. It was also heartening to see the US and China on the same side of the issue. In terms of future development, India has taken an appropriate stand: AI platforms need to be safe and trusted, and they should be utilised for progress. India stands to gain from global standards because it has a large pool of tech professionals who can help develop solutions for the world. While there are risks that the development of AI could affect certain kinds of jobs, progress in the right direction can help solve a number of development challenges. Thus, developing global standards and regulations will be critical, but not easy.
