
Regulating AI

Guidelines will need to evolve over time

Illustration: Binay Sinha
Business Standard Editorial Comment
Last Updated: Jun 08 2023 | 10:18 PM IST
Nasscom, the IT industry chamber, has released guidelines for the responsible use of generative artificial intelligence (AI). These draft guidelines are the result of consultations with a multi-disciplinary group of AI experts, with representation from academia and civil society. They should help define frameworks and serve as common standards for researching, developing, and using GenAI responsibly. This is part of a global effort to review and develop standards for this fast-developing sector. Even the chief executive officer of OpenAI, which developed the GPT series of models that triggered the current explosion of GenAI, agrees that regulation is desirable. The guidelines define GenAI as a type of AI technology that can create artefacts such as images, text, audio, video, and various forms of multi-modal content. Even this broad definition may need to be modified as AI rapidly develops entirely new capabilities.

The advent of open-source AI models has already presented novel problems over the past six months. While AI promises to help solve many intractable problems, to impart new levels of efficiency to all sorts of activities, and to drive research, it also threatens to irrevocably alter employment patterns by creating entirely new jobs and rendering established ones irrelevant. Widespread AI penetration presents new ethical issues and legal conundrums. While AI is a powerful tool for data analysis, it inherits whatever biases exist in its training data, and it may render current privacy norms ineffective. Using it for military or policing purposes could be a double-edged sword: autonomous weapons systems could cause tragic loss of life, and indiscriminate use of facial recognition programmes could destroy privacy and enable repression. Moreover, this is only the beginning. Subsequent iterations will be even more powerful, and dealing with this transformative effect will present an ongoing set of challenges.

The Nasscom guidelines highlight certain obligations for researchers, developers, and users. They are asked to maintain internal oversight throughout the entire lifecycle of a GenAI solution. To promote transparency and accountability, public disclosure of the data and algorithm sources used for modelling, along with other technical details, should be mandatory. Developers should reveal non-proprietary details about the development process, capabilities, and limitations. To preserve privacy, the guidelines call for privacy-preserving norms and standards, the testing of GenAI models in regulated environments, and strict adherence to data protection and intellectual property rules during AI training. Developers are also asked to make public disclosures about the values, goals, and motivations of research projects, and to describe methodologies, model training datasets, and tools.

Setting up audits for norms and standards in research data collection, processing, and usage, and conducting safety testing of GenAI models in regulated environments, is also advised. So is auditing for harmful bias and, if necessary, deploying protocols and measures to mitigate it. Nasscom urges developers to publish research findings in open-source formats wherever possible, to democratise the framing of new problems and to foster collective inquiry into potential risks and benefits. Some experts believe AI could prove as dangerous as nuclear weapons; it could certainly have an enormous effect on societal norms. Guidelines such as these will need constant updating, given that this genie is well and truly out of the bottle.

