Regulating artificial intelligence (AI) is a topic of global debate. India’s National Association of Software and Services Companies (Nasscom), which represents the country’s IT services industry, recently released guidelines on generative AI amid growing calls for its regulation. Ashish Aggarwal, vice president and head of public policy at Nasscom, spoke to Sourabh Lele about safeguards, data privacy, and regulating the emerging technology.
What was Nasscom’s thinking while drafting the guidelines for generative AI?
Generative AI is scaling up rapidly in terms of adoption. This is the first industry-led initiative to think about common standards and guidelines for minimising the risks that could impact the adoption of the technology itself. Our thinking was to create awareness in the industry and bring companies onto common ground. I think it has succeeded to a large extent.
What were the takeaways from consultations with the industry?
At Nasscom, we have had a programme on AI for some years, and the responsible AI initiative itself is not new. The objective was not to come up with a prescriptive, detailed set of guidelines, but to focus on the key principles. Keeping in mind that companies have different operating models and business models, the focus was also on being flexible enough to let them develop their own detailed internal guidelines. At the same time, we have called out a few things we think are important in terms of transparency and of making sure that impact assessments are done.
These are early days; a lot of this is about creating that conversation and awareness. We are talking to more and more companies, who are coming forward to understand what this means.
The guidelines recommend internal oversight of AI solutions and public disclosure of technical information. Can you give more details?
We have looked at three key roles in generative AI: research, development, and use. Once you start thinking in terms of these roles, there are broadly five principles that have been put out, and they apply to each of these roles. Organisations will need not only to implement their responsible AI guidelines but also to carry out an assessment of the impact. That is the key thing brought out in these guidelines.
The guidelines aim to create greater transparency around information that is not proprietary. In terms of privacy and safeguards related to data, there is already a lot of understanding within the industry, as most companies comply with global regulations on data protection. The highlight here is to implement those practices well in the context of generative AI. The guidelines also encourage chief information security officers (CISOs) and security personnel in enterprises to come together, think about the best practices they should be implementing, and exchange the good practices they have already put in place.
What do you think about the debate over whether generative AI models violate copyright laws?
The question is about copyrighted data being used for training. In this context, the EU (European Union) has taken the position that there should be disclosure when copyrighted data is used for training, and that some reasoning needs to be published. To some extent, transparency around how these systems work is going to be an important element.
But at the same time, when models are trained on particular data and thereafter generate an output, does that output violate the copyright of the data on which they were trained? I think this is still an evolving discussion. At this point, the question to ask is: within our regulatory setup, if somebody has a concern that their copyright has been violated, are there mechanisms for them to raise it? From there, I think, the jurisprudence will evolve and how we deal with it will develop.
There are calls for global cooperation in regulating generative AI. What is Nasscom’s stand?
When you are thinking of regulation, you are primarily trying to focus on harms, not necessarily on a particular technology per se, because newer technologies will keep coming in; how far the technology debate will evolve is still ahead of us. At a basic definitional level, the latest version of the EU’s AI Act is much more compatible with the OECD definition, and also with what has been talked about in the US. Some of those developments are positive.
The Union Cabinet has cleared the Digital Personal Data Protection Bill. What do you expect from it for generative AI?
On personal data, we have taken one more step towards the law. It will be interesting to see how compatible our data protection Bill is with emerging and new technologies, including AI. In Europe, we saw certain challenges emerge as soon as generative models developed. In our analysis, the way our Bill is shaped provides enough room for emerging technology models.
We believe generative AI use cases should be capable of being implemented responsibly under the new Bill, on the data protection side.