The year 2024 will be the year of AI at scale, and for India this means an accelerated pace of development for the country, believe industry leaders.
A panel discussion on ‘How India can ride the AI wave to become a global powerhouse’ at Business Standard’s two-day summit, BS Manthan, highlighted the opportunities and challenges facing the country as it strives to become a developed nation by 2047.
“The power of AI can contribute about $400-500 billion to India’s GDP by 2025. It promises accelerated economic growth opportunities,” said Irina Ghose, managing director, Microsoft India, speaking at the event.
Ghose said this confidence stems from the fact that businesses are looking at AI adoption and infusion in a big way. “The other aspect is return on investment (RoI). The deployment time of AI tools has reduced significantly. It is taking less than 12 months to go for a full-fledged deployment, and in about 14 months people are able to see RoI,” she said. “For a dollar spent, you’re getting about a three-and-a-half times return,” she added.
A recent EY study indicates that in 2029-30 alone, GenAI has the potential to contribute an additional $359-438 billion to India’s GDP.
For Ravi Jain, head of strategy, Krutrim, it is an exciting time for the country in terms of AI. He believes India needs to invest in creating its own AI technology. “We cannot be a developed nation on borrowed technology,” he said at the event.
Talking about Krutrim, India’s first AI unicorn, Jain said the chatbot’s success lies in its India-centric approach, with its model contextualised in local Indian languages.
Speaking on the panel, Jain said, “Valuation is one indicator of possible success, not success itself. But I think what is unique about the approach that we are taking to AI is its India-centricity.”
He further said, “We want to solve how to make AI accessible to all Indians by working on various elements, and training models with Indian context and languages is one of these. The other key element is to bring the cost structure down.”
Amith Singhee of IBM sounded a note of caution. “There is also a lot of caution around getting something wrong, especially in terms of privacy and compliance requirements, among others. Three years later, I don’t want to have an issue where I really don’t know what went into my AI training pipeline, and then I’m hunting for that,” said Singhee.
Biases and cost imperative
The Indian government has in the past issued multiple advisories asking AI platforms and intermediaries to address biases and ensure proper training of models before public deployment. The upcoming Digital India Act is also expected to include regulations governing the technology.
Speaking at BS Manthan, Balaraman Ravindran, head, Centre for Responsible AI, IIT Madras, and head of Wadhwani School of Data Science and AI, IIT Madras, underlined the significant costs that enterprises have to incur when deploying AI tools. “Right now, at the current levels at which the capable AI systems operate, we cannot have penetration in India. It’s too expensive,” he said.
Ravindran shared instances of companies that initially adopted AI for their customer service support but later reverted to human agents due to cost considerations. “We can talk about what will happen in the future, but before that we have a lot of things that need to be solved,” he added.
To reduce biases and increase AI application across sectors, the panellists stressed the need to create more localised data sets.
Talking about this, Ghose said, “When you look at the nuances of healthcare, those are different from agriculture. In agriculture, the language spoken in Bihar is different from what is getting spoken in Uttar Pradesh. For this, you need data sets to be created.”
To bring down the costs associated with AI, Ghose suggested that, apart from large language models (LLMs), small language models can be deployed, which can give responses with comparable efficiency and accuracy.
Ravindran of IIT Madras raised a related concern. He questioned whether individuals in India understand the different language nuances well enough to teach an AI model so that it can give the right feedback. “When it comes to responsibility, fairness and ethics, in India there are too many nuances. We do not have the resources to even know what kind of systematised biases exist in India,” he said.
He also addressed the dilemma of AI impacting jobs or replacing people at work. “In India there are many sectors where we do not have enough skilled people. We do not have enough teachers, professionals in critical sectors. AI can step in to fill this gap. We are already seeing this happen in the education sector,” he said.
Tech under scrutiny
The experts on the panel also pointed to a shift of technology innovation out of labs and design centres into open communities. Speaking on this transition, Amith Singhee, director, IBM Research, said, “If you look 20-30 years ago, a lot of technology innovation would happen within the four walls of an industry lab, an R&D centre or some academic institution. But in the past 10-15 years, AI for sure, but even things like cloud computing and everything that’s happening, is in the open, in communities, where daily testing happens, involving proving and rejection (of ideas).”
Artificial intelligence technologies around the world have come under active scrutiny from regulatory bodies and governments concerned with user safety. Recently, the European Parliament passed a comprehensive AI Act with the aim to “promote the uptake of human-centric and trustworthy artificial intelligence.”
Experts on the panel agreed that the technology raises significant safety and trust concerns, and that tackling them requires innovation in AI safety. They said the industry needs extensive collaboration at the global level to ensure safety and trust standards are consistent across levels and geographies.
“To ensure AI safety, we need the communities of all the right stakeholders to come together and make certain decisions. Tools (for ensuring safe AI) will exist, but unless decision makers agree on certain decisions on how to use the tools, they will just be tools,” said Singhee from IBM.
He said that companies have an opportunity to create communities beyond their direct sphere of influence to share best practices on AI-related issues.
Making AI responsible
Ankur Puri, partner, McKinsey & Company, proposed a three-step approach to ensuring responsible AI.
Puri said the first step towards ensuring safe AI is building awareness, especially among senior leadership. “Often I find people in senior leadership use the same words but mean different things. So can we have a common understanding, because you are probably going to talk about this in the building, and it’s useful if you can all have that common language,” he said.
Puri urged stakeholders to consider the strategic implications of AI, and acknowledge its current capabilities versus future potential. “Reflect on how it could impact your role as a publication house, educational institution, innovator, or technology producer,” he added.
The hour-long session with deliberations on AI ended with the panellists issuing a call to action.
While Ravindran called for ensuring that a large fraction of the population has a better understanding of the role of AI and its impact, Ravi Jain of Krutrim urged for investing in deep technologies.
Puri said, “I would think about a few areas of social impact that are at scale and critical, and say: how do we set up accelerators, labs, the infrastructure, and the talent supply to make a real dent in those problems? That is my call to action.”
Further, Singhee called for the country to get into mission mode around AI and urged that the technology be used in delivering public services. Microsoft’s Ghose urged citizens to use technology and AI to ignite their creative side.