Global tech leaders at the Nasscom leadership summit cautioned industries to use artificial intelligence (AI) responsibly, especially in handling the inherent bias in the data used to train AI and machine learning (ML) algorithms, as businesses seek to deploy the technology across the board.
While the importance of AI and the disruption it can bring was a major area of discussion at the ongoing flagship event by the industry body, technology evangelists and industry leaders repeatedly stressed the need for responsible use of the technology. One major concern they raised is bias creeping into AI output because ML algorithms are trained on human-generated historical data, which is itself riddled with bias.
"Responsible AI makes humans accountable for the proper functioning of the AI system. Fairness is an issue in AI and some people do say that AI is discriminatory or biased. The way humans use data and the algorithms is what creates bias and therefore there is a need to make humans accountable for AI behavior," said Paul Daugherty, chief technology and innovation officer at Accenture. It is the responsibility of the humans to learn and understand when to apply specific AI techniques to control inherent bias from data creeping in, he added.
With the data, internet and smartphone boom that India is witnessing, organisations across the board are introducing AI-led frontend or backend interfaces to cater to their business needs in one way or another. This has also sparked concerns that not all AI implementers are fully aware of the risks of placing blind faith in AI results.
Last week, global technology research firm Gartner said that, much like humans, AI is intrinsically biased in one way or another. “Today, there is no way to completely banish bias. However, we have to try to reduce it to a minimum,” said Alexander Linden, vice-president of research at Gartner. “In addition to technological solutions, such as diverse datasets, it is also crucial to ensure diversity in the teams working with the AI, and have team members review each other’s work. This simple process can significantly reduce selection and confirmation bias,” he added.
Microsoft’s whitepaper titled ‘Age of Intelligence’, launched during the forum, outlined the challenges and opportunities of AI and how appropriate government policies and technological advancements, including growing internet penetration and connectivity, can help reap the benefits of digital transformation in the years to come. The paper highlighted that, to ensure fairness while building AI systems, developers should be sensitive to situations where societal or other biases may be incorporated into training data or algorithms.
“AI has been in actual use only fairly recently and the industry is just realising that there needs to be trust in the system,” said Dr. Rohini Srivathsa, National Technology Officer at Microsoft. “Even in the real world, we operate based on certain norms and laws, and the same thing is happening in AI.”
Microsoft has put forward a framework for ethics in AI which includes ‘fairness’ as one of its principles. The Redmond-headquartered firm, like many other industry leaders, is advocating the incorporation of these principles into AI systems before they are put to use, to ensure responsible use of the technology.