The clamour to get attention and be noticed as experts in the absence of proof points is causing people to question the credibility of all the applications AI promises
Digital technologies like social media and AI were supposed to deliver a sustained dividend that would make human lives richer.
Instead, ‘Big Tech’ is increasingly seen as the villain whose ubiquitous social platforms now divide people and target vulnerable communities rather than bring them together.

Now it is AI’s turn, after Princeton professor Arvind Narayanan last week threw shade on the evolving use of artificial intelligence across sectors and called out most AI firms for peddling “snake oil”, a 19th-century American term for the deceptive marketing of miracle elixirs.
His key takeaway: AI excels at some tasks but cannot predict social outcomes, and commercial interests (read Big Tech and start-ups) are trying to obscure this fact. More importantly, he showed that in many use cases, manual techniques were more accurate and transparent, and worth continuing with.
According to Narayanan, AI models that try to predict social outcomes (criminal recidivism, job performance, predictive policing, terrorist risk, at-risk children) are fundamentally dubious. Can one really tell whether someone is a terrorist or a poor job applicant by screening faces or phone messages for 10 minutes? More worrying still, the ethical concerns about the use of these technologies are further amplified by inaccuracies in the output of such AI-based software.
That’s a tough call on an emerging technology on which global spending is rising 44 per cent year on year and is forecast to reach $35.8 billion in 2019, according to IDC. This is a growing reputational problem for AI, as the ecosystem feeding on the budgets allocated to this important technology (start-ups, Big Tech, the Big Six, VCs) looks to grow exponentially by seducing its business customers with the promise of bringing futuristic concepts from Hollywood films to life.
AI’s crisis of credibility over what it can actually deliver is reaching the stage of peak PR. Narayanan says commercial interests are trying to obscure the fact that many AI firms are only selling the digital equivalent of snake oil. Thus, those selling AI solutions are having trouble going beyond thought pieces and customer surveys that highlight the bleeding obvious: AI promises to change our lives forever, taking care of our mundane tasks and freeing up our time to be more creative and use our intellect to drive value.
Other advisors are cleverly muddying the waters by expanding definitions of AI to include everything from low-end RPA (robotic process automation) to higher-end intelligent robotics.
Despite this, there is no dearth of AI in our lives. As consumers, we are used to AI-based algorithms that recommend what (more) we should buy (“Customers who bought this item also bought”) or what to watch next on Netflix. AI is good business, when it works. Netflix says AI-based recommendations save the company about $1 billion each year. More importantly, three-fourths of what customers watch comes from those algorithmic recommendations. We are also at home with AI-based facial recognition on our phones and the speech-to-text software embedded in search engines and communication platforms.
The trouble is that the money lies elsewhere. AI continues to improve rapidly and scale up in some areas, such as remote medical diagnosis from scans, automating customer service through chatbots, and helping spot “deepfakes”. But in mature markets, AI diagnosis of medical scans would mainly reduce patient costs, something the bloated western medical establishment will not encourage.
In India, AI applications like remote medical diagnosis can have far more impact, because they can widen access to crucial medical services for those shut out by distance or cost. N Chandrasekaran, the Tata Sons chairman, estimates in his new book, Bridgital Nation, that such AI-based models can be used in the Indian economic context to increase access and generate jobs: potentially 30 million new jobs by 2025, and with them access to basic services for millions of Indians. From healthcare and education to courts and governance, AI-based models can be applied to improve access and create new jobs.
However, the money lies not in democratising health care in emerging economies but in designing predictive outcome models, where the AI technology is the weakest and the PR noise the loudest. This clamour to get attention and be noticed as experts in the absence of proof points is causing people to question the credibility of all the applications AI promises.
While two-thirds of 2019 global spending is going to ‘safe’ use cases such as automated customer service agents ($4.5 billion) and sales recommendation and automation ($2.7 billion), the other third is being spent on less proven solutions for predicting social outcomes, such as automated threat intelligence and prevention systems ($2.7 billion), according to IDC.
Therein lies the rub. To restore its reputation and chase higher valuations, AI firms and the surrounding ‘sales’ ecosystem have to find new success stories that defeat the ‘snake oil’ smear. Till then, chill and Netflix! At least AI's got that right.
The writer is a communications professional