ChatGPT was unveiled on November 30, 2022, and caught on like wildfire. It now draws over 100 million users a week. Half a dozen other companies have since released similar AI chatbots to grab a share of the market.
Experts and policymakers have debated the dangers of artificial intelligence (AI). A much-publicised open letter proposed slowing things down with a six-month moratorium on development; it was ignored. In November 2023, the Bletchley Declaration was released as a joint statement from the AI Safety Summit held at Bletchley Park in the UK.
The Summit was attended by many nations, including India; Russia was the only notable absentee among technologically advanced nations. The declaration read in part: “We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.”
The polite way to characterise the Declaration would be as behind the technological curve. A more accurate description would be diplomatic fiction. The signatories are all perfectly aware the Declaration cannot be enforced, and all of them are pushing AI research and development (R&D) as fast as they can.
India has an “AI for all” slogan and is a member of the Global Partnership on Artificial Intelligence (GPAI). The government has multiple AI working groups, which have recommended setting up a three-tier compute infrastructure to increase the country’s number-crunching capacity 15-fold.
Multiple corporate and academic-corporate initiatives are rightly being encouraged. Nvidia has collaborations with Reliance Jio and the Tata Group. IBM has signed three memoranda of understanding with government entities to accelerate AI innovation. The semiconductor mission aims to build high-end chip manufacturing capacity. IIT Madras’ Centre for Responsible AI (CeRAI) will partner with Ericsson on R&D.
The payoffs may be enormous. Nasscom projects that AI could add 12-13 per cent to gross domestic product (GDP), or approximately $450-500 billion, within two or three years. Other countries have similar projections. These figures are calculated on the basis of civilian penetration alone; if defence-related applications are included, AI’s impact would be larger still.
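As a quick back-of-the-envelope check, the percentage and dollar figures are mutually consistent only if one assumes a nominal GDP base of roughly $3.7 trillion, close to India’s current figure (the base is an assumption here; the projection itself does not state one):

```python
# Back-of-the-envelope check of the Nasscom projection.
# The $3.7 trillion nominal GDP base is an assumption, not a figure from the article.
gdp_base_usd = 3.7e12

for share in (0.12, 0.13):
    added_billions = share * gdp_base_usd / 1e9
    print(f"{share:.0%} of GDP ≈ ${added_billions:.0f} billion")

# Prints roughly $444 billion and $481 billion,
# broadly consistent with the quoted $450-500 billion range.
```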
Despite the pious Declaration and all the public misgivings, no corporation and no nation will walk away from that pot of gold, or deliberately slow its R&D, for fear of being left behind. Despite the global slowdown and risk-off attitudes towards other sectors, funding for AI has surged: AI startups worldwide raised over $50 billion between January and September 2023, across 2,000-plus deals. By the end of September, there were some 120 AI unicorns.
The AI revolution is in its early stages, and breakthroughs are likely to compound for several years. If there are concurrent breakthroughs in quantum computing, multiplying processing speeds, we will need to coin new catchwords to describe the potential gains.
By 2030 at the latest, perhaps earlier, all physical infrastructure and related applications will be run by AI: power grids, telecom networks, highway systems, ports, metros, airports, city traffic lights, satellite networks, and municipal water supplies. AI will also be pervasive across healthcare, drug research, law enforcement, retail, financial systems, autonomous vehicles, and more. It will almost completely take over defence applications with the induction of autonomous weapons systems, robotic vehicles, munitions design, and the like.
The payoffs are so large that the dangers will be ignored. We can already see some of the downsides. AI, with its ability to drive facial recognition and cross-reference huge data streams, can give governments 360-degree, 24x7 profiles of all citizens, making dissent against authoritarian regimes more difficult. China and India both seem to be betting on this.
It can power drones that recognise targets and “pull the trigger” without human intervention. By cloning voices and faces, it can deliver authentic-seeming fake news, scam people, or bypass security. It perpetuates biases: algorithms trained on data that reflects existing prejudices may recommend that only males get STEM scholarships, and only upper-caste applicants get bank loans. If AI is used to control nuclear missile systems (it will be, or may already be), it could cause extinction, as some have warned.
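The bias mechanism is easy to demonstrate. Below is a minimal sketch, using entirely synthetic data and hypothetical feature names, of how a standard classifier trained on historically skewed decisions reproduces the skew rather than correcting it:

```python
# Minimal sketch: a model trained on biased historical decisions learns the bias.
# All data is synthetic; "merit" and "group" are hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
merit = rng.normal(size=n)            # applicant merit score
group = rng.integers(0, 2, size=n)    # group membership: 0 or 1
# Historical approvals favoured group 1, regardless of comparable merit.
approved = ((merit > 0) & (group == 1)).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical merit but different group membership:
print(model.predict([[1.0, 0], [1.0, 1]]))  # typically [0 1]: bias learned, not corrected
```

The point is not the particular model: any algorithm that maximises fit to discriminatory historical records will echo that discrimination unless it is explicitly corrected.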
It is unlikely that research into mitigation and responsible use will proceed at the same pace. One can only hope safeguards develop fast enough to prevent catastrophic harm. Welcome to the dystopias imagined by William Gibson and Neal Stephenson.