The rapid emergence of usable artificial intelligence (AI) software has triggered a widespread debate across the world on its potential impact on the economy and security. Normally the software we use for writing, for instance, shows no independent creativity, though even here we see simple AI-type features that correct spelling mistakes and suggest the next word as you type a message. What we are seeing now goes well beyond this: language models designed to understand the question posed and, by drawing on material available on the internet, compose an answer. AI can also be used to fabricate voices and graphics.
ChatGPT is the language software that has spread at lightning speed since its release late last year and has provoked the current concerns about AI. I asked ChatGPT about the employment impact of AI and got a balanced answer: it pointed to the potential to automate routine and repetitive tasks, such as data entry, assembly line work, and customer service, and to the ability of generative AI, with its creative capacity, to automate certain tasks in creative industries such as graphic design, advertising, and content creation. It mentioned widening income inequality, with low- and medium-skill workers left behind, but also pointed to the potential to create new jobs for workers with specialised skills in areas such as machine learning, data analysis, and software development.
Technological developments, particularly the more revolutionary ones, always displace existing jobs but also create new ones, as motor cars did when they replaced animal-drawn carriages. In India, generative AI with a substantial creative capacity will clearly affect current employment levels in software and business-processing services adversely, because the services these industries sell can easily be done by AI-based programmes. But doing this will still require some service support to prompt the more specific uses of AI, particularly when it is used for software development. This is the capacity that our IT and BPO companies must develop. Incidentally, AI also provides a tool for less proficient workers to narrow the gap with more able ones. For example, a worker with poor English-language skills could use AI to become as useful as an English-educated worker in, say, marketing or any activity that requires English-language communication.
However, the debate now is more about the security implications of AI rather than its employment impact. The debate has become more acrimonious with the emergence of generative AI, which refers to AI models that are capable of creating new content, such as images, videos, and text, on their own, without explicit instructions from humans.
Yuval Noah Harari, an influential writer, has argued that language is the basis of human culture, and that an AI system highly proficient in language could take over the development of culture and beliefs. But AI software lacks self-consciousness and ambition. A more accurate statement is that it can be used by some humans to deepen their influence on others. One must also note that open-source AI apps provide access to many humans, and that would help to democratise the impact on culture.
The real challenge is to prevent the perverse use of AI, which, according to a response from ChatGPT, includes AI-powered surveillance systems that could infringe on privacy rights, AI-automated cyberattacks designed to evade traditional security, and AI-powered weapons that autonomously target and attack humans. AI can also deepen the impact of fake news and of hacking to cheat and defraud, something we already have to live with. What makes this worse is that the ability to reproduce voices can lead to fraudulent calls, for instance, from someone posing as a family member asking for money to be sent immediately to someone or somewhere.
Can generative AI become a malign force without human intervention? Take the case of a drone controlled through software. A drone programmed to attack a particular type of target is free to choose a specific target that meets the programmed criteria. But this is not something the drone defines. It is defined by the humans programming the drone.
The programming of neural networks in AI cannot reproduce self-consciousness because we still do not know the neural basis for it. AI software lacks what humans have: the judgement to evaluate alternatives, or the bias to distort a response for some motive beyond the dissemination of truth. Malign and beneficial influences depend on human consciousness and judgement. Hence the misuse of AI will depend on the malignity of specific humans.
The primary debate now is not so much about the employment impact. That impact is not ignored, and it is no accident that Sam Altman of OpenAI, which created ChatGPT, believes in the principle of a universal basic income, presumably to make up for the reduction in employment opportunities. But the primary concern now is the fear of widespread misuse of AI, and there is a growing consensus on establishing a regulatory regime for creating and using it. The G7 proposes to set up a working group for this purpose, and the EU is well on the way to formulating regulations.
At one level the debate is about regulating not just the use but the basic design of AI apps to control the risks of misuse. If this is done by governments, it could well be used for political aggrandisement. China, for instance, has reportedly introduced controls on AI development. These involve prior official approval of AI apps before they are released and also require them to ensure consistency with the “core values of socialism”! This type of politicisation must be avoided. The internet can be misused, but it remains the strongest base of our freedom and democracy.
Where should India move on AI? We must recognise that this is the technology of the future in the way computerisation was several decades ago, and we must work to build local competence to develop and use AI. Since the foundation of the main part of AI is a language model, this poses a major challenge for India with its multiple languages. In addition, the provision of AI services requires very large server systems, particularly for those that provide AI for graphics, and that creates a bias towards large companies as AI providers. This is also an international trend, and the sooner we get our infotech industry involved, the better our chances of securing a significant presence in the AI market.
Today computers do many things involving language, mathematical calculation, and graphic design that those of us who were adults before the computer age used to do by hand, slowly and tediously. The computer age liberated us to focus on higher levels of intellectual activity but also exposed us to anonymous frauds. That is what AI will do for our children, and our main task is to prepare them for the huge opportunities AI provides while ensuring that the risks are kept manageable.
desaind@icloud.com