
Gains from LLMs for India

How will large language models impact learning, working, and exporting?

Illustration: Ajay Mohanty
Ajay Shah
5 min read Last Updated : Jun 25 2023 | 10:04 PM IST
Revolutionary gains in word prediction by “large language models” (“LLMs”) like ChatGPT are all around us. What will these do for working and exporting from India? What will these do for the learning process? Gains are available but the landscape needs to be navigated with caution.

There is a lot of amazement when a question is put to ChatGPT and a coherent-sounding answer comes back. In terms of accessing the world of knowledge at your fingertips, this is only as amazing as running a Google search and finding those same things. What’s new is that the pieces are wrapped together in a coherent-sounding text. We are sceptical about whether LLMs are “intelligent” as humans are, but they are indeed useful tools. The interesting questions lie in devising mechanisms which harness their strengths and avoid their pitfalls.

At an intuitive level, it’s useful to think of the LLM as a mediocre worker that has read the entire Internet and, in addition, artlessly makes things up. What is amazing is that this employee has read the entire Internet! But there are three pitfalls. First, most of the text on the Internet is problematic (and this is going to get worse, as the mass of the Internet is about to go up a hundred-fold with LLM-generated content springing up). Second, this employee is mediocre and lacks critical sense in assembling the pieces that have been read, in judging what is sound and what isn’t, and in forming a coherent picture. Third, this employee thinks nothing about making things up and slipping them into the work.

What happens when workers are given these tools? A small literature is springing up that measures the gains. “Generative AI at Work”, by Erik Brynjolfsson, Danielle Li &amp; Lindsey R Raymond, April 2023, did some measurement in the field of customer support. They found gains of 14 per cent for weak employees and no gains for skilled employees. That could work very well for India in two ways. If workers in India are less capable than those seen in a first-world organisation, these tools could give the Indian workforce a leg-up and help close the gap. Within India, there is a very large low-capability workforce. LLMs could help these workers gain enough productivity to be useful.

The big problem with LLMs is that the mistakes in the answers are obvious to an expert but not to a novice. LLMs make things up all the time, and the novice has no way of knowing what parts of the grammatically correct English are in fact false. A complex management system is required, where LLM-assisted juniors produce first drafts and these are viewed with great scepticism by multiple layers of checking, which finds and removes the errors. Such management systems have been built in the software industry, but things are a bit harder for Other Business Services because the nice tool of testing a computer programme — by trying to run it on a computer — is not available for textual work.

Consider an organisation that produces email-based customer support, which uses a multi-layer management system with juniors writing first drafts and experts reviewing and fixing up these drafts. With LLM support, its headcount will go down and its productivity will go up. The minimum bar for recruitment in the Indian labour market might drop, permitting the hiring of workers at lower wages. This will yield improved profit.

ChatGPT is useful in the field of computer programming, for translating a programme from one language into another (which Indian software companies do a lot) and for special situations where prompt engineering suffices to get to a first draft of the code. Just as Indian software services companies created a new process design, in the 1990s and 2000s, for harnessing a large low-end workforce, a new process design is now required, whereby junior coders, experts, and LLMs are brought together to deliver a next level of productivity gain in programming.
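The prompt engineering described above can be sketched in a few lines. The function and prompt template below are illustrative assumptions of mine, not any particular company’s tooling; the draft the LLM returns would still have to pass through the expert-review layers described earlier.

```python
# Illustrative sketch (assumed names, not a real tool's API): assembling a
# structured prompt that asks an LLM to translate a programme from one
# language into another, producing a first draft for expert review.

def build_translation_prompt(source_code: str, src_lang: str, dst_lang: str) -> str:
    """Return a prompt asking for a behaviour-preserving translation."""
    return (
        f"Translate the following {src_lang} programme into idiomatic {dst_lang}.\n"
        "Preserve behaviour exactly; flag any construct with no direct equivalent.\n\n"
        f"```{src_lang.lower()}\n{source_code}\n```"
    )

prompt = build_translation_prompt("print('hello')", "Python", "Go")
print(prompt)
```

The point of such a template is process design, not magic: the prompt pins down the task so that a junior coder can generate a draft, and the review layers then carry the burden of verifying it.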

A great deal of bad programming today is done by searching using Google, and trying to cut and paste answers that someone on the Internet has written. LLMs for programming (including specialised tools for computer programming like Github Copilot) carry this same methodology to the next level. Every organisation that has such behaviours will accumulate “technical debt”, with programmes that appear to work but are only poorly understood, contain subtle flaws, and use bad algorithms. The code built in this fashion will be expensive to maintain and enhance.

A lot of the Indian workforce has poor English and writes bad text in all their communications. I can imagine LLMs helping convert prompts into sound English, and thus improving intra-organisation information flows. But this will run into problems when faced with the need to sceptically check everything that the LLM has written. Some people are in that nice middle zone where they find it difficult to write sound English but are able to read the generated text with precision and spot errors in it. Many others are not, and for them LLMs will work badly. In this application, the top does not need it; a certain group in the middle obtains gains; and the rest lose.

Calculators reduced the need to know how to do long division, smartphones reduced the need to remember phone numbers, and it all worked out fine for the next generation. Will young people growing up in an LLM-soaked world similarly come out just fine on thinking, writing, and programming? I am sceptical about this. You have to be a master to spot the mistakes made by the LLM, and the path to being a master lies in actually knowing. Thinking, writing, programming: All these seem to be the essence of general intelligence, mastered through long decades of iterative refinement and learning by doing. LLMs seem likely to hinder that process.

The writer is a researcher at XKDR Forum


Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper

