The UK has about three years to come up with rules to reassure people about artificial intelligence before wrenching changes in the way work is done unleash the potential for social unrest, an industry expert said.
Martin Weis, managing partner and global co-lead for AI at Infosys Consulting, said about 30% of the hours people put in on the job in places like the US and Britain could be done by technology by 2030. Back-office and administrative tasks are most likely to be affected, threatening to upend millions of jobs.
The remarks ahead of Prime Minister Rishi Sunak’s AI Summit next week underscore the stakes for policy makers facing a rapid shift in the nature of work — and what kind of skills the labor market will need.
Weis warned that governments must quickly grasp the shake-up — or risk angering thousands of employees who are affected. That could involve investing in education and re-skilling, legislating to minimize the impact of AI, or even providing a basic income to help those who lose their jobs.
“I would say we have two to three years, because now people are trying it out,” Weis said in an interview, referring to generative AI such as OpenAI Inc.’s ChatGPT. Unless governments plan ahead, he added, the profits from productivity gains brought about by AI will fall into the hands of a few private sector firms — while the average worker could be left jobless and local economies poorer.
His warning comes ahead of Sunak’s AI Summit, which will see the UK host a range of global leaders in the sector to discuss the opportunities and risks of the technology. For Sunak, whose father-in-law Narayana Murthy founded Infosys, it’s a chance to set out the UK’s stall as a forward-thinking, technology-centered economy, one of the ways he plans to boost output growth.
Ahead of the summit, the Institute for Public Policy Research published a report Wednesday claiming the gathering could be a “missed opportunity” as it focuses on self-regulation.
Echoing Weis, the think-tank said that “deep global cooperation” is needed and that governments “need to set out their own bold strategy for AI,” ensuring that the technology is used for public good as well as profit.
“Self-regulation didn’t work for social media companies,” said Carsten Jung, senior economist at the IPPR. “It didn’t work for the finance sector, and it won’t work for AI. We shouldn’t just passively anticipate technological developments and hope for the best.”
Just last week, the UK’s Institute of Directors, a business lobby group, said the government “must not delay in legislating for AI.” It criticized Sunak for adopting a “wait-and-see” approach, though added that it was broadly supportive of the ideas the government published in a White Paper earlier this year on the principles underlying AI regulation.
In a survey of the IoD’s members, 51% of business leaders said AI represented an opportunity for them. But only 8% had AI governance structures in place at board level to examine the use of AI in the business or across supply chains, and 60% either lacked knowledge at board level or had failed to consider the risks and opportunities.
Companies such as major banks will soon start looking at how to use generative AI to improve productivity, Weis said. Some investment firms, such as Schroders, have already started using the technology to perform speedy analysis of asset classes. Banks will use it for everything from writing emails to parsing customer inquiries, he said.
Those employers will then “look into how much work can be done out of a location like the UK, and compare this then to an offshore location like India or the Philippines” where the cost of employing staff is lower.
In a boost for Sunak, Weis said the UK is ahead of the curve on AI compared with some European peers. Infosys is already helping some local councils automate back-office tasks, and “the openness to look into it and to get the heads around regulation, adoption, training, upskilling, democratizing the use of AI is really there.”