
Responsible development

Developments in OpenAI raise concern

Photographer: David Paul Morris/Bloomberg
Business Standard Editorial Comment
3 min read Last Updated : May 29 2024 | 10:37 PM IST
There has been growing concern about the quality of governance at OpenAI. In the past few months, 11 key people have quit, but the trouble began with the failed attempt to oust Chief Executive Officer Sam Altman in November last year, which was followed by a reconstitution of the board. Even as it released GPT-4o, OpenAI was embroiled in a controversy, with actor Scarlett Johansson claiming that her voice was cloned despite her refusal to give permission. The company has also been sued by The New York Times for copyright violation. Two former board members have written a widely circulated essay explaining why they believe OpenAI’s mission to develop artificial intelligence (AI) responsibly has failed; one of them recently said Mr Altman had withheld information and misrepresented facts. Since ChatGPT was publicly released in November 2022, it has turned the field of generative AI upside down. The release sparked fierce competition: several rivals have launched their own generative AI (GenAI) programmes, and a multitude of applications has been built on those platforms. The competition has, however, led to considerations of safety and responsible development being superseded by commercial concerns.

This is contrary to the stated mission of OpenAI, which is to ensure that AI benefits all of humanity. OpenAI itself is a not-for-profit company, but it hived off the commercial side of ChatGPT into a for-profit subsidiary, which was valued at above $80 billion in February 2024. When the move to oust Mr Altman was initiated, one of the reasons cited was that he had not been “candid” with the board. But the board itself was reconstituted, and Mr Altman’s vision of monetising ChatGPT seems clearly to have taken priority since. The exodus of many highly skilled workers, OpenAI co-founder Ilya Sutskever among them, may reflect, among other things, disagreement with the company’s direction. Corporate upheavals are normal in Silicon Valley, and companies changing tack after boardroom struggles or personnel reshuffles are hardly unusual. But it is the nature of generative AI, OpenAI’s core business, that causes concern about the potential harms that could arise from a lack of governance.

In the 18 months since ChatGPT was first released, some of the potential harms AI can cause are already visible, alongside the potential benefits it may bring. At an enterprise level, AI can automate a wide range of functions and create entirely new revenue streams. It can help solve intractable scientific problems and develop new materials with exotic properties. But AI can also clone voices and create avatars that fool face-recognition and voice-recognition security systems. It is already being misused to run scams and spread false political messages. Authoritarian regimes can misuse the same abilities to target dissidents, even as corporations use them to identify pizza-topping preferences. Another concerning trend is GenAI’s propensity for throwing up fictional “facts” and fake citations in response to search prompts. As dependence on the near-magical capabilities of GenAI increases, the need for more responsible development and deployment also grows. The governance void at OpenAI is therefore more concerning than it would be in a traditional company. Policymakers will have to look for effective ways to rein in the harmful potential of AI without retarding its beneficial aspects.

