The boardroom battle at OpenAI arose from ideological differences exacerbated by an unusual corporate structure. Artificial intelligence (AI) should be developed carefully for the benefit of humanity. That is the stated aim of the not-for-profit OpenAI. Yet its subsidiary generates over $1 billion in revenue from ChatGPT and is valued at over $80 billion. Its commercial value is undeniable, and Sam Altman, the sacked and reinstated chief executive officer, has pushed hard to monetise ChatGPT, most recently by adding plugins that allow easy customisation. In competitive terms, Mr Altman is right. ChatGPT is not far ahead of rivals such as Google's Bard, Meta's Llama, and xAI's Grok, and it must maintain momentum to stay ahead. It is also understandable that OpenAI's 700-plus employees would like to cash in on their skills, which they could do if the subsidiary were listed and they received stock options. Quite apart from being an inspirational leader, Mr Altman is pursuing a strategy that is entirely normal in Silicon Valley.
This explains the revolt against the board of OpenAI when news of Mr Altman's sacking broke. An open letter demanding the board's resignation and Mr Altman's reinstatement was signed by almost every employee. This paved the way for Mr Altman's return, after a very brief hiatus, alongside an entirely new board. The speed at which Microsoft decided to set up a new AI vertical with Mr Altman in charge suggests it would have been prepared to absorb the OpenAI team as a package deal. The competitive imperatives and the profit motive are both understandable. Thus, the charter of OpenAI is likely to become irrelevant. This, however, leads to broader concerns. With multiple research teams pushing commercial generative AI, everyone is in a hurry, and the altruistic ideal of slower, considered development to avert possible dangers will fall by the wayside.
Elon Musk, who has just released Grok, has said many times that AI represents a big threat. Mr Altman himself has said AI could lead to the extinction of the human race. However, the pace of competitive development in generative AI has now accelerated to the point where such considerations have become secondary. Regulators will always be trying to catch up, given the speed and breadth of what AI can do. The dangers of AI are indeed manifold. Autonomous weapon systems that can identify targets with pinpoint precision already exist. Israel's Iron Dome, for instance, is autonomous because no human could react fast enough to knock out incoming rockets. Surveillance systems that combine face recognition with communication monitoring have made the task of authoritarian nations much easier. Criminals can use the cloning abilities of AI to generate realistic audio-visual content for scams.
AI can also be an enormous force for good. It is a game changer at computationally hard tasks such as drug discovery and managing nuclear power plants. It can improve the management of telecom networks, hybrid power grids, road and air traffic, and mining practices, among other things. The possibilities are enormous, and India stands to benefit: according to Nasscom, AI-related activity could add $450-500 billion to India's output within just two or three years. Given the potential and the risks associated with AI, boardrooms and regulators will have to make numerous adjustments.