OpenAI storm: Flawed board or star syndrome?

The firing and subsequent reinstatement of its CEO show that private sector boards have limited scope to address broader social concerns

T T Ram Mohan
6 min read Last Updated : Dec 07 2023 | 9:36 PM IST
The board fires a CEO. The principal investor reinstates the CEO, who then fires the board. So, who is accountable to whom? Recent events at OpenAI, a company at the cutting edge of technology, are a story of corporate governance gone horribly wrong. 

Most commentators seem to think that the problem was the flawed structure of OpenAI, one that combined profit and non-profit entities. OpenAI was set up as a non-profit to foster research in artificial intelligence (AI) in ways that are safe and would benefit humanity. Such research needs lots of capital, which wasn’t readily forthcoming. So, OpenAI created for-profit subsidiaries to attract capital and generate surpluses that could go to the parent after investors had been paid their share.

This structure, the argument goes, created a conflict at OpenAI that was unresolvable. Commercialising AI requires developing products that the market will lap up. This objective was bound to come into conflict with concerns about safety and the larger interests of humanity.

Is this true? Corporate organisations are said to require a laser-like focus on maximising shareholder wealth. However, in recent years, the expectation has also sprung up that they should be socially responsible in various ways. In India, companies are required to set aside a portion of profits for “corporate social responsibility”. Sustainability in relation to climate change is very much in. Corporate boards today do have multiple objectives that need to be balanced. Likewise, the board of OpenAI could very well have found ways to balance its core objectives as a non-profit with commercial objectives.

Some have noted that a greatly truncated board was part of the problem. OpenAI had nine board members to start with. Over the years, three members departed, reducing the board to six, including the chairman and the CEO. The decision to fire the CEO was taken by just four members, one of whom was the chief scientist at OpenAI. (He subsequently regretted his decision.) Yes, a larger board would have meant a larger pool of expertise. But there is no law against a smaller board acting sensibly.

The problem was not the size of the board but its composition. Microsoft, the leading investor, was not represented on the board. The omission was intended to ensure that the non-profit entity was free from commercial influence.

That was a big mistake. It is delusional to suppose that a non-profit entity does not require a commercial orientation. It may be non-commercial in its objectives, but the funding side is impossible to ignore. Microsoft should have been given a seat on the board so that its input was always taken into account.

Whether the entity is a for-profit or not, it is unwise to exclude the principal investor from the board, especially where the number of investors is relatively small. The principal investor has skin in the game and is accountable for results in a way in which directors with no stakes whatsoever are not. That is the reason the government in India needs to have its representatives on the boards of public sector undertakings (PSUs). The idea of having PSU boards entirely run by professionals whose appointment is distanced from the government has been and will remain a non-starter.

Had Microsoft been present on the board, the board would have been better able to judge whether firing the CEO was sensible. The four board members who fired Sam Altman showed courage uncommon in the boardroom. However, since the decision potentially involved OpenAI shutting down, they made a mistake in not consulting Microsoft in the matter.

OpenAI faced the prospect of shutting down because 90 per cent of the employees threatened to leave after he was ousted. At other times, boards face the prospect of investors fleeing if the CEO leaves. That raises the question: How do boards deal with CEOs who happen to be regarded as “stars”? Is it possible at all for a board to uphold a higher principle when dealing with a star CEO? 

The evidence is that boards shrink from challenging stars more often than not. What board would want to risk derailing performance and cheesing off investors? Yet the long-term record of stars suggests that boards are mistaken in playing safe. CEOs become stars when they produce abnormal returns. Very often, abnormal returns accrue from taking excessive risk, as in the case of Royal Bank of Scotland and Lehman Brothers. Or they could simply be the result of outright fraud, as in the case of Theranos. Or the returns turn out to be based on premises that eventually prove to have been faulty, as in the case of GE.

GE’s Jack Welch was an iconic figure in his lifetime. “Lessons in leadership from Welch” became mandatory reading for students of management. The Welch legacy — ruthless downsizing, venturing into unrelated areas, putting the bottom line ahead of engineering — has since come to be seriously questioned. The “lesson in leadership” may well be that boards need to be wary of the star syndrome. Where abnormal returns are the result of outstanding innovation, there may be a case for indulging a star. Not otherwise.

One antidote to the star syndrome is succession planning. Where a CEO tends to be domineering in the boardroom, it may be time to show him the door. But that can happen only if a successor is in place. The board of OpenAI seems to have given little thought to this aspect when it chose to fire Mr Altman. It appointed an interim successor and then scrambled to announce another. Boards are so much in thrall to star CEOs that they begin to regard them as indispensable. That is folly. If the board buys into the notion that after the CEO is the deluge, the deluge is almost certain to follow.

Dysfunctional boards often explain poor corporate outcomes. However, where larger issues of social concern are involved, there are limits to what the best of private sector boards can accomplish. Artificial general intelligence, as many have pointed out, may be too important to leave to corporate boards. Top-down regulation is inescapable.

Besides, great innovation has often arisen from state funding, not private funding, a point that economist Mariana Mazzucato has been hammering away at. Google’s search algorithm, Elon Musk’s well-known companies, the technologies that went into the iPhone — all these and more were driven by state funding. State funding and regulation may well be the answer to some of the fundamental issues raised by the storm at OpenAI.

 ttrammohan28@gmail.com

Disclaimer: These are personal views of the writer. They do not necessarily reflect the opinion of www.business-standard.com or the Business Standard newspaper
