Business Standard


The paradox at the heart of Elon Musk's lawsuit against OpenAI

Most leaders of AI companies claim that not only is AGI possible to build, but also that it is imminent

Sam Altman Photographer: SeongJoon Cho/Bloomberg

NYT San Francisco
By Kevin Roose

It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Musk sued OpenAI this week, accusing the company of breaching the terms of its founding agreement and violating its founding principles. In his telling, OpenAI was established as a nonprofit that would build powerful AI systems for the good of humanity and give its research away freely to the public. But Musk argues that OpenAI broke that promise by starting a for-profit subsidiary that took on billions of dollars in investments from Microsoft.

But amid all of the animus, there’s a point that is worth drawing out, because it illustrates a paradox that is at the heart of much of today’s AI conversation — and a place where OpenAI really has been talking out of both sides of its mouth, insisting both that its AI systems are incredibly powerful and that they are nowhere near matching human intelligence.

The claim centres on a term known as "artificial general intelligence," or AGI. Defining what constitutes AGI is notoriously tricky, although most people would agree that it means an AI system that can do most or all things that the human brain can do. Sam Altman has defined AGI as "the equivalent of a median human that you could hire as a co-worker," while OpenAI itself defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work."

Most leaders of AI companies claim that not only is AGI possible to build, but also that it is imminent. Building AGI is OpenAI’s explicit goal, and it has lots of reasons to want to get there before anyone else. But AGI could also be dangerous if it’s able to outsmart humans, or if it becomes deceptive or misaligned with human values. 

The people who started OpenAI, including Musk, worried that an AGI would be too powerful to be owned by a single entity, and that if they ever got close to building one, they’d need to change the control structure around it, to prevent it from doing harm or concentrating too much wealth and power in a single company’s hands. Which is why, when OpenAI entered into a partnership with Microsoft, it specifically gave the tech giant a license that applied only to “pre-AGI” technologies.

Most AI commentators believe that today’s cutting-edge AI models do not qualify as AGI, because they lack sophisticated reasoning skills and frequently make bone-headed errors.

But in his legal filing, Musk makes an unusual argument. He argues that OpenAI has already achieved AGI with its GPT-4 language model, which was released last year, and that future technology from the company will even more clearly qualify as AGI.

“On information and belief, GPT-4 is an AGI algorithm, and hence expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint reads.

What Musk is arguing here is a little complicated. Basically, he’s saying that because it has achieved AGI with GPT-4, OpenAI is no longer allowed to license it to Microsoft, and that its board is required to make the technology and research more freely available.

His complaint cites the now-infamous “Sparks of AGI” paper published last year by a Microsoft research team, which argued that GPT-4 demonstrated early hints of general intelligence, among them signs of human-level reasoning.

But the complaint also notes that OpenAI’s board is unlikely to decide that its AI systems actually qualify as AGI, because as soon as it did, it would have to make big changes to the way it deploys and profits from the technology.

Moreover, he notes that Microsoft, which now has a nonvoting observer seat on OpenAI’s board after an upheaval last year that resulted in Altman’s temporary firing, has a strong incentive to deny that OpenAI’s technology qualifies as AGI, since such a finding would end its license to use that technology in its products and jeopardize potentially huge profits.

“Given Microsoft’s enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.’s new captured, conflicted and compliant board will have every reason to delay ever making a finding that OpenAI has attained AGI,” the complaint reads. “To the contrary, OpenAI’s attainment of AGI, like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”

Given his litigious track record, it’s easy to question Musk’s motives here. And as the head of a competing AI start-up, it’s not surprising that he’d want to tie up OpenAI in messy litigation. But his lawsuit points to a real conundrum for OpenAI.

Like its competitors, OpenAI badly wants to be seen as a leader in the race to build AGI, and it has a vested interest in convincing investors, business partners and the public that its systems are improving at breakneck pace.


©2024 The New York Times News Service

First Published: Mar 04 2024 | 12:38 AM IST
