Multiple media reports indicate OpenAI may have been on the brink of a breakthrough when Sam Altman was sacked as chief executive officer. A new algorithm codenamed "Q*" (pronounced "Q star") was being discussed internally. Some experts say it has the potential to "end existence" because it may actually be capable of abstract reasoning.

Not much is known about the new AI model. Q* may be named thus because it mixes two well-known artificial intelligence (AI) training techniques, "Q-learning" and "A*". It is rumoured to solve mathematical problems up to high-school level with 100 per cent accuracy, a significant advance over previous models such as GPT-4, which scores around 70 per cent on such tasks. Perfect scores on high-school maths could imply Q* is capable of logical reasoning, rather than merely identifying and replicating patterns already seen in its training data.
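Since OpenAI has published nothing about Q*, the name itself is the only technical clue. For readers unfamiliar with the two techniques it appears to allude to, the Python sketch below shows textbook versions of each; it is purely illustrative, and every function name and parameter here is an assumption, not OpenAI's code.

```python
# Textbook sketches of the two techniques "Q*" is rumoured to blend.
# Neither is OpenAI's actual method; names and parameters are illustrative.
import heapq
import itertools
from collections import defaultdict

# --- Q-learning: learn the value of actions from trial and error ---
Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

def q_update(state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """One update step: nudge Q(s, a) toward the observed reward plus
    the discounted value of the best action available afterwards."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

# --- A*: best-first search guided by a heuristic ---
def a_star(start, goal, neighbours, heuristic):
    """Expand the node with the lowest f = g (cost so far) + h
    (heuristic estimate of remaining cost) until the goal is reached."""
    tie = itertools.count()  # tie-breaker so heapq never compares nodes
    frontier = [(heuristic(start), 0, next(tie), start, [start])]
    visited = set()
    while frontier:
        f, g, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in neighbours(node):
            heapq.heappush(frontier, (g + cost + heuristic(nxt), g + cost,
                                      next(tie), nxt, path + [nxt]))
    return None  # no route to the goal
```

Q-learning learns which actions pay off through experience; A* searches efficiently towards a goal using a heuristic. One speculation is that Q* marries learned values with heuristic-guided search, but that remains a rumour.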
If this is true, it would bring the new AI one step closer to what is called artificial general intelligence (AGI): a program capable of reasoning, as well as absorbing, deciphering, and replicating variations on patterns it has noticed. If its power improves in subsequent iterations, AGI could attain the equivalent of very high human intelligence. Right now, most AI is "narrow": algorithms are usually crafted to solve a specific, limited range of tasks. However, new large language models such as ChatGPT and Google's Gemini are already more versatile. Currently, generative AI is good at tasks like writing and language translation. These models work by logging the contextual associations between words and statistically predicting the next likely word. Even when they give correct answers to maths problems or write code, they are still working via statistical association. The ability to solve novel mathematical problems would imply genuine reasoning capabilities.
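To make the contrast concrete, here is a toy next-word predictor: a bigram model that logs which word follows which and samples accordingly. It is nothing like GPT-4's actual architecture, and the corpus and names are invented for illustration, but it captures "statistical association" in its simplest form.

```python
# A toy illustration of "statistically predicting the next likely word".
# A bigram model counts successors in a corpus, then samples from them.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat saw the dog".split()

# Log contextual associations: how often each word follows each word.
# Wrapping around to the first word guarantees every word has a successor.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[word][nxt] += 1

def next_word(word):
    """Pick a successor in proportion to how often it followed `word`."""
    candidates = follows[word]
    return random.choices(list(candidates),
                          weights=list(candidates.values()))[0]

# Generate a short continuation, one statistically likely word at a time.
sentence = ["the"]
for _ in range(5):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # e.g. "the cat sat on the mat"
```

A model like this can emit fluent-looking text without any notion of what the words mean, which is why perfect performance on unseen maths problems would be read as evidence of something beyond pattern matching.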
True AGI, as and when it arrives, would enable autonomous models to solve a vast range of problems and tackle many tasks better than humans can. By definition, AGI should also be able to teach itself to perform new tasks without instructions. It could end up being a scientist or mathematician in its own right, rather than a tool used by human scientists. In some respects, a true AGI model would have to be considered close to self-aware, or conscious, since it may be able to reason about what it itself is. By extension, it may possess traits such as curiosity, self-will, or a desire for self-preservation, all of which are associated with living creatures.
Ensuring such a model is "ethical" and "altruistic" would be difficult: the very definitions of those terms vary from culture to culture. Leaving philosophical questions about the nature of consciousness aside, such an AI could be truly dangerous if its goals do not coincide with those of human beings. AI researchers have speculated about the creation of AGI since the inception of computing. If Q* lives up to the rumours, it is indeed several steps further along the road to AGI. A day before he was fired, Mr Altman said: "We sort of push the veil of ignorance back and the frontier of discovery forward." Was he hinting at the existence of Q*? The computer science community is rife with such rumours.