Genesis: Artificial Intelligence, Hope, and the Human Spirit
Authors: Henry A. Kissinger, Craig Mundie, Eric Schmidt
Publisher: John Murray
Pages: 320
Price: Rs 699
The rush of AI-related books, essays and news headlines flooding our world these days left me wondering what yet another book on AI could offer me. But 10 minutes into this book, Genesis, all such doubts vanished.
To start with, most books and essays about AI share a grim theme: AI will put many, many people out of jobs, with authors differing only on whether it will be all jobs or just those in specific industries or at specific levels. Other AI books are written by millennials for other millennials, trying to walk them through the correct coding procedures for AI and the like.
This book is lead-written by the late Henry Kissinger, the US National Security Adviser to President Nixon during the conflict-ridden Vietnam War era. His co-authors are Eric Schmidt, the chief executive officer of Google for a decade during its formative years, and Craig Mundie, who for many years headed research and strategy at Microsoft. So one can safely conclude that anything this experienced trio writes will be wise, and not the usual tech hype about AI.
Kissinger was 95 when he began contributing to this book in 2018, which you could say was AI's early days (ChatGPT, the current hero of AI, was not yet born then). But as his co-authors point out, Kissinger had tremendous insight into how technological change can disrupt great-power politics, insight shaped by World War II, when he personally observed the mass death and destruction inflicted on his fellow Jews by what Churchill called the "perverted science" of Hitler's Third Reich. That is why, say the authors, Kissinger could correctly decode the threats of, for example, nuclear weapons. Kissinger died in November 2023 at the age of 100, and he apparently worked on this book to the very end.
The authors are clear that "technological advances can have both benign and malign consequences, depending on how we collectively decide to exploit them", and they warn us that "it would be a grave error to assume that we shall use this new technology more for productive than for potentially destructive purposes". In particular, they say, "The companies that own and develop AI may accrue totalizing social, economic, military and political power."
In this book, the three authors examine the impact of AI on eight different areas of human activity and propose a viable strategy to balance AI's benefits and risks. The global scientific community, they say, must immediately "find technical measures for instilling intrinsic safeguards in every AI system".
The authors wisely point out that in the 20th century, human societies, learning from the shock of two world wars, were compelled to develop "an international architecture to prevent their recurrence". This time, they argue, autonomy at the individual, social and national level must be reasserted in advance to moderate commercial and technological forces.
But our constant innovation exposes us to ever more risks. The authors cite an example from biological engineering, where attempts are being made to create physical interconnects between humans and machines via chips implanted in the human brain, and advances in genetics that could split the human race into multiple lines, some far more powerful than others.
The authors believe that government regulators should step in and enforce certain requirements: for example, that the data used to train AI models is democratic and inclusive in content, and that training methods are transparent and open to public scrutiny. That's a lot for government policymakers to do!
What makes such scrutiny tough, the authors point out, is that "good" and "evil" are not self-evident concepts. They also note that today's interconnected world means that "a dangerous AI developed anywhere would pose a threat everywhere". Achieving consensus on what the key human values are, and how to agree upon them, is, according to the authors, "the philosophical, diplomatic and legal task of the century".
The authors then lead us through a deep discussion of whether the machines we use for AI can be taught to build core human values, such as "dignity", into their computations.
The book ends with the authors stating that their aim is not "to instil a sense of apprehension about the rise of AI" but to convey their belief that the arrival of AI represents a new beginning for humankind: the cycle of creation, be it technological, biological, sociological or political, is entering a new phase, and we must meet its genesis with optimism. Therein lies the value of this thought-provoking book.