If you are one of those people who jump awake in the middle of the night and tear your hair out worrying about a future in which red-eyed robot humanoids march through the world enslaving humans, this book is for you.
It is structured as an account of the work of an American group in Silicon Valley, California, who call themselves "Rationalists", led by "the strange, irascible and brilliant" Eliezer Yudkowsky, whose mission is to save mankind from the wrong kind of Artificial Intelligence.
This book also opened my eyes to the existence of the many well-funded institutions throughout the world with names such as the Future of Life Institute (at the Massachusetts Institute of Technology), the Future of Humanity Institute (at Oxford University), the Center for Applied Rationality and the Machine Intelligence Research Institute (both in Berkeley, California). That is a lot of talent and money being thrown at securing mankind's future!
The biggest fear that this group has is best phrased by Mr Yudkowsky himself: "The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else." That also, incidentally, is where the author found the title of this book.
The big-picture view of what the Rationalists stand for is their belief that the real danger from machines with Artificial Intelligence is not that they will take the shape of destructive humanoid robots, but that various "cognitive biases" may be unknowingly programmed into them by the scientists who create them. In this sense, this book serves as an excellent recounting of these "biases".
A cognitive bias, the author reminds us, is a systematic error in how we think. Ask, for instance, a person living in Britain today what is more likely to kill him: a terrorist attack or having a bath? His cognitive bias may pick the terrorist attack, but statistics show that in the past 10 years 50 people in Britain have died in terrorist attacks (most of them in the 2017 Manchester concert bombing), whereas the average annual death toll from drowning in a bath in Britain is 29! Researchers cite this as an example of a systematic bias called "the availability heuristic": we think of terrorism when asked that question simply because information about terrorist attacks is more "available" to the mind. Most terrorist acts in the world make headlines, whereas deaths by drowning in bathtubs rarely do.
The author walks us through many such "cognitive biases" that we humans have, and which people like the Rationalists believe will be unknowingly programmed into the Artificial Intelligence software that runs the robots.
Following this line of thinking, the Rationalists believe that today's research and innovation in Artificial Intelligence pose a far greater threat to mankind than, hold your breath, global warming!
This book is also an eye-opening tour of the contemporary methods of opinion-formation used by the Rationalists. In the early 2000s, they made postings on Internet Relay Chat, a precursor to today's blogs and social media groups. Slate Star Codex ("a blog about science, medicine, philosophy, politics, and futurism") is one blog they use; another they frequent is LessWrong. The author points out the common links between the Rationalists and the alt-right movement in the United States. He also draws our attention to another contemporary phenomenon called the "one per cent rule": a hyperactive 1 per cent in any internet community produces nearly all the content (Wikipedia edits, YouTube videos, message board posts). Astonishingly, all these blogs share many of the same posters and readers.
This book has a well-researched history of the attempts to create Artificial Intelligence. The effort began soon after World War II, when Alan Turing, who achieved fame and repute by leading the British team at Bletchley Park that cracked coded German military communications, laid its theoretical foundations; the field took formal shape at a 1956 workshop at Dartmouth College, funded by the Rockefeller Foundation. Early attempts to prove the legitimacy of such efforts often centred on machines matching the skills of champions at board games. Thus, Deep Blue's defeat of the reigning world chess champion Garry Kasparov was greeted with much acclaim in the media. Such efforts to prove that the machines are "human-level" continue to this day: AlphaGo, created by the Google-owned AI company DeepMind, has enthralled the media by engaging in a series of contests with top-ranked players of the ancient board game Go, which originated in China and is hugely popular in Korea and Japan. Go is many orders of magnitude more complex than chess. Yet DeepMind's program beat Lee Sedol, the South Korean widely regarded as one of the greatest players in the world.
The most revealing part of the book, to me, was the lengthy end-section, which examines whether the Rationalist movement is a cult. The author reminds us that a "cult" normally involves "a charismatic figurehead and other high-status inner-circle members… unorthodox sexual practices, a message of impending apocalypse, a promise of eternal life, and a way to donate money to avoid the apocalypse and achieve paradise". His account of the Rationalists and their leader Eliezer Yudkowsky, with "his wife and girlfriends", makes you wonder what the anti-Artificial Intelligence movement is truly all about.
The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World
By Tom Chivers
W&N
Pages: 304; Price: Rs 599