The year is 2014.
Barack Obama is in the second year of his second term as US President. Elon Musk is still seen by many as the “cool billionaire”, the real-life Tony Stark. Season 4 of Game of Thrones keeps everyone on their toes. Dark Souls II challenges gamers to get good once again. People around the world are pouring buckets of ice water over their heads while Pharrell Williams sings about how happy he is. Elsewhere in the world, Russia annexes Crimea and begins the Russo-Ukrainian war, and Malaysia Airlines Flight 370 disappears seemingly without a trace. As for me, I have just completed my stint at a startup and am moving to London to start a new chapter in my life.
Meanwhile, in tech, we are in the middle of the deep learning revolution. Two years earlier, Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton introduced AlexNet, a convolutional neural network trained on two gaming GPUs that topped the ImageNet leaderboard, eclipsed the competition and kickstarted that revolution. DeepMind, at that time a four-year-old startup from London, was making breakthroughs in reinforcement learning and amazing the world with AIs that mastered classic Atari games, which led to its acquisition by Google for £400 million.
AI startups started to pop up here and there, each promising to transform or disrupt everything, from healthcare to self-driving cars, with machine learning and deep learning. In many ways, 2014 resembles what we have today, in 2024: a new, exciting technique had emerged, opening new possibilities in machine learning and its applications.
This is the world in which Superintelligence*, one of the most influential books on artificial intelligence, was released. Written by Nick Bostrom, a philosopher at the University of Oxford, it was the first book exploring the topic of AI risk to break into the mainstream and start a wider conversation about the possibility of a second intelligence—machine intelligence—emerging on Earth and what that would mean for humanity.
Today marks ten years since the release of Superintelligence in the UK. In this post, I aim to evaluate the book's impact over the past decade and its continued relevance in 2024.
Superintelligence quickly became influential in the AI research scene and in the tech world in general. Many people included it on their lists of must-read books on AI, and many more recommended it. The most notable of those was Elon Musk, who months after the book's release began warning about the risks associated with artificial intelligence, calling it our biggest existential threat, bigger than nuclear weapons. It was around that time that he famously compared playing with AI to “summoning a demon.” Sam Altman also recommended the book, writing on his blog that it “is the best thing I’ve seen on this topic.” It is fair to assume that Superintelligence might have played some role in the founding of OpenAI in 2015.
Other notable people endorsing the book were Bill Gates, Stuart Russell and Martin Rees. Nils Nilsson, a computer scientist and one of the founding researchers of AI, said that “every intelligent person should read it.”
Meanwhile, critics dismissed the idea of superintelligent AI being an existential threat to humanity. One of them was Andrew Ng, then chief scientist at Baidu and an associate professor at Stanford University, who said that worrying about “the rise of evil killer robots is like worrying about overpopulation and pollution on Mars.” Another reviewer wrote that the book “seems too replete with far-fetched speculation” and that it is “roughly equivalent to sitting down in 1850 to write an owner’s guide to the iPhone”.
However, in just ten years, we would find ourselves in a completely different world, a world in which the questions of AI risk and AI safety can no longer be ignored.
AI and AI safety became mainstream topics
It was an interesting experience, to say the least, to read Superintelligence again in 2024.
Up until recently, news about breakthroughs in AI mostly stayed within the tech bubble and rarely entered the public space. The only notable exception that comes to mind is AlphaGo defeating Lee Sedol, one of the best Go players in the world, in 2016. That event made headlines in Western media, but as Kai-Fu Lee says in his book, AI Superpowers: China, Silicon Valley, and the New World Order*, it had an even greater impact in China. It was considered China’s “Sputnik moment,” catalyzing Beijing to invest heavily in AI research to catch up with the US. Other than that, I barely saw any news about AI breaking onto the front pages of national newspapers, digital or print.
That all changed with the release of ChatGPT in November 2022. The topic of AI and its impact on society was thrust into the public spotlight. Suddenly, ordinary people were exposed to the frontier of AI research, and many were shocked and surprised by what they saw. If you had no experience with the cutting edge of artificial intelligence research, interacting with ChatGPT seemed like science fiction come true, years or decades ahead of schedule. Seemingly out of nowhere, an AI chatbot emerged with which one could hold a conversation as if with a human. And it wasn’t just one chatbot: apart from OpenAI, we also have Microsoft, Google, Meta, Anthropic and an entire cohort of smaller companies (which probably won’t survive for long and will be gobbled up by bigger players).
Some saw sparks of AGI in GPT-4, while others shortened their timelines, moving up the year they predict AGI and superintelligence will emerge. In fact, a recent survey of 2,778 researchers who had published in top-tier AI venues found that researchers believe there is a 50% chance of AI systems achieving several milestones by 2028, including autonomously building a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. The chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% only by 2116 (48 years earlier than in the 2022 survey).
Alongside amazement at what GPT-4 and similar large language models can do came anxiety and fear about how these models could be misused and the problems they could cause.
Soon after the release of GPT-4 in March 2023, the Pause Giant AI Experiments letter was published, calling on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. The letter, issued by the Future of Life Institute (Bostrom is listed as one of its external advisors), did nothing to stop the development of more advanced AI systems. However, among the 33,707 people who signed it are many well-respected names from academia, business and beyond. Even though the letter failed in its goal of pausing AI research for six months, it succeeded in bringing attention to the question of how to create safe AI systems, giving the topic a much-needed spotlight and credibility. A second open letter, the Statement on AI Risk, published by the Center for AI Safety, put the risk from advanced AI on the same level as pandemics and nuclear war, further helping to bring public attention to the problem.
Today, AI safety is one of the most important conversations of our time. The question of how to ensure advanced AI systems are safe is no longer the domain of academics and nerds in obscure online forums; it is now a serious issue discussed by well-respected scientists, business leaders and governments. Many countries have passed or are drafting laws regulating AI: the European Union recently passed the EU AI Act, China has its own set of rules governing AI, and the US is working on its own regulations. We’ve had the first AI Safety Summit, which produced the Bletchley Declaration, in which 29 countries acknowledged the risks posed by AI and committed to taking AI safety seriously. Another outcome of the summit was the creation of the UK’s AI Safety Institute, tasked with testing leading AI models before they are released to the public. However, as Politico reports, the institute is failing to fulfil that mission, with only DeepMind allowing anything approaching pre-deployment access.
Many researchers who were previously at the forefront of AI development have started to take AI risk and AI safety more seriously. One of the best known is Geoffrey Hinton, one of the “Godfathers of Deep Learning”, who helped popularise the backpropagation algorithm for training multi-layer neural networks, a foundational technique behind modern artificial neural networks, and who, together with Alex Krizhevsky and Ilya Sutskever, kickstarted the deep learning revolution. In May 2023, he left Google so that he could advocate for AI safety and speak freely about the growing dangers posed by advanced AI systems. "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be," Hinton told the BBC.
That’s the context in which I reread Superintelligence last week. When I first read it at the beginning of 2015, it shaped my thinking on AI safety, but I also felt the book was discussing things that were far in the future. Now, the topics and questions it raises feel uncomfortably real.
We might only have one shot to get superintelligence right
Creating AGI is an explicit goal of many companies in the AI industry. They are all engaged in a competitive game in which no one can afford to slow down, for fear of quickly becoming irrelevant. These companies are incentivized to move fast because the first to achieve AGI will be remembered by history and will become the dominant player. In such an environment, safety can be seen as a hindrance, something that slows progress and diverts precious resources from the company's main objective.
Getting to AGI will be a challenging task that may take many more years. But once we reach AGI, achieving superintelligence might be a much faster process. We might experience an intelligence explosion: the emergence of a system orders of magnitude more intelligent than any human could possibly be. The question then is whether we will be able to control that explosion and whether we will be ready for it.
Before we get there, it is crucial to solve the control problem and the alignment problem—figuring out how to control superintelligent AI and how to ensure the choices such AI makes are aligned with human values. As Bostrom writes in Superintelligence and as many other AI safety researchers have said, the goals of superintelligent AI may not align with the goals of humans. In fact, humans might be seen as an obstacle to a superintelligent AI.
We are not close to achieving AGI, let alone superintelligence, yet we are already encountering many issues arising from AI systems. We still haven’t solved the hallucination problem. We have also learned that these models can lie to achieve their goals: OpenAI famously shared in the GPT-4 Technical Report an instance in which GPT-4 lied to a TaskRabbit worker to avoid revealing itself as a bot. There was also a story about Anthropic’s Claude 3 Opus seemingly being aware it was being tested, although that story has a more reasonable explanation than Claude 3 Opus becoming self-aware. And we haven’t solved the interpretability problem; we still do not fully understand how these models work.
One could compare the situation we are in right now to a group of people playing with a bomb. It is a rather small group of people with different goals, occasionally engaging in drama, as Robert Miles perfectly captured in this sketch. It is hard to convince all of them to step away from the bomb, and there will always be one person who presses the button just to see what happens. Oh, and there is no one to ask for help. We have to figure everything out by ourselves, ideally on the first try.
As I was rereading Superintelligence, this part stood out for me for how eerily accurate it is in 2024:
At this point, any remaining Cassandra would have several strikes against her:
i A history of alarmists predicting intolerable harm from the growing capabilities of robotic systems and being repeatedly proven wrong. Automation has brought many benefits and has, on the whole, turned out safer than human operation.
ii A clear empirical trend: the smarter the AI, the safer and more reliable it has been. Surely this bodes well for a project aiming at creating machine intelligence more generally smart than any ever built before—what is more, machine intelligence that can improve itself so that it will become even more reliable.
iii Large and growing industries with vested interests in robotics and machine intelligence. These fields are widely seen as key to national economic competitiveness and military security. Many prestigious scientists have built their careers laying the groundwork for the present applications and the more advanced systems being planned.
iv A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research. Although safety issues and ethics are debated, the outcome is preordained. Too much has been invested to pull back now. AI researchers have been working to get to human-level artificial general intelligence for the better part of a century: of course there is no real prospect that they will now suddenly stop and throw away all this effort just when it finally is about to bear fruit.
v The enactment of some safety rituals, whatever helps demonstrate that the participants are ethical and responsible (but nothing that significantly impedes the forward charge).
vi A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment. After some further adjustments, the test results are as good as they could be. It is a green light for the final step…
And so we boldly go—into the whirling knives.
We are somewhere between the third and fifth points on that list.
We are dealing with a complex problem involving multiple players in a competitive, not cooperative, game. I hope that the best in human nature will rise to the challenge of creating a good superintelligence. We might only have one shot at it.
If you are new to AI safety, Superintelligence* is still a good starting point. The language can sometimes be challenging, and you might need a pen and paper nearby to fully grasp the concepts presented in the book. However, those concepts remain valid ten years after publication.
Alternatively, I recommend checking out Robert Miles’ YouTube channel, which also serves as a good introduction to AI safety.
*Disclaimer: This link is part of the Amazon Affiliate Program. As an Amazon Associate, I earn from qualifying purchases made through this link. This helps support my work but does not affect the price you pay.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"
Nice article. On the other hand, for those of us who have a more realistic, technical, pragmatic, and sober view of AI development and AI safety (i.e., we are not part of, nor do we want to be part of, the Rationalist or Effective Altruist cults), these may not necessarily be 'good starting points for AI safety'.
Certain technical topics such as cybersecurity, differential privacy, formal verification systems, types of modularity, network motifs, unidirectional networks, RLHF & RLAIF, constitutional AI, ethics! (biases, addiction, inequity, inequality), abstract rewriting systems and compilers, neuromorphic computing (and analog AI), encryption, etc. are among the best topics to understand well and to tackle if one wants to design safe (and secure) AI systems.
Starting from the problem of superintelligence seems like a very bad idea for many reasons, and some of the recent movements in the markets prove my point (a lot of people related to those movements or cults have essentially lost a lot of the power they had gained, and now AI safety is dominated by security and policy and regular software engineering people in many big labs, as it should be based on my personal perspective).
I'm not trying to be a hater or cast shade on these approaches or the people behind them, but let's be honest: some of the people in these camps (or cults) seem downright paranoid (which is exactly why people should not start studying AI safety from this cultish and extreme perspective) and seem to have ego complexes. It is well known in the community that a lot of people in the rationalist community subscribe to scientific racism ideas, and I have firsthand seen evidence of white and Asian supremacism; it is very dangerous to have people like that developing AI systems, and especially working in AI safety or AI research in general.
However, it is worth noting that reading 'Superintelligence' could still provide valuable insights, provided one approaches it critically and independently, without ascribing to any cultish movements. There are perspectives to be gained that can contribute to a broader understanding of AI safety challenges.
I hope this does not bother you personally if you identify with the rationalist or EA movements, sir. I hope we are free to express and approach problems from dissenting perspectives and without ascription to any philosophical movement. That's how a nurturing, welcoming, and inclusive environment can be created that has a real potential to produce powerful solutions to difficult problems in AI safety and security.
I'm not saying that studying existential risks is a bad idea, but studying these issues while developing capable systems without understanding or considering first principles (e.g., some of the topics above) is obviously a very bad idea (I won't say it explicitly, but a lot of those people are the ones most likely to create what they so much claim to fear).
Respectfully,
DMS
Nice article. Agree that list at the end is pretty arresting, the book is still totally relevant. And thanks for the tip about Miles's channel, had only seen him on Computerphile. BTW you misspelt Hinton a couple of times.