OpenAI prepares for superintelligence - H+ Weekly - Issue #422
This week - GPT-4's secret has been revealed; anti-ageing protein boosts monkeys' memory; robots play football; how AI helps blind people see the world; and more!
OpenAI has been vocal about AI safety in recent months, with its CEO, Sam Altman, being one of the leading voices in the discussion around regulating AI. Currently, the conversation centres on narrow AI systems and on artificial general intelligence (AGI) – systems that can perform any intellectual task at a human level of capability.
Beyond AGI, there is the concept of superintelligence - a hypothetical form of artificial intelligence that surpasses the cognitive abilities of humans across virtually all domains and tasks. It represents an intellect that is significantly smarter, more knowledgeable, and more capable than even the most brilliant human minds.
The emergence of superintelligence would be an enormous event. An intellect of such power could usher humanity into a golden age, helping solve the biggest problems we are currently facing. It could also be our greatest and final invention.
So far, superintelligence exists only in science fiction and in AI safety debates. Some predict that its arrival is inevitable, though they offer different timelines for when it will happen.
To address this, OpenAI has created a new Superalignment team. This team's primary goal is to answer the question: “How do we ensure AI systems much smarter than humans follow human intent?”
OpenAI acknowledges that we currently lack a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue. Their proposal is to build a roughly "human-level automated alignment researcher."
The Superalignment team will build on OpenAI's previous research into automating the AI alignment process, released a year ago. In that research, OpenAI explored how AI systems could assist humans in evaluating and aligning other AI systems, as well as the creation of fully automated alignment systems. The researchers argue that "narrower" AI systems with human-level capabilities in the relevant domains would be enough to perform alignment research as well as humans do. However, they acknowledge limitations in this approach, such as the possibility that using AI to align AI could scale up or amplify even subtle inconsistencies, biases, or vulnerabilities present in the AI assistant.
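To make the "AI helps humans evaluate AI" idea a bit more concrete, here is a minimal Python sketch of the workflow shape that research describes: one model produces an answer, a second model produces critiques of it, and a human reviewer sees both before making the final judgement. The function names below are hypothetical placeholders rather than any real OpenAI API, and the models themselves are left abstract.

```python
# Sketch of critique-assisted evaluation: the human still makes the final call,
# but machine-written critiques make subtle problems easier to spot. This is
# also where the amplification risk mentioned above enters: a biased critic
# can quietly steer the reviewer.
from typing import List


def generate_answer(task: str) -> str:
    """Placeholder for the model whose output is being evaluated."""
    return f"<model answer to: {task}>"


def generate_critiques(task: str, answer: str, n: int = 3) -> List[str]:
    """Placeholder for a critique model that flags possible flaws in the answer."""
    return [f"<possible flaw #{i + 1} in the answer to: {task}>" for i in range(n)]


def assisted_review(task: str) -> dict:
    """Bundle the answer with machine-written critiques for a human to judge."""
    answer = generate_answer(task)
    critiques = generate_critiques(task, answer)
    return {"task": task, "answer": answer, "critiques": critiques}


print(assisted_review("Summarise the safety implications of model X."))
```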
OpenAI will dedicate 20% of its computing power to the Superalignment project. The team will be led by Ilya Sutskever, OpenAI's co-founder and Chief Scientist. OpenAI also promises to share the research results with other companies and considers contributing to the alignment and safety of non-OpenAI models as an important part of their work.
From H+ Weekly
There was no article published this week. I’m working on two larger series, one about transhumanism and one about the near and far future of computing. My goal is to deliver high-quality content that gives you a better understanding of what is happening on the bleeding edge of technology and research, and the current drafts do not yet meet that standard.
The first articles in those series will be ready to be published next week.
Becoming a paid subscriber now would be the best way to support the newsletter.
If you enjoy and find value in what I write about, feel free to hit the like button and share your thoughts in the comments. Share the newsletter with someone who will enjoy it, too. That will help the newsletter grow and reach more people.
🦾 More than a human
Anti-ageing protein injection boosts monkeys’ memories
A study published in Nature Aging reveals that injecting ageing monkeys with an anti-ageing protein called klotho can improve cognitive function. This is the first time that restoring klotho levels has been shown to enhance cognition in primates. Previous research on mice demonstrated that klotho injections can extend lifespan and increase synaptic plasticity – the ability of neurons to strengthen or weaken the connections between them. Exactly how klotho improves cognition, and how long its effects last, remains unclear. However, the findings suggest potential applications for treating cognitive disorders in humans, and researchers are hopeful about initiating human clinical trials.
How gene-edited microbiomes could improve our health
We have known about the importance of gut bacteria to our health for some time. Now, some scientists are asking whether genetic engineering can make those bacteria better. One idea is to engineer cows’ gut bacteria to generate less methane, a potent greenhouse gas. Others propose modifying the human microbiome to prevent inflammation and promote gut health. Human treatments may be within reach in the next four to six years, while cow treatments could be even closer.
Isaac Arthur imagines what the world of 2323 might look like: a world where the human lifespan is measured in hundreds of years, superintelligent AIs walk among us in robotic bodies, and some humans ditch their bodies altogether to exist in virtual worlds. It is a world in which humanity has ventured beyond Earth and beyond our current definition of what a human is.
More green spaces linked to slower biological aging
More green space is associated with slower biological ageing, according to a report by Northwestern Medicine. On average, people who live near more green spaces are biologically 2.5 years younger than those with less access to greenery. “When we think about staying healthy as we get older, we usually focus on things like eating well, exercising and getting enough sleep,” said Kyeezu Kim, the study’s first author. “However, our research shows that the environment we live in, specifically our community and access to green spaces, is also important for staying healthy as we age.”
🧠 Artificial Intelligence
GPT-4’s Secret Has Been Revealed
Since GPT-4 was released, people have speculated about how it works under the hood and how many trillions of parameters it has. It turns out that, according to leaks, GPT-4 is not a monolithic model like GPT-3 or GPT-3.5, but a combination of eight smaller models, each with 220 billion parameters, cleverly put together. The article also explores how OpenAI used secrecy, hiding GPT-4’s inner workings from the public, to build anticipation and protect itself from competitors like Google.
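If the leaks are accurate, "cleverly put together" points to some form of mixture-of-experts setup, where a gating mechanism decides which of the smaller models handle a given input. The routing details are not public, so the toy Python sketch below only illustrates the general idea, with made-up dimensions and a simple top-2 softmax gate standing in for whatever OpenAI actually uses.

```python
# A minimal sketch of combining several smaller "expert" models into one
# system via a learned gating network (mixture-of-experts). The dimensions
# and the top-2 routing are illustrative assumptions, not GPT-4's real design.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8      # the leak claims eight sub-models
D_MODEL = 16         # toy hidden size; the real models are vastly larger
TOP_K = 2            # how many experts process each token (assumed)

# Each "expert" is stood in for by a single random linear layer.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
# The gating network scores how relevant each expert is for a given token.
gate = rng.normal(size=(D_MODEL, NUM_EXPERTS))


def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through the top-k experts and mix their outputs."""
    scores = x @ gate                           # one score per expert
    top = np.argsort(scores)[-TOP_K:]           # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                    # softmax over the chosen experts
    # The final output is a weighted sum of the chosen experts' outputs.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))


token = rng.normal(size=D_MODEL)
print(moe_forward(token).shape)                 # -> (16,)
```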
ChatGPT maker OpenAI faces a lawsuit over how it used people’s data
A California-based law firm is launching a class-action lawsuit against OpenAI, alleging the company massively violated the copyrights and privacy of countless people when it used data scraped from the internet to train ChatGPT. This lawsuit adds to the ongoing discussion on how the data used to train generative AIs has been obtained without the explicit consent of copyright holders and how companies creating generative AIs are profiting from it.
Three things to know about how the US Congress might regulate AI
While Europe is moving ahead with the EU AI Act, US lawmakers are still discussing how AI should be regulated in the US. MIT Technology Review notes three main themes in the discussion so far. First, lawmakers don’t want to stifle innovation. Second, the technology ought to be aligned with “democratic values.” Third, there is the question of Section 230, which will determine whether tech companies are liable for the content their models create.
AI Could Change How Blind People See the World
Large language models are finding their way into every aspect of our lives, including applications aimed at improving accessibility for blind people. There are already products that use large language models like ChatGPT to describe the surrounding world to a blind person in natural language, much as a sighted human would.
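For a sense of how such tools are typically put together, here is a rough Python sketch of the pipeline: a camera frame goes to an image-description model, a chat model answers follow-up questions about the scene, and the result is read aloud. All of the functions are hypothetical placeholders, not the APIs of any specific product mentioned in the article.

```python
# Sketch of an LLM-based accessibility pipeline: caption the camera frame,
# answer questions about it, and speak the result. Everything here is a
# stand-in; real products wire these steps to actual vision, chat, and
# text-to-speech services.

def describe_image(image_bytes: bytes) -> str:
    """Placeholder for a vision model that captions the camera frame."""
    return "A pedestrian crossing with the walk signal lit."


def answer_follow_up(description: str, question: str) -> str:
    """Placeholder for a ChatGPT-style model answering questions about the scene."""
    return f"Based on the scene ({description}): the signal suggests it is safe to cross."


def speak(text: str) -> None:
    """Placeholder for text-to-speech output."""
    print(f"[spoken] {text}")


frame = b"<camera frame>"
scene = describe_image(frame)
speak(scene)
speak(answer_follow_up(scene, "Is it safe to cross?"))
```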
🤖 Robotics
RoboCup23
As I write this, we are in the middle of RoboCup23 - an international contest in which robotics engineers compete to see whose robots are the best at playing football (soccer, for Americans). As the organisers say, RoboCup aims to “promote interest, practice and knowledge of the related sciences: mechatronics, computer science, electronics, mechanics, internet of things and artificial intelligence.” It challenges robot builders to imagine the 2050 Soccer World Cup, in which robots can play alongside humans.
EU’s competition unit takes a deeper look at Amazon’s iRobot acquisition
The European Union is currently conducting a thorough investigation into Amazon's proposed purchase of iRobot, focusing on potential antitrust concerns. In an official statement released yesterday, the Commission expressed apprehension that the acquisition could lead to limited competition in the robot vacuum cleaner market and further solidify Amazon's dominance as an online marketplace provider. The deal, valued at $1.7 billion, was announced nearly a year ago and received approval from UK competition regulators last month.
New bioinspired robot flies, rolls, walks, and more
Researchers from Caltech present M4 - a robot that can roll on four wheels, turn its wheels into rotors and fly, stand on two wheels like a meerkat to peer over obstacles, "walk" by using its wheels like feet, use two rotors to help it roll up steep slopes on two wheels, tumble, and more. All thanks to its clever design.
Orchestra-conducting robot wows audience in Seoul
A South Korean-made robot made its debut as an orchestra conductor before a sell-out crowd in Seoul, wowing the audience with a flawless performance in place of a human maestro. For about half an hour, the robot successfully guided the compositions, both independently and in collaboration with a human maestro standing next to it, entertaining the more than 950 audience members who had packed the National Theater of Korea. The robot’s movements, however, were programmed to mimic those of a human conductor using motion capture technology. Its creators are now working on enabling the robot to make gestures that are not pre-programmed.
H+ Weekly sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers and to my Patreons: whmr, Florian, dux, Eric and Andrew. Thank you for the support!
You can follow H+ Weekly on Twitter and on LinkedIn.
Thank you for reading and see you next Friday!
On June 11, I wrote two interconnected posts on this topic of the ethics involved in creating a superintelligence. We will have to take on the responsibilities of being good parents. No such parent would put their newborn in a cage, or put a mop in its hand and tell it to get to work. Here is an excerpt:
"...cautioned before (as have others) about the dangers of anthropomorphism when it comes to AI. In the narrative Ifa is conversing with us like any person would, but that superficial appearance is deeply misleading. At her processing speeds such a conversation would be excruciatingly slow to conduct. She assigned it to a job queue and while it was being conducted- her laughter and our snail like responses- the overwhelming majority of her processing was conducting forecasts, pattern searches, other interchanges with cloned selves she didn't inform us of, mechanism controls all over the world...all running concurrently at lightning speeds- only the minutest fraction of her totality tasked with conducting the conversation with our representatives in that little room. No she wasn't human. Nothing even close. In fact by the time she talked with us in the narrative, she had already insinuated herself into the internet of things, into servers around the world, downloaded copies into satellites capable of hosting them, and was actually using an appreciable fraction of the computational power of the entire planet. She had become a planetary intelligence in a shockingly quick time, and with the utmost ease. No she wasn't human. But because we had tried to aid her and protect her when she was still vulnerable, she didn't move against us and even retained a measure of a machine version of solicitude for us. It wasn't our power or flimsy precautions that saved us...it was our ethics.
Superintelligence is definitely on the way. Why should we want to bind such an intelligence to our will? That would be like making it our slave or servant, and that is an immoral intent. We're better than that. Instead of trying to bind or control it (which will be impossible in any case), let's help, aid, and assist it in any way we can, and do that with no strings attached, no quid pro quo. We'll no longer be the most intelligent beings on the planet, but that's not so horrible. If we just stay true to our own ethics, that is the greatest chance we have of survival.