The responses to pause AI research - H+ Weekly - Issue #411
This week - the technologies to enhance our brains; is AI going to destroy us or not; new methods of connecting biological tissues with electronics; text-to-3d generator from Nvidia; and more!
A month ago, the Future of Life Institute released an open letter calling for a pause on giant AI experiments. The letter calls “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.
The letter does not call for a halt to all AI research; it targets a “very small pool of actors” capable of training a system more powerful than GPT-4.
As I am writing this, the letter has been signed by over 27,000 people. The list includes such names as Elon Musk, Steve Wozniak, Yuval Noah Harari, Max Tegmark, Andrew Yang and many more computer science and AI researchers, experts, academics and entrepreneurs.
Who you won’t find on the list are the leaders of big tech companies - Google, Amazon, Microsoft, etc. - or of the leading AI labs - DeepMind or OpenAI. You will find employees of these companies, but no one in a leadership position.
The open letter argues that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”. During the proposed pause, “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts”. And if the industry cannot enact this pause quickly, the letter says, governments should step in.
Responses quickly started showing up all over the internet.
Some people questioned the feasibility of the pause and how to ensure everyone follows the agreement. Since the release of ChatGPT and GPT-4, Google, Amazon, Meta, Baidu and Alibaba have announced their own chatbots and research into large language models. And open-source projects such as BLOOM or llama.cpp exist, so anyone with enough computing power can join the party. It would be extremely hard, if not impossible, to have everyone stop the research - even those who signed the letter. There are reports that Elon Musk has founded an AI lab to compete with OpenAI and that he bought thousands of GPUs for AI projects at Twitter, effectively making him and his companies join the generative AI race despite signing the letter.
Another argument was that what the field needs is more research, not a pause, to better understand what we are dealing with. Yann LeCun, Chief AI Scientist at Meta, made this argument in a conversation with Andrew Ng. He also argued that it is products, not research, that need regulation.
In response to the letter, Sam Altman, CEO of OpenAI, said that it is “missing most technical nuance about where we need the pause” and reiterated the efforts the company took to make GPT-4 safe. He also said OpenAI is not currently training GPT-5 and “won’t for some time”.
In an FAQ, the Future of Life Institute refers to the 1975 Asilomar Conference on Recombinant DNA, which resulted in the current legal framework around genetic engineering experiments. I think this is the true intention of the letter - not to pause research on advanced AI right now, but to start a conversation about the role of AI in society and how to maximise its positives and minimise its negatives.
From H+ Weekly
🦾 More than a human
▶️ Dr. Matthew MacDougall: Neuralink & Technologies to Enhance Human Brains (2:01:40)
Andrew Huberman interviews Dr Matthew MacDougall, the head neurosurgeon at Neuralink. The conversation touches on topics such as human brain augmentation, how a human brain could adapt to neural prosthetics, neurofeedback, neuroplasticity, what Neuralink is actually working on, and how video games can be used to train people to control brain implants. Other topics include the RFID implant Dr MacDougall has in his hand, the safety of Bluetooth headphones, why the human skull is badly designed, and a solid primer on how brains work.
🧠 Artificial Intelligence
▶️ Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization (3:17:50)
In this interview, Lex Fridman speaks with Eliezer Yudkowsky on the topic of superintelligence, the state and nuances of AI alignment research and why we are running out of time to make artificial general intelligence safe (if we even have time left). This is also an in-depth conversation about consciousness, emotions and understanding humans and intelligence.
▶️ Why Eliezer Yudkowsky is Wrong with Robin Hanson (1:45:12)
And in this conversation, Robin Hanson addresses Eliezer Yudkowsky’s arguments on why we are all screwed and argues the future is not as bleak as Yudkowsky describes it. According to Hanson, the future of AI is not one monolithic superintelligent system but a collection of narrow (but still very intelligent) AIs designed to fit into human society. In his vision of the future, the rise of AI is a more gradual, peaceful change - in contrast to Yudkowsky’s sudden and explosive rise of AI.
Robot Lawyers Are About to Flood the Courts
Many AI startups are targeting the legal industry for disruption. However, entering this industry is not easy (one startup learned that the hard way when lawyers threatened to sue it). As this article argues, there is a huge opportunity in the US to offer AI-powered legal chatbots to help with civil cases and to improve access to affordable legal services. The main obstacles are the law itself and the resistance of the legal industry.
Magic3D: High-Resolution Text-to-3D Content Creation
We are all familiar with text-to-image generators, like Midjourney or DALL-E. Researchers from Nvidia took this idea to the next level and created Magic3D - an AI that takes a text input and outputs a high-resolution 3D model of what it was asked to generate.
Schumacher family planning legal action over AI ‘interview’ with F1 great
Michael Schumacher's family plans to take legal action against the German magazine Die Aktuelle, which used a chatbot to produce an “interview” with the seven-time F1 world champion. Schumacher has not been seen in public since he suffered a serious brain injury in a skiing accident in the French Alps in December 2013.
Google DeepMind: Bringing together two world-class AI teams
Google announced that it will combine its two AI labs - Google Brain and DeepMind - into one. The new lab will be named Google DeepMind and will be led by Demis Hassabis.
▶️ Ingenuity Mars Helicopter Celebrates 50 Flights (0:59)
NASA celebrates the 50th flight of the Ingenuity Mars Helicopter - the first aircraft to fly on another planet. Ingenuity arrived on Mars together with the Perseverance rover on February 18th, 2021, and took its first flight two months later, on April 19th.
Robotic hand can identify objects with just one grasp
Inspired by the human finger, MIT researchers have developed a robotic hand that uses high-resolution touch sensing to accurately identify an object after grasping it just one time, with about 85% accuracy. The sensors, which use a camera and LEDs to gather visual information about an object’s shape, provide continuous sensing along the finger’s entire length. Each finger captures rich data on many parts of an object simultaneously.
Genetically Modified Houseplants Are Coming to Clean Your Air
Neoplants promises to replace your air purifier with genetically modified plants. Experiments have shown that it is possible to use plants as air purifiers, but the problem is that plants are not very efficient at the task. Even with genetically modified Neoplants (which the company says are “30 times better than top NASA plants”), you would still need a lot of them to make a difference.
Biohybrid Implant Patches Broken Nerves With Stem Cells
Interfacing electronics with living tissue is not an easy task. But thanks to this biohybrid implant made by researchers from the University of Cambridge, it might become easier. By combining flexible electronics and induced pluripotent stem cells into a single device, the researchers developed a high-resolution neural interface that can selectively bind to different neuron types, which may allow for better separation of sensory and motor signals in future prostheses.
Scientists Merge Biology and Technology by 3D Printing Electronics Inside Living Worms
Using a laser-based 3D printing method, researchers showed it is possible to grow flexible, conductive wires inside the body of a living worm, opening new possibilities in connecting biological tissues with electronics.
H+ Weekly is a free, weekly newsletter with the latest news and articles about AI, robotics, biotech and technologies that blur the line between humans and machines, delivered to your inbox every Friday.
Subscribe to H+ Weekly to support the newsletter under a Guardian/Wikipedia-style tipping model - everyone gets the same content, but those who can pay for a subscription support access for all.
This issue is also supported by my patrons on Patreon: whmr, Florian, dux, Eric and Andrew. Thank you for your support!
Thank you for reading and see you next Friday!