Did an AI breakthrough cause the schism at OpenAI? - Weekly Roundup - Issue #442
Plus: Meta disbands Responsible AI team; EU AI Act hits a roadblock; new models from Anthropic and Inflection; Fourier shows an entire squad of humanoid robots; and more!
Welcome to Weekly Roundup Issue #442. The firing of Sam Altman as CEO of OpenAI and the subsequent turmoil at the company was the biggest story in tech this week. However, it was just one of several noteworthy things that happened last week. Meta disbanded its Responsible AI team, while the EU AI Act encountered a significant roadblock over foundation models. In other developments, Anthropic and Inflection unveiled new AI models, Fourier presented an entire squad of humanoid robots, and advancements in cell therapies showed promise in targeting difficult-to-treat cancers, marking another exciting week in AI, robotics and biotech.
Here is a recap of last weekend’s events at OpenAI. Since the release and unexpected success of ChatGPT almost a year ago, a rift had been widening between the “product” side, led by Sam Altman, and the “research” side, centred around OpenAI’s chief scientist, Ilya Sutskever. The tension reached a breaking point last Friday, when Sutskever convinced OpenAI’s board of directors that Altman’s actions were contrary to the company’s mission. The board then removed Altman as CEO of OpenAI. Soon after, Greg Brockman, the president of OpenAI, resigned, and a group of OpenAI employees followed him and Altman out of the company. These events sparked a rebellion within the company - almost all remaining employees were ready to leave OpenAI. Seizing the opportunity, Satya Nadella, the CEO of Microsoft, offered positions at Microsoft to Altman, Brockman, and all OpenAI employees planning to leave.
That's what the situation looked like on Monday, when I published an article going deeper into the timeline of events and the reasons for the schism. In the days after that article went out, the situation changed and new reports emerged about what prompted Sutskever to take such drastic steps.
On November 22nd, OpenAI announced that Sam Altman would be reinstated as CEO. About an hour after the news was made public, Greg Brockman was at OpenAI’s offices, being welcomed by a large group of happy employees. One condition of Altman’s return was the establishment of a new board of directors. The full list of members hasn’t been revealed yet, but known names include former Salesforce co-CEO Bret Taylor as chair, along with Larry Summers, former U.S. Treasury Secretary, and Adam D'Angelo, who was on the board that ousted Altman, serving as directors. Microsoft, OpenAI’s biggest investor and technological partner, is also expected to get at least one seat on the new board. “One thing, I’ll be very, very clear, is we’re never going to get back into a situation where we get surprised like this, ever again,” said Satya Nadella in an interview, hinting at Microsoft’s closer oversight of what is going on inside OpenAI.
While Sam Altman was returning to OpenAI, a new narrative emerged about what had been happening behind the scenes in the days leading up to the schism. As Reuters reports, a group of OpenAI employees wrote a letter to the board of directors warning about a powerful AI discovery that, they said, could threaten humanity. The discovery in question is referred to as Q* (read as “Q-star”). Q* is a new and very promising AI model developed at OpenAI that some believe could lead to the creation of artificial general intelligence (AGI), which OpenAI defines as autonomous systems that surpass humans in most economically valuable tasks. Additionally, Reuters reports that the researchers behind the letter also flagged work by an "AI scientist" team. The group, formed by merging the earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work.
The creation of Q*, alongside Altman’s push to commercialize the technology before its consequences were understood (exemplified by the release of GPTs and the GPT Store at OpenAI DevDay), was reportedly what led Sutskever and his colleagues to approach the board and set in motion the chain of events that brought us here.
Although the situation at OpenAI has calmed down, the schism will have an impact on the entire AI industry. In this clash between business and safety, business has won. Microsoft is going to have more control over OpenAI, and OpenAI itself will go through a reconciliation process. There may not be room for some people in the new OpenAI, and they may leave the company as a result.
Altman will have more freedom within the company, as he is now backed by investors who led the efforts to bring him back as the CEO of OpenAI. As he said during the Asia-Pacific Economic Cooperation summit, just one day before his firing, "four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime”. From now on, Altman may not have to work that hard to “push the veil of ignorance back”.
The failure of OpenAI’s unusual mix of for-profit and nonprofit corporate structures may make investors more reluctant to back companies with similarly unconventional structures. It remains to be seen whether the recent events will change how OpenAI is organised.
The turmoil at OpenAI could send a signal that AI safety is, at best, a distraction and, at worst, a source of problems that could derail an entire company. In business, if you can’t provide a stable service, your customers will find someone who can. Businesses building their products and services on advanced AI models may well put pressure on AI companies to prioritise stability over safety, to avoid chaos when business incentives clash with safety concerns.
The world of AI is in a different place than it was a week ago. The AI community has changed and those who are in favour of safety may not like the new landscape.
If you enjoy this post, please click the ❤️ button or share it.
I warmly welcome all new subscribers to the newsletter this week. I’m happy to have you here and I hope you’ll enjoy my work. A heartfelt thank you goes to the one person who joined as a paid subscriber this week.
The best way to support the Humanity Redefined newsletter is by becoming a paid subscriber. Don't miss out on our special Black Friday offer: Get 20% off your subscription for the first year! This limited-time deal ends on Monday, 27th November. Subscribe now and be a part of our growing community.
If you enjoy and find value in my writing, please hit the like button and share your thoughts in the comments. Additionally, please consider sharing this newsletter with others who might also find it valuable.
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🧠 Artificial Intelligence
Meta disbanded its Responsible AI team
The Responsible AI (RAI) team at Meta is no more. According to a report from The Information, RAI, which was already running on a skeleton crew after last year’s layoffs, has been disbanded. Its members have been reassigned to other AI teams at Meta, mostly to the generative AI product team, while some will join AI infrastructure teams.
Transforming the future of music creation
Lyria is Google DeepMind’s newest AI music generation model and, thanks to a partnership with YouTube, it will soon be available through two experiments. The first, Dream Track, lets creators generate a completely new song for YouTube Shorts from a text prompt, in the style of a specific musician (the list includes artists such as Charlie Puth, Charli XCX, Demi Lovato, and more). The second, Music AI Tools, is a suite of AI tools designed to assist musicians, songwriters, and producers in creating new music. All tracks and sounds generated by Lyria will be watermarked with DeepMind’s watermarking tool, SynthID, to make AI-generated content easy to identify.
EU’s AI Act negotiations hit the brakes over foundation models
A key meeting on the EU's AI Act faltered over disagreements about how to manage foundation models, with major EU countries opposing the proposed tiered approach. The approach is challenged in particular by France, Germany, and Italy, which fear it could disadvantage European AI companies against global competitors. The deadlock, driven by differing views on regulation and concerns about stifling innovation, has put the entire AI Act at risk. With the European Parliament adamant about regulating foundation models and the European Commission failing to defend its initial proposal, the future of the EU's pioneering AI legislation is uncertain, potentially affecting the bloc’s global standing in AI regulation.
UK won’t regulate AI anytime soon, minister says
During a Financial Times conference on Thursday, the country’s first minister for AI and intellectual property, Viscount Jonathan Camrose, confirmed government concerns over regulation curbing growth and said that a UK law on artificial intelligence won’t be coming “in the short term.” While he refrained from criticising other nations’ approaches, he noted that “there is always a risk of premature regulation,” which could do more harm than good by “stifling innovation.”
Inflection-2: The Next Step Up
Inflection, an AI company founded by DeepMind co-founder Mustafa Suleyman, has released its newest model, Inflection-2, claiming it to be “the best model in the world for its compute class and the second most capable LLM in the world today”. The benchmark results included in the press release show Inflection-2 to be a very capable model, bested only by GPT-4.
Introducing Claude 2.1
Anthropic’s Claude 2 model got an update that brings a 200K token context window, meaning that Claude can now take in prompts as long as 150,000 words, or over 500 pages of text. Other updates include a 50% reduction in hallucination rates and new API tools for more seamless integration into customer applications.
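As a rough back-of-the-envelope check of those numbers (using the common heuristics of roughly 0.75 English words per token and about 300 words per printed page, which are assumptions on my part, not figures from Anthropic's announcement):

```python
# Rough conversion from a 200K-token context window to words and pages.
# Assumed heuristics: ~0.75 English words per token, ~300 words per page.
context_tokens = 200_000
words_per_token = 0.75
words_per_page = 300

approx_words = int(context_tokens * words_per_token)  # ~150,000 words
approx_pages = approx_words / words_per_page          # ~500 pages

print(f"{approx_words:,} words, about {approx_pages:.0f} pages")
```

The exact word count of a 200K-token prompt varies with the text, but under these heuristics the announced figures of 150,000 words and over 500 pages line up.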
If you're enjoying the insights and perspectives shared in the Humanity Redefined newsletter, why not spread the word?
🤖 Robotics

Fourier released a video showing not one but an entire squad of its upcoming humanoid robot, GR-1, which the company plans to begin mass-producing this year. I previously featured Fourier as one of ten companies working to make commercial humanoid robots a reality.
Imagineer Morgan Pope Uses Electromagnetism to Spark Emotions
IEEE Spectrum features an article on Morgan Pope, a robotics researcher at Disney who focuses on creating robots that are characters with their own personalities, capable of evoking emotions. His work includes the robotic bunny on rollerblades that came out of a box on stage at SXSW, and a cute droid reminiscent of those in Star Wars. “We have a very different mission compared to conventional roboticists,” he says. “We’re trying to use electromagnetism to create emotions.”
Spider-inspired, shape-changing robot now even smaller
In a new study, engineers at the University of Colorado Boulder debuted mCLARI, a 2-centimetre-long modular robot that can passively change its shape to squeeze through narrow gaps in multiple directions. It weighs less than a gram but can support over three times its body weight as an additional payload. The robot can manoeuvre in cluttered environments by switching from running forward to moving side-to-side, not by turning but by changing its shape, giving it the potential to aid first responders after major disasters.
This robotic digger could construct the buildings of the future
In Europe, where there's a chronic shortage of construction workers, researchers at ETH Zurich’s Robotic Systems Lab are developing autonomous machines to aid in building projects. They've successfully trained an excavator, named HEAP, to autonomously construct stone walls. HEAP uses LiDAR and machine vision to map the site and to identify and position stones with high precision. This technology could lead to faster and more sustainable construction using locally sourced materials like stones and rubble. It is also one example of a growing cohort of startups and technologies aiming to bring robotics and automation to construction sites.
3D-Bioprinted Implants Show Promise in Model of Severe Brain Injury
University of Oxford researchers have developed a groundbreaking 3D droplet printing technique to fabricate engineered brain implants that closely resemble the cerebral cortex, potentially revolutionizing treatments for severe brain injuries. These engineered tissues, when implanted into mouse brain slices, demonstrated impressive integration and functional alignment with host cells, marking a significant advancement over previous techniques. This innovative approach paves the way for more personalized brain injury treatments and could have broad applications in drug testing and neurological research.
Innovative new cell therapies could finally get at tough-to-target cancers
Recent advances in CAR T therapies, traditionally used for blood cancers, are showing promise in treating solid tumours. BioNTech's clinical study of BNT211 reported significant tumour reduction in patients with solid tumours, particularly ovarian and germ-cell cancers. However, the therapy also led to serious side effects like cytokine release syndrome. Despite challenges like maintaining the therapy's duration and balancing efficacy with toxicity, researchers remain optimistic about the potential of CAR T therapies in treating solid tumours.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
Humanity Redefined is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!