Meta’s AI Superteam is Falling Apart - Sync #536
Plus: Microsoft and OpenAI drift apart; Perplexity and Cognition raise hundreds of millions; Texas bans lab-grown meat; the AI Darwin Awards; FDA approves human trials for pig kidney transplants
Hello and welcome to Sync #536!
In this week’s issue, we take a closer look at how Meta’s AI superteam is doing as recent reports suggest it’s not going all that well.
Elsewhere in AI, a series of events are pushing OpenAI and Microsoft further apart. Microsoft has approved OpenAI’s transition to a for-profit company and signed a deal to use Anthropic’s models in its products. OpenAI, meanwhile, has secured a $300 billion deal with Oracle for computing power. In other news, both Perplexity and Cognition announced new funding rounds, bringing in hundreds of millions of dollars; Anthropic is endorsing SB 53; and Nvidia has announced a new GPU for AI workloads.
Over in robotics, Fourier unveiled its new companion humanoid robot, while a new article pours cold water on the humanoid robot hype.
This week’s issue of Sync also includes news that the FDA has approved human trials for pig kidney transplants, AI Darwin Awards, the Man vs Machine Hackathon, and more!
Enjoy!
Meta’s AI Superteam is Falling Apart
Meta learns the hard way that money can buy talent, but it cannot buy loyalty
It was a busy summer at Meta. The company went on an unprecedented and aggressive poaching run, trying to convince top AI talent to join its newly formed Superintelligence team and fulfil Mark Zuckerberg’s dream of personal superintelligence. Meta managed to lure some of the best researchers and engineers from the likes of OpenAI, Google, and xAI, offering them multimillion-dollar contracts and lucrative positions.
However, Meta is now learning the hard way that although you can pay someone to work with you, you cannot buy their loyalty.
Buying a Superintelligence Team
Meta Superintelligence Labs (MSL) was unveiled as the crown jewel of Zuckerberg’s $65 billion AI ambitions. The division is split into four groups: the TBD Lab, focused on long-term research into superintelligence; a products team building new AI features; an infrastructure group responsible for compute; and the company’s long-standing Fundamental AI Research (FAIR) unit.
To lead the charge, Zuckerberg brought in Alexandr Wang, founder of Scale AI, after Meta struck a deal worth $14 billion for a 49% stake in his company. With Wang on board, and with Zuckerberg personally involved in recruitment, Meta went on a poaching spree to get AI talent from its competitors. Some recruits were reportedly offered packages approaching $100 million, a signal that no price was too high to secure the brightest minds and put them to work on what Zuckerberg calls “personal superintelligence.” In the end, Meta succeeded in attracting some of the best AI talent, many of whom had made key contributions to the world’s top AI models.
Meta’s aggressive recruitment strategy has left a mark on the wider AI industry. Some companies have lost CEOs, while others have lost “talented and hardcore” colleagues. The departures have had an emotional impact. Wired obtained an internal memo from OpenAI’s chief research officer, Mark Chen, who wrote: “I feel a visceral feeling right now, as if someone has broken into our home and stolen something.”
A Fragile Culture
Meta’s aggressive tactics have left a mark not only on the AI industry but also inside the company itself.
The rush to buy talent created a divide. New hires, paid far more than veteran engineers and given star-like treatment in the elite TBD Lab, left long-serving staff questioning why loyalty and institutional knowledge seemed to count for less than outside prestige.
This sense of inequality has fuelled resentment. Some employees pushed for raises or tried to move into the superintelligence division. Others left altogether, frustrated that their contributions were being overlooked. In trying to attract the brightest stars, Meta risks alienating the people who had been quietly holding the company’s AI efforts together.
Early Departures
Meta’s challenge is not simply internal tension but attrition at the very top of its new AI teams. Some of the biggest names Zuckerberg fought hardest to recruit have already left.
Several new hires quickly discovered the reality didn’t match the promise. Avi Verma never even started before returning to OpenAI, while Ethan Knight lasted only weeks before doing the same. Rishabh Agarwal left after five months for a startup, and even Shengjia Zhao, one of Meta’s most prized hires, nearly walked away after a week before being persuaded to stay with a title upgrade and a pay bump.
In total, according to Business Insider, at least eight senior staff have quit MSL within weeks of its creation. Not a good sign.
The Wider Talent Wars
Meta is hardly alone in facing these headaches. Across Silicon Valley, the competition for AI expertise has reached fever pitch, and companies are scrambling to keep their AI talent from being poached. OpenAI paid out multimillion-dollar one-off bonuses this summer to keep researchers from leaving. At Safe Superintelligence, the secretive new lab co-founded by former OpenAI chief scientist Ilya Sutskever, employees are reportedly discouraged from even listing the company on LinkedIn to avoid poaching attempts.
In this climate, loyalty is increasingly fleeting. Researchers bounce from one lab to another with ease, driven by a mix of financial incentives, intellectual curiosity, and shifting corporate strategies.
Meta’s Response
Unsurprisingly, Meta has sought to play down the reports of turmoil. Spokespeople describe attrition as “normal” and dismiss media coverage as “navel-gazing” or exaggerated. The company insists that high-profile role changes were part of long-term planning, not knee-jerk counteroffers.
Still, the optics are difficult to ignore. For a division launched with such fanfare, MSL’s early days have been defined less by breakthrough research and more by corporate intrigue, salary disputes, and high-profile exits.
Mark Zuckerberg is gambling on AI. He is betting that sheer scale—in compute, money, and talent—will push Meta into the top tier of AI research. For now, he has succeeded in assembling an all-star roster. But keeping that roster intact is proving to be a different matter.
Money can bring people through the door, but it cannot guarantee loyalty. Unless Meta can stabilise its culture and give its new hires a reason to stay beyond their paycheques, its “superteam” risks becoming just another cautionary tale in Silicon Valley’s AI race.
If you enjoy this post, please click the ❤️ button or share it.
Do you like my work? Consider becoming a paying subscriber to support it
For those who prefer to make a one-off donation, you can 'buy me a coffee' via Ko-fi. Every coffee bought is a generous support towards the work put into this newsletter.
Your support, in any form, is deeply appreciated and goes a long way in keeping this newsletter alive and thriving.
🦾 More than a human
The FDA approves human trials for pig kidney transplants
The FDA has approved trials to test pig-to-human kidney transplants. The study will start with older patients who have end-stage kidney disease and rely on dialysis; they will receive pig kidneys genetically engineered to lower the risk of rejection.
'Near Telepathic' Wearable Lets You Communicate Silently With Devices
Meet AlterEgo, a startup that has created a wearable allowing silent communication with computers by detecting subtle signals in the jaw and throat. The device then translates subvocal speech into commands and delivers responses through bone-conduction audio. Unlike brain implants, it’s non-invasive, promises to be privacy-focused, and could transform how people interact with computers, while also offering new possibilities for those with speech impairments.
More Misrepresentations of Transhumanism
“Misunderstandings and misrepresentations of transhumanism abound,” writes Max More as he responds to Matt Taibbi’s interview with Dr Aaron Kheriaty. More, who helped shape modern transhumanism, explains that transhumanism is not a religion but a philosophy focused on human improvement and freedom of choice. It builds on Enlightenment humanism and values human nature while aiming to enhance it through technology.
🧠 Artificial Intelligence
OpenAI and Oracle reportedly ink historic cloud computing deal
Oracle has reportedly signed a landmark $300 billion deal to supply OpenAI with cloud computing power over five years beginning in 2027, according to the Wall Street Journal. If accurate, it would be among the largest cloud contracts ever. OpenAI has already broadened its cloud partnerships beyond Microsoft Azure, working with Oracle and SoftBank on the $500 billion Stargate Project and reportedly striking a separate cloud agreement with Google earlier this year.
Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic
Microsoft is adding Anthropic’s models, such as Claude Sonnet 4, to Office 365 apps like Word, Excel and PowerPoint, instead of relying only on OpenAI’s models. The change comes as Microsoft looks to reduce dependence on OpenAI. At the same time, OpenAI is working to become less reliant on Microsoft by building chips and launching new platforms.
OpenAI secures Microsoft’s blessing to transition its for-profit arm
OpenAI has reached a nonbinding deal with Microsoft to turn its for-profit arm into a public benefit corporation (PBC), which could help it raise more funding and eventually go public. The nonprofit that controls OpenAI would retain oversight and receive a stake in the new PBC worth more than $100 billion. The agreement, still subject to regulatory approval, marks the end of months of negotiations between the two companies.
OpenAI Executives Rattled by Campaigns to Derail For-Profit Restructuring
OpenAI’s plan to transition into a for-profit company is facing heavy scrutiny in California, where charities, labour groups and regulators fear it may break nonprofit laws and prioritise profit over public benefit. The company has offered concessions to keep its nonprofit parent in control, but if the plan fails, it risks losing billions in funding and investor support. Executives have even considered moving out of California, though OpenAI says it wants to remain in the state and work with regulators.
Perplexity reportedly raised $200M at $20B valuation
Perplexity, an AI search startup competing with Google, has raised $200 million at a $20 billion valuation, according to The Information. The funding comes just two months after it secured $100 million at an $18 billion valuation. Founded three years ago, Perplexity has raised $1.5 billion in total and says its annual revenue is close to $200 million.
Cognition raises over $400M at a $10.2B post-money valuation
Cognition has raised over $400 million at a $10.2 billion valuation to grow its AI-powered software engineering tools. As the company writes in the press release, its AI engineer, Devin, has quickly gained traction, and the recent acquisition of Windsurf has doubled revenue and expanded its product range.
Anthropic is endorsing SB 53
Anthropic has endorsed California’s SB 53, a bill by Senator Scott Wiener that would require major AI developers such as OpenAI, Google, and Anthropic to publish safety frameworks, release transparency reports, and report critical incidents to the state. Aimed at preventing “catastrophic risks” like mass casualties or billion-dollar damages, the bill also provides whistleblower protections and penalties for non-compliance. While some tech groups argue regulation should remain federal and warn of stifling innovation, experts say SB 53 is a more measured approach than past proposals, like SB 1047, and now has stronger momentum with Anthropic’s backing.
Anthropic Judge Blasts $1.5 Billion AI Copyright Settlement
Last week, Anthropic announced a $1.5 billion copyright settlement with authors, but a federal judge has delayed approval, calling the deal unclear and unfair. The judge criticised the lack of detail on which works are covered, how authors will be notified, and how claims will be handled, warning that it could be forced on authors. He said the agreement must include proper notice, clear opt-in rules, and protections for both authors and Anthropic. The deal, one of the largest of its kind, could set a standard for future AI copyright disputes, but it must be revised before moving forward.
Introducing Perplexity for Government
Perplexity has announced Perplexity for Government, an initiative to provide U.S. federal agencies with secure, cutting-edge AI tools tailored to public sector needs. The move mirrors similar initiatives from other companies, such as OpenAI and Anthropic, and gives federal users automatic access to Perplexity’s most advanced models, with zero-data-usage protections, free of charge. It also introduces Perplexity Enterprise Pro for Government, a custom enterprise-grade platform offered at a nominal cost of $0.25 per agency for the first 15 months.
NVIDIA Unveils Rubin CPX: A New Class of GPU Designed for Massive-Context Inference
Nvidia has unveiled the Rubin CPX GPU, a processor designed for massive-context AI, capable of handling million-token coding and generative video workloads. Each Rubin CPX GPU delivers up to 30 petaflops of compute, while the Vera Rubin NVL144 CPX platform it integrates into provides 8 exaflops of performance and 100TB of memory, which Nvidia says is 7.5 times the performance of its earlier systems. Rubin CPX is expected to launch at the end of 2026.
OpenAI Backs AI-Made Animated Feature Film
OpenAI is helping to create Critterz, an animated film produced primarily with AI tools, to demonstrate that movies can be made faster and more cost-effectively thanks to AI. Produced by studios in London and Los Angeles, the film will still use human artists and voice actors alongside AI. With a budget under $30 million and a planned Cannes 2026 debut, Critterz is a test of whether AI can succeed on the big screen.
Inside the Man vs. Machine Hackathon
Man vs. Machine was a hackathon where teams of coders competed with and without AI tools to see whether AI really improves coding. Organised by the nonprofit METR, it drew 37 teams and offered $12,500 to the winners, judged on creativity, usefulness, technical skill, and execution. Finalists were evenly split between human-only and AI-supported groups. Ultimately, an AI-powered code-review heat map won first place, while a human-only team’s writing tool took second, suggesting that humans can still impress but AI-assisted teams had the edge in this format.
AI Darwin Awards
Someone has launched a Darwin Awards-style competition for AI. As they put it, the AI Darwin Awards “proudly continue this noble tradition by honouring the visionaries who looked at artificial intelligence and thought, ‘You know what this needs? Less safety testing and more venture capital!’” The nominees for the 2025 awards can be viewed here, and you can also submit your own nomination.
AI doomerism isn’t new. Meet the original alarmist: Norbert Wiener
AI doomerism is nothing new. Norbert Wiener, the founder of cybernetics, warned as early as the 1940s and ’50s about the dangers of intelligent machines, predicting mass unemployment, the misuse of automation, and even autonomous weapons. His warnings, which echo those of earlier thinkers like Samuel Butler, mirror today’s debates, where dystopian predictions clash with industry scepticism while the media magnifies the tension.
LegoGPT can design stable structures using standard LEGOs from text prompts
Researchers at Carnegie Mellon University have developed an AI model that generates stable LEGO structures from text prompts. Unlike other 3D generative models, the system—dubbed LegoGPT—predicts the next brick to place and checks whether the design will remain stable. Tests with robots and hand-built models showed that the AI can create a wide variety of reliable structures.
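For readers curious what that next-brick-plus-stability-check loop might look like, here is a minimal, hypothetical Python sketch of a generate-then-verify loop in the spirit of what the researchers describe. The brick proposer and the stability test below are simplified stand-ins, not LegoGPT’s actual model or physics checks; the point is only the accept-or-reject control flow.

```python
# Toy sketch of a generate-then-verify loop like the one LegoGPT is described
# as using: a model proposes the next brick, and a stability check rejects
# placements that lack support. The proposer and the stability test here are
# simplified stand-ins, not the actual LegoGPT model or physics reasoning.
from dataclasses import dataclass

@dataclass(frozen=True)
class Brick:
    x: int      # stud position of the brick's left edge
    z: int      # layer height (0 = ground)
    width: int  # footprint in studs

def overlaps(a: Brick, b: Brick) -> bool:
    """True if two bricks share at least one stud column."""
    return a.x < b.x + b.width and b.x < a.x + a.width

def is_stable(structure: list[Brick], candidate: Brick) -> bool:
    """Accept a brick only if it rests on the ground or on a brick one layer below."""
    if candidate.z == 0:
        return True
    return any(b.z == candidate.z - 1 and overlaps(b, candidate) for b in structure)

def propose_next_brick(step: int) -> Brick:
    """Stand-in for the model's next-brick prediction (here: a simple stair pattern)."""
    return Brick(x=step, z=step // 2, width=2)

def generate_structure(num_bricks: int = 10) -> list[Brick]:
    structure: list[Brick] = []
    for step in range(num_bricks):
        candidate = propose_next_brick(step)
        if is_stable(structure, candidate):
            structure.append(candidate)  # keep only supported placements
        # otherwise the brick is rejected and the model would propose an alternative
    return structure

if __name__ == "__main__":
    for brick in generate_structure():
        print(brick)
```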
🤖 Robotics
Reality Is Ruining the Humanoid Robot Hype
This article pours cold water on the hype around humanoid robots. It notes that while companies and analysts predict massive growth and billion-dollar markets, only a handful of robots are currently in use. Building them is possible, but challenges such as limited demand, short battery life, reliability, and safety standards stand in the way. Some experts question whether walking robots are really the best option, suggesting that wheeled robots may be more practical for now. The article concludes that the future of humanoids remains largely potential rather than reality.
Zoox is live in Las Vegas
Zoox, an Amazon-owned robotaxi company, has officially launched its fully autonomous, purpose-built ride-hailing service on and around the Las Vegas Strip. Rides are currently free during the introductory phase, with paid options planned once regulatory approval is secured. The launch comes after more than a decade of development, and Zoox aims to expand to San Francisco and other cities in the future.
▶️ Fourier GR-3: A Caring and Capable Companion (5:30)
In this video, Fourier debuts GR-3, its latest humanoid robot designed to be a “caring and capable” companion. The video highlights the robot’s usefulness as an assistant that helps with chores and reminds users about upcoming events, as well as its multilingual capabilities, ability to recognise people, remote teleoperation, and softer appearance. Fourier did not disclose how much the robot will cost or when it will be available for purchase.
▶️ Climbing Robot (with Claws!) (4:13)
Meet LORIS, a four-legged climbing robot developed at Carnegie Mellon University. Instead of using suction-based or traditional microspine grippers, LORIS employs a unique system of splayed microspine grippers mounted on passive wrist joints, allowing it to cling securely to rough, irregular surfaces such as rocks or walls. One day, descendants of LORIS may be used to explore caves on the Moon or Mars.
▶️ Hand-Launched Foldable Micro Air Vehicle (0:52)
Here is an interesting concept—a microdrone that unfolds after being thrown into the air. It then autonomously stabilises itself in a hovering state and is ready to go.
🧬 Biotechnology
Texas banned lab-grown meat. What’s next for the industry?
Texas has introduced a two-year ban on lab-grown meat, leading Wildtype Foods and Upside Foods to sue state officials. These companies, part of a young industry making meat from animal cells without slaughter, see the ban as an attempt to block growth before products can scale. Supporters argue the pause protects traditional farming and ensures proper labelling, but critics say it limits consumer choice, innovation, and potential climate benefits.
AstraZeneca pauses £200m investment in Cambridge research site
AstraZeneca has halted a £200 million expansion of its Cambridge site, meaning none of its promised £650 million UK investments will go ahead, while it shifts focus to a $50 billion programme in the US. This follows other setbacks for the UK’s pharmaceutical industry, with Merck cancelling a £1 billion London lab, Eli Lilly pausing a £279 million project, and Sanofi warning it won’t invest further without government action. Industry leaders say the UK is becoming less attractive for drug development, despite government claims that life sciences are central to the country’s economic future.
Researchers Create 3D-Printed Artificial Skin That Allows Blood Circulation
Swedish scientists have developed two new 3D bioprinting methods that could help regenerate fully functioning skin for people with severe burns and injuries. One method creates thick, cell-rich skin using a special bio-ink, while the other uses dissolvable hydrogel threads to form blood vessels that keep the tissue alive. In tests on mice, the printed skin grew new dermal layers and blood vessels, showing promise for long-term use, but more work is needed to address risks like infection.
▶️ How AI Could Generate New Life-Forms | Eric Nguyen (11:42)
In this video, Eric Nguyen imagines a future where AI designs life by generating DNA. He presents Evo, an AI that built a working CRISPR system and began drafting genomes, opening possibilities for personalised medicine, cures, and even new life forms. While warning of biosecurity risks, he urges balancing safety with progress to truly understand life by creating it.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"