Unpacking Nvidia's $20 billion Groq deal - Sync #551
Plus: US bans Chinese drones; frozen Waymos; Meta is developing new AI models; Sam Altman's Neuralink competitor; the most complex model we actually understand; and more!
Hello and welcome to Sync #551!
I thought this would be a quiet week—then the news broke that Nvidia is absorbing Groq, which is our main focus in this issue. Elsewhere in AI, Meta’s AI chief, Alexandr Wang, confirmed the company is working on two new AI models. Meanwhile, Yann LeCun is targeting a €3bn valuation for his AI start-up, and vibe-coding start-up Lovable has reached a $6.6bn valuation.
Over in robotics, a power outage in San Francisco shut down Waymos, the US banned Chinese drones, researchers unveiled the world’s smallest programmable robots, and we look at how Disney designed the robots in Avatar to look realistic.
Additionally, this week’s issue of Sync features Sam Altman’s response to Neuralink, a guide to reading AI benchmarks, the most complex model we actually understand, a lab-grown human womb lining, and more.
Enjoy!
Why did Nvidia pay $20 billion for a $7 billion company?
Nvidia just spent $20 billion on Groq, a nine-year-old chip startup worth $7 billion three months ago and largely unknown outside AI circles. The company makes less than $500 million in annual revenue. By any measure, this deal looks absurd.
But Nvidia isn’t being reckless. The deal is a response to two significant threats: the shift from training AI to running it at scale, and the defection of its biggest customers to custom chips. Nvidia’s general-purpose GPUs weren’t built for the inference workloads now dominating AI. Google, Amazon, and Microsoft know it, and they're building alternatives. Nvidia saw the threat and moved fast to “acquire” the company with the expertise it needs.
What is Groq?
Groq (the chip company, not to be confused with xAI's chatbot, Grok) was founded in 2016 by a group of former Google engineers, led by Jonathan Ross, one of the engineers behind Google's Tensor Processing Unit (TPU), and Douglas Wightman, an entrepreneur and former engineer at Google X. Their bet: the market needs chips built solely for inference, not general-purpose processors that do everything.
Nvidia’s GPUs can handle everything from AI to graphics and scientific simulations. They’re powerful and versatile, but that versatility comes with trade-offs. Running AI models on GPUs means dealing with delays. Data has to move between different memory systems, operations happen in unpredictable order, and the hardware needs constant coordination. Those are the problems that Groq’s Language Processing Unit, or LPU, is designed to address. LPUs avoid these bottlenecks entirely—they rely on fast on-chip memory and execute everything in a predetermined sequence. The result is far lower latency for inference workloads.
Groq claims LPUs can run models many times faster than GPUs while using a fraction of the power. There’s a catch, though. Nvidia’s top GPUs offer up to 192GB of high-bandwidth memory. Groq’s LPUs have just 230MB of on-chip SRAM—nearly 1,000x less—meaning only smaller models can fit on them. You can see this trade-off on Groq’s pricing page—impressive speed at competitive prices, but only for models on the smaller end of the spectrum.
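To get a feel for what that memory gap means in practice, here is a rough back-of-envelope sketch in Python. The 230MB and 192GB figures come from above; the assumption of one byte per weight (8-bit quantisation) and the decision to ignore activations and KV cache are my simplifications, not Groq’s or Nvidia’s numbers.

```python
import math

# Back-of-envelope: how many chips are needed just to hold a model's weights,
# assuming 1 byte per parameter (8-bit quantisation) and ignoring activations,
# KV cache and runtime overhead. A deliberately rough sketch.

GROQ_LPU_SRAM_BYTES = 230e6   # ~230 MB of on-chip SRAM per LPU (figure from above)
NVIDIA_GPU_HBM_BYTES = 192e9  # ~192 GB of HBM on a high-end Nvidia GPU (figure from above)

def chips_needed(n_params: float, bytes_per_chip: float) -> int:
    """Minimum number of chips whose combined memory can hold the weights."""
    return math.ceil(n_params / bytes_per_chip)

for n_params in (7e9, 70e9):  # a 7B and a 70B parameter model
    print(f"{n_params / 1e9:.0f}B params: "
          f"{chips_needed(n_params, GROQ_LPU_SRAM_BYTES)} LPUs vs "
          f"{chips_needed(n_params, NVIDIA_GPU_HBM_BYTES)} GPU(s)")
```

Under these assumptions, a 70B-parameter model fits comfortably in a single GPU’s HBM but needs hundreds of LPUs wired together just to hold the weights, which is why Groq’s public offering skews towards smaller models.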
These trade-offs explain why Groq remained a niche player—until Nvidia came calling. To understand why, we need to look at the threats Nvidia faces.
The threats to Nvidia's dominance
Nvidia has been at the centre of modern AI from the start. AlexNet, a neural network that launched the deep learning revolution of the 2010s, ran on two Nvidia GPUs. Since then, Nvidia’s chips have dominated the industry.
But the AI industry is changing, and two forces now threaten that dominance: the shift from training to inference, and the push from tech giants towards vertical integration of their AI stacks.
First: the shift from training to inference
The first chapter of the generative AI boom was dominated by training—the expensive, compute-hungry process of building ever-larger models. But the next wave is about inference—running those models billions of times per day, fast enough to feel instant and cheap enough to be profitable.
Every ChatGPT query, every AI agent completing a task—that's inference. As AI moves toward agents and reasoning models that plan and execute complex tasks, inference demands will grow exponentially. Nvidia's GPUs, built for versatility, aren't the most efficient choice for this workload.
Second: vertical integration by hyperscalers
Nvidia's position as the default AI chip provider faces threats from its biggest customers—the hyperscalers and AI-native platforms. They have strong incentives to diversify, especially to gain full control of their AI stacks. Controlling both hardware and software enables optimisation for greater performance and efficiency while eliminating dependence on a single supplier.
They are already acting on those incentives. Google invests heavily in its custom TPU chips. Gemini 3 Pro was trained entirely using TPUs, and over time, more inference work will shift to them. Amazon is deploying Trainium AI chips in its datacenters, and OpenAI is working with Broadcom to introduce its own silicon in 2026. Microsoft and Meta are also exploring custom silicon.
If Nvidia does nothing, the industry that propelled it to a nearly $5 trillion valuation will eventually replace it with custom silicon.
Nvidia’s $20 billion bet
Nvidia understands where the AI industry is heading, and the Groq deal is its answer to both threats.
Groq described the deal as a non-exclusive licensing agreement under which founder Jonathan Ross, president Sunny Madra, and other key team members will join Nvidia to help scale the inference technology. Groq will remain independent, with finance chief Simon Edwards stepping into the CEO role.
Nvidia’s deal with Groq follows a quasi-acquisition pattern that is becoming more common in Silicon Valley. Instead of buying a company outright and facing regulatory scrutiny, the larger company licenses IP and hires key employees while the target receives billions and continues as an independent entity (albeit as a husk of its former self). Meta has done that with Scale AI, Google with Character.AI, Amazon with Adept and with robotics startup Covariant, and Microsoft with Inflection AI.
Neither Nvidia nor Groq disclosed financial details, but CNBC reports that Nvidia spent about $20 billion. If true, this would be Nvidia's largest acquisition ever, surpassing the $7 billion it paid for Mellanox in 2019.
The reported $20 billion figure is nearly triple Groq’s valuation from three months ago. For a company generating less than $500 million in revenue, the price seems absurd. But revenue isn’t the point. Nvidia didn’t even acquire Groq’s cloud business. The point is securing scarce talent.
Big tech companies know that a handful of exceptionally talented engineers is a massive asset. Those people, the best in their fields, can change the trajectory of the whole AI industry. Technology licensing is part of Nvidia's deal with Groq, but the real prize is the team—engineers who know how to build inference chips, some of whom designed Google's TPU.
Nvidia hopes this deal becomes its next Mellanox. That acquisition paid off massively—the networking expertise Nvidia gained helped build the fast interconnects linking thousands of GPUs in datacenters, now critical infrastructure for AI. The bet is that Groq's team will do the same for inference: incorporating LPU technology, or creating something new, to make Nvidia's chips competitive with custom silicon on inference workloads.
Will it work?
Training models made Nvidia dominant. But inference requires a different approach, and general-purpose GPUs aren't the most efficient choice for a workload that rewards chips built to do one thing very well. Nvidia knows this. Its customers know it too.
By acquiring Groq’s team and technology, Nvidia is hedging against a future where specialised chips fragment the market. If it works, customers get purpose-built inference GPUs without leaving Nvidia’s ecosystem. If it doesn’t, alternatives are already shipping.
The clock is ticking. Google and Amazon are already deploying their own chips. OpenAI’s arrive in 2026. Nvidia just bought itself an asset to compete. What the Groq team builds next will determine whether that $20 billion bet preserved Nvidia’s dominance—or just delayed the inevitable.
If you enjoy this post, please click the ❤️ button and share it.
🦾 More than a human
Sam Altman’s New Brain Venture, Merge Labs, Will Spin Out of a Nonprofit
Sam Altman is cofounding a new startup called Merge Labs, which is being spun out of the nonprofit Forest Neurotech. The company is developing a brain–computer interface that uses ultrasound to read brain activity, rather than implanted electrodes, and aims to make the technology more practical and less invasive. Merge Labs will compete with companies like Neuralink as interest grows in using brain interfaces for medical treatment and human–machine interaction.
🧠 Artificial Intelligence
Meta Is Developing a New AI Image and Video Model Code-Named ‘Mango’
Meta’s AI chief, Alexandr Wang, shared that Meta is working on two new AI models set for release in the first half of 2026: Mango, which focuses on images and video, and Avocado, a text-based model designed to be better at tasks like coding.
Meta’s Yann LeCun targets €3bn valuation for AI start-up
Financial Times reports that Yann LeCun, a former Meta chief AI scientist, is in early talks to raise €500 million for a new start-up called Advanced Machine Intelligence Labs, which could be valued at about €3 billion before it officially launches. The company, expected to be announced in January, will work on advanced AI systems that better understand the physical world.
▶️ Sam Altman: How OpenAI Wins, ChatGPT’s Future, AI Buildout Logic, IPO in 2026? (58:22)
In this conversation, Sam Altman explains how OpenAI is responding to growing competition in AI by moving quickly, improving products, and investing at scale. He says staying slightly paranoid helps OpenAI spot threats early, but argues that long-term success won’t be decided by AI models alone. Instead, strong products, reliable infrastructure, and widespread user adoption—especially ChatGPT’s consumer popularity feeding into enterprise use—are what he believes will keep OpenAI ahead.
Nvidia and Alphabet VC arms back vibe coding startup Lovable at $6.6 billion valuation
Lovable, a Swedish vibe coding startup, has raised $330 million in a Series B round at a $6.6 billion valuation, with investors including Alphabet’s and Nvidia’s venture arms.
Google Fires Executives After Failing to Secure AI Memory Chips Amid Shortage
Google has reportedly fired several procurement executives after they failed to secure enough high-bandwidth memory (HBM), a key component for AI hardware, during a global shortage. By not locking in long-term agreements with major manufacturers such as Samsung and SK Hynix, Google was left exposed as competitors bought up available stock, highlighting how important strong hardware supply-chain planning has become in the AI race.
Google works to erode Nvidia’s software advantage with Meta’s help
Google is reportedly developing an internal project called TorchTPU to make its Tensor Processing Units (TPUs) run PyTorch, a popular deep learning framework created at Meta, more efficiently, reducing developers’ reliance on Nvidia’s CUDA software. By improving compatibility, possibly open-sourcing parts of the project, and working closely with Meta, Google aims to make it easier for customers to adopt TPUs and compete more strongly in the AI chip market as demand for alternatives to Nvidia grows.
Andrej Karpathy: 2025 LLM Year in Review
We are approaching the end of the year, which means “year in review” content is all over the place. If there is one you should read, it is this one from Andrej Karpathy. He notes that LLMs are emerging as a new kind of intelligence, simultaneously a lot smarter than he expected and a lot dumber than he expected. He describes them as more like “ghosts” than animals, with jagged intelligence that excels in some areas while failing in others, driven by new training methods rather than bigger models—making them extremely useful today, but still clearly unfinished.
New York Signs AI Safety Bill Into Law, Ignoring Trump Executive Order
Days after President Trump moved to block states from regulating AI, New York Governor Kathy Hochul has signed a new law that will require large AI companies to follow and publish safety plans from 1 January 2027. Companies earning over $500 million must report serious AI problems within 72 hours or face fines, with a new state office set up to enforce the rules.
Understanding AI Benchmarks
In this post, Shrivu Shankar explains how AI benchmarks work, how to read them, and what to pay attention to so you don't get misled by them. The post also briefly explains what popular benchmarks actually measure, as well as their pros and cons.
Introducing Bloom: an open source tool for automated behavioral evaluations
Anthropic presents Bloom, an open-source tool that helps researchers quickly test how often and how strongly AI models show certain behaviours. Bloom automatically creates many scenarios for a chosen behaviour and scores the model’s responses, producing results that closely match human judgments. By making behavioural evaluation faster, more flexible, and easier to run at scale, Bloom supports better research into AI alignment and safety.
▶️ MiniMax M2.1, Why it matters (4:49)
This video discusses the MiniMax M2.1 model release and how it reflects a broader shift in the AI industry towards agentic performance. Built on the base M2 model, M2.1 is fine-tuned to prioritise how well a model can take actions and operate in real-world environments, rather than simply what it knows. It also highlights how quickly MiniMax went from version M2 to M2.1, suggesting that we may soon see quicker iteration loops and a faster rate of improvement in AI performance.
▶️ The most complex model we actually understand (35:28)
Modern AI is poorly understood, but progress is being made in this area. This video explains how researchers discovered the phenomenon of grokking—a sudden, qualitative shift in how a model solves a problem during training—and how a follow-up study showed that a small neural network learned to perform modular addition using trigonometric functions, making it one of the most complex neural networks we currently understand. It is a fascinating video, and I highly recommend watching it.
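To make “using trigonometric functions” concrete, here is a tiny numpy sketch of the mechanism that follow-up work describes: each number is represented as sines and cosines at a few frequencies, and the angle-addition identity makes the score for answer c peak exactly when c ≡ a + b (mod p). This is my illustration of the idea, not code from the study, and the specific frequencies are illustrative.

```python
import numpy as np

p = 113             # modulus for computing (a + b) mod p
freqs = [1, 5, 17]  # a few illustrative frequencies; a trained network learns its own handful

def logits(a, b):
    # Score every candidate answer c. cos(2*pi*k*(a + b - c)/p) equals 1 for
    # every frequency k exactly when c == (a + b) mod p, so the correct answer
    # gets the highest total score.
    c = np.arange(p)
    return sum(np.cos(2 * np.pi * k * (a + b - c) / p) for k in freqs)

a, b = 40, 99
print(int(np.argmax(logits(a, b))), (a + b) % p)  # both print 26
```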
🤖 Robotics
Frozen Waymos backed up San Francisco traffic during a widespread power outage
A power outage in San Francisco caused Waymo’s self-driving taxis to stop in the streets and create traffic jams. With traffic lights down and poor mobile connections, the taxis could not move safely, so Waymo paused its ride-hailing service. The company later resumed trips and said it would improve how its taxis handle major outages in the future.
U.S. Bans New China-Made Drones, Sparking Outrage Among Pilots
US authorities have banned the import and sale of new drones and key equipment from major Chinese makers like DJI and Autel, citing national security concerns over data access and potential interference. Although drones already in use are not affected, many American pilots and small businesses are worried because they rely on these affordable and widely used models and say there are few good alternatives.
Robotics industry reacts to iRobot’s bankruptcy
iRobot, the maker of Roomba, announced a Chapter 11 bankruptcy last week. Although many had expected this outcome, the news nevertheless sent shockwaves through the robotics industry. This post compiles responses from robotics leaders, engineers, and executives (some of whom previously worked at iRobot), sharing their thoughts, postmortems, and warnings drawn from iRobot’s story.
▶️ How Disney’s Avatar Robots Are Designed to Feel Real (28:56)
Here is a conversation with Ben Procter, an art director, production designer and visual effects artist, whose work can be seen in many movies, including Transformers and Prometheus. The conversation revolves around designing sci-fi robots that feel real, and dives deep into how real robotics, industrial design, and human biomechanics shape some of cinema’s most iconic machines. Ben Procter explains how mech suits are piloted, why insects, not humans, are the real inspiration, and how realism makes science fiction hit harder.
▶️ TRON 2 Officially Launched (2:42)
Chinese robotics company LimX presents the TRON 2, its new modular robotics platform that can easily switch between wheeled and legged configurations, with or without a torso.
These are the world’s smallest programmable robots
Researchers have created the world’s smallest autonomous robots—tiny, light-powered machines that can sense their surroundings, move on their own, and work for months at a very low cost. Barely visible to the eye, they use clever electrical forces to move through water and run on extremely low-power computers programmed by light. The robots could be used in future medical applications, such as monitoring individual cells, and in building very small devices, opening new possibilities in microscale robotics.
🧬 Biotechnology
Scientists Build ‘Speed Scanner’ to Test Thousands of Plant Gene Switches at Once
Scientists have created a new tool called ENTRAP-seq that can test thousands of plant gene switches at the same time, greatly speeding up plant genetics research. Instead of studying one gene at a time, the method works inside individual plant cells and uses AI to quickly show how different switches turn genes on or off. This could make it much faster to develop crops that grow better, produce more food, or handle stress like drought and disease.
Scientists create replica human womb lining and implant early-stage embryos
Scientists have created a lab-grown version of the human womb lining and shown that early IVF embryos can implant into it, allowing researchers to observe the crucial but poorly understood first steps of pregnancy. The embryos attached to the tissue, produced pregnancy hormones, and began forming early placental cells, giving researchers a window into the chemical signals involved in implantation. The work could help explain why many pregnancies fail so early and lead to better treatments and higher success rates for IVF.
AstraZeneca, Daiichi’s breast cancer drug gets FDA nod as first-line treatment
The US Food and Drug Administration has approved AstraZeneca and Daiichi Sankyo’s cancer drug Enhertu to be used with Roche’s Perjeta as a first treatment for adults with advanced HER2-positive breast cancer. The approval follows a large study showing the combination helped patients live longer without their disease getting worse compared with standard treatment, and caused tumours to shrink or disappear in most cases, although full survival results are not yet available.
Thanks for reading. If you enjoyed this post, please click the ❤️ button and share it!
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"