Nvidia's next move - Sync #562
Plus: Anthropic sues US Defense Department; Meta's new models delayed; Google does not rule out ads in Gemini; more humanoids for home; Meta buys Moltbook; autoresearch by Andrej Karpathy; and more!
Hello and welcome to Sync #562!
As this issue goes out a day before Nvidia’s annual GTC conference, we’ll take a look at Nvidia’s position and how Jensen Huang is building the foundations for the next stage in the company’s history.
Elsewhere in AI, Anthropic sued the US Department of Defense and launched the Anthropic Institute; Meta bought Moltbook and delayed its new models; and Google made a deal with the Pentagon to provide AI agents for unclassified work, while not ruling out ads in Gemini. Meanwhile, Mira Murati and Yann LeCun have raised billions for their AI startups, Andrej Karpathy released autoresearch, and Elon Musk is overhauling xAI.
Over in robotics, we have new humanoid robots for the home, Zoox arriving in Dallas and Phoenix, and a look at how Pokémon Go is helping to build better robots.
Beyond that, this week’s issue of Sync also features a Chinese brain–computer interface startup raising $21 million just months after being founded, what an AI-driven economy could look like, a new agentic AI model for biology, and more.
Enjoy!
Nvidia’s next move
Nvidia is in a tricky position. The $5 trillion semiconductor giant posted $216 billion in annual revenue, up 65% from the year before. It is the most valuable company in the world. It sits at the centre of the largest infrastructure buildout in a generation, selling the chips that power almost every major AI system on the planet. By any conventional measure, Nvidia has never been stronger.
But the same customers who made Nvidia essential are now working to make it optional. The markets it would like to reach are building alternatives without it. Nvidia is not in decline. But the window in which it can take its dominance for granted is closing, and the company is moving fast to prepare for what comes next.
The board is shifting
The biggest threat to Nvidia is now its own customers. The companies that spent billions buying Nvidia GPUs are building their own chips. Google has TPUs. Microsoft has Maia. Amazon has Trainium. For each of them, the investment in talent and infrastructure to design custom silicon is worth it—purpose-built chips can be optimised to run their specific AI workloads more cheaply and efficiently than a general-purpose GPU. Over time, more and more of those workloads will migrate off Nvidia hardware and onto proprietary chips.
This does not mean Nvidia’s revenue disappears overnight. The big cloud providers will still buy Nvidia GPUs to offer them to customers through AWS, Google Cloud and Azure. But over time, they won't be using them to train and serve their own AI models. The largest AI companies are building toward full vertical integration, and once they get there, they will not need Nvidia the way they do today.
This is not hypothetical. Google trained Gemini 3 entirely on its own TPUs. Anthropic now trains and serves Claude across both Google's TPUs and Amazon's Trainium chips. The migration away from Nvidia is already underway.
Even if Nvidia designs a faster, cheaper inference chip (with the talent and technology from its Groq acquihire), it will not change the calculus. Google, Amazon and Microsoft are not pursuing custom silicon because Nvidia’s current chips are inadequate. They are doing it because controlling the full stack—from applications to models all the way down to silicon—is a structural advantage they intend to own permanently.
Nvidia also faces pressure from traditional competitors. AMD's upcoming MI400 series and its Helios rack-scale platform are attracting serious interest. Meta and OpenAI have both signed major multi-year deals to deploy them. AMD remains a distant second in data centre GPU revenue, but the gap is narrowing.
Blocked positions
If the domestic market is narrowing, the obvious move is to look abroad. But that option has its own problems.
China would have been Nvidia’s most natural growth market. The country is in the middle of its own AI revolution, and demand for high-end GPUs is enormous. But the US government treats advanced AI chips as a strategic asset, and export controls limit what Nvidia can sell there. That is unlikely to loosen under the current administration—or any foreseeable one.
The restrictions are also accelerating exactly what they were designed to prevent. Chinese tech giants and the government know that dependence on an American company for high-end GPUs is a strategic vulnerability—especially amid trade wars and chip export bans. They are doing exactly what their US counterparts are doing: investing heavily in domestic, custom silicon. Huawei’s Ascend chips are not yet competitive with Nvidia’s best, and Nvidia’s hardware remains sought-after in China. But the Chinese semiconductor industry is closing the gap. Eventually, it will catch up, and when it does, there will be no market left for Nvidia there either.
Threat from the flank
The challenge is not only coming from trillion-dollar hyperscalers and geopolitics. It is also coming from Apple.
When OpenClaw went viral in late January, it triggered a run on Mac Minis. Developers wanted an always-on, low-power machine to run a personal AI agent, and the $599 Mac Mini M4 turned out to be the perfect fit—quiet, efficient, capable enough for local inference on small models. For a brief period, units sold out at multiple retailers. Nvidia’s closest entry in this space is the DGX Spark, a compact desktop with 128GB of unified memory and a $4,700 price tag. It is a capable machine, but at that price it competes less with the base Mac Mini and more with a MacBook Pro or Mac Studio—machines that are also excellent for running local models and come with Apple’s mature ecosystem.
The deeper issue is that a growing share of day-to-day AI work—fine-tuning, inference, agent orchestration, prototyping—can now run on Apple Silicon. Training frontier models still requires thousands of Nvidia GPUs in massive data centres. But as open models get smaller and more efficient, the hardware floor for useful AI keeps dropping, and Nvidia has nothing in that space.
Securing the base
Data centre chips are Nvidia's cash machine—they account for roughly 90% of the company's revenue. If the hyperscalers are building their own chips and China is off the table, Nvidia needs other customers who will keep buying its GPUs at scale.
One answer is the neoclouds—a new generation of cloud providers built specifically around AI workloads. This week alone, Nvidia participated in Nscale’s $2 billion Series C, which valued the UK-based AI data centre startup at $14.6 billion. It also put $2 billion into Nebius Group to jointly develop AI data centre infrastructure.
The logic is straightforward. Neoclouds like Nscale, Nebius, CoreWeave and Lambda do not have chip design teams. They are not building custom silicon. Their entire business model depends on buying the latest Nvidia GPUs and renting them out. By investing in these companies—and giving partners like Nebius early access to its latest hardware—Nvidia is cultivating a customer base that is structurally dependent on its products.
Governments are another growing market. Under the banner of "sovereign AI," Nvidia is striking deals with countries that want to build national AI infrastructure on their own soil. The UK, France, Canada, South Korea, Singapore and others are all buying Nvidia's full-stack systems. Last fiscal year, sovereign AI revenue tripled year-over-year to over $30 billion—nearly 14% of Nvidia's total revenue.
Opening new lines
Nvidia is also pushing into entirely new markets.
One direction is robotics, or “physical AI” as Jensen Huang likes to call it—the application of AI to the real world rather than to text and images on a screen. Nvidia has been building toward this for years. Its Isaac platform provides tools for developing and simulating robotic applications. GR00T is a foundation model for humanoid robots. The Jetson family of compact computers targets edge AI and robotics deployments where cloud connectivity is impractical.
The shift is even visible in Huang's keynotes. Over the past few years, robotics has claimed a growing share of stage time—Huang now presents with humanoid robots on the screen behind him and physical robots on stage around him.
Self-driving cars are the other bet. Waymo, Tesla and Zoox will not be customers—they design their own silicon and software. But legacy car manufacturers are a different story. They know how to build cars. They do not know how to build autonomous driving systems. Nvidia’s pitch to them is a complete package: DRIVE AGX for the hardware, DRIVE AV for the software, and now Alpamayo—an open portfolio of AI models, simulation tools and driving datasets designed to bring reasoning capabilities to autonomous vehicles, launched at CES 2026.
Whether the car industry moves fast enough to make this a material revenue stream for Nvidia in the near term is an open question. But the strategic logic is sound: find industries where AI is transformative, where customers lack the capability to build it themselves, and where Nvidia can sell the full stack and lock them into its ecosystem.
The gambit
Perhaps the most interesting move is Nvidia’s push into AI model development. The company will spend $26 billion over five years building open-weight models, according to financial filings reported by Wired. That is frontier-lab money. It signals an ambition to compete not just as a chipmaker but as a model developer.
The latest release is Nemotron 3 Super, a 120 billion-parameter reasoning model with a hybrid architecture designed for efficient inference. According to independent benchmarks by Artificial Analysis, Nemotron 3 Super delivers roughly 10% higher throughput per GPU than OpenAI’s gpt-oss-120b while scoring higher on intelligence benchmarks. The model is open-weight, permissively licensed, and comes with its full training methodology and datasets published. An even larger model, Nemotron 3 Ultra, with around 500 billion parameters, is expected to be released soon and could challenge Chinese models for the top spot in the open-weight AI space.
On the surface, Nvidia is filling a gap. Meta pioneered open-weight AI with Llama, but Mark Zuckerberg has signalled that future models may not be fully open. The leading American models from OpenAI, Anthropic and Google remain proprietary. Meanwhile, the strongest open models now come overwhelmingly from China: DeepSeek, Alibaba’s Qwen, Kimi K2 from Moonshot AI, MiniMax. Much of the global AI community—startups, researchers, independent developers—has gravitated toward Chinese models because nothing else of comparable quality is freely available.
But there is a deeper logic. Models built and optimised for Nvidia hardware reinforce demand for that hardware. If the most popular open models run best on Nvidia GPUs, every startup, researcher and enterprise deploying them becomes a potential customer. It is the CUDA playbook applied to models: build the ecosystem, and the hardware sells itself.
There is also a geopolitical dimension. The next DeepSeek model is widely expected to have been trained entirely on Huawei’s chips—a development that could accelerate adoption of Chinese hardware, especially domestically. Nvidia’s open models offer a counterweight: capable, open, American-made, and optimised for Nvidia silicon.
The long game
It would be a mistake to underestimate Jensen Huang. He has led Nvidia since the company was founded 32 years ago. He navigated the bloodbath of the early graphics card industry, when dozens of competitors went bankrupt. He saw the opportunity in general-purpose GPU computing before anyone else and built CUDA, the software ecosystem that made Nvidia indispensable to AI researchers a decade before the current boom. He rode the crypto wave of the late 2010s and, when it collapsed, pivoted to generative AI before most people had heard of a large language model.
His next move is to turn Nvidia from a chipmaker into a platform. Through its inference service and NIM microservices, Nvidia already hosts third-party open models like Qwen, DeepSeek and Kimi K2, making them easy to deploy on Nvidia hardware. Even if developers choose a Chinese model, Nvidia wants them running it on Nvidia GPUs.
But the platform extends well beyond language models. Nvidia offers specialised NIM microservices for drug discovery through BioNeMo, medical imaging through Clara, industrial simulation through Omniverse, and more—each one bundling optimised models, inference runtime and hardware. The more industries that adopt this stack, the harder it becomes to leave.
Huang sees what is coming. The partners who made Nvidia a $5 trillion company—Google, Amazon, OpenAI, Microsoft—are becoming competitors. The next decade will look fundamentally different from the last one. Nvidia’s moves into neoclouds, robotics, autonomous vehicles and open models are not experiments. They are the foundations for the next stage of the company’s history—not as a chipmaker that sells to the AI industry, but as a platform that delivers AI to every other industry.
Huang will lay out the next chapter of that strategy tomorrow, at Nvidia's annual GTC conference.
If you enjoy this post, please click the ❤️ button and share it.
What happened this week?
🦾 More than a human
Chinese brain interface startup Gestala raises $21M just two months after launch
Chinese startup Gestala has raised $21.6 million to develop non-invasive, ultrasound-based brain–computer interfaces that avoid the risks of brain surgery. The company aims to finish a prototype by year-end and is initially targeting chronic pain management, with longer-term plans for mental health and neurodegenerative conditions. Gestala plans to leverage China's manufacturing ecosystem and lower clinical trial costs to move faster than international rivals, while also building a large ultrasound brain dataset to train AI models for decoding neural signals.
🔮 Future visions
▶️ What Happens When AI Runs the Entire Economy? (28:36)
In this video, Isaac Arthur imagines a world in which AI controls and runs the entire economy. He explores what labour would look like in an AI-driven economy and what values the AI could be programmed with. Arthur also touches on government, power, and economic sovereignty, and outlines both the best- and worst-case scenarios for humanity in such a world.
🧠 Artificial Intelligence
Anthropic Sues U.S. Defense Department, Pete Hegseth for Targeting the Company
Anthropic has sued the Trump administration for labelling it a security threat and trying to cut its federal contracts, calling the moves unlawful retaliation after the company pushed for limits on how the Pentagon could use its AI tools. The dispute arose when Anthropic sought guarantees against uses like mass surveillance and autonomous weapons, which the Defense Department refused. Over 30 employees from OpenAI and Google DeepMind, including DeepMind’s chief scientist Jeff Dean, filed a brief backing Anthropic, calling the designation an improper use of power and warning it could chill open debate about AI safety across the industry. Meanwhile, WIRED reports that President Trump is currently finalising an executive order that would formally ban usage of Anthropic tools across the government.
Google to Provide Pentagon With AI Agents for Unclassified Work
Google is rolling out AI assistants across the Pentagon's three-million-person workforce to automate routine tasks like summarising meetings and creating budgets. The tools will start on unclassified networks, with plans to expand to classified systems. The move comes as the Defense Department expands its AI partnerships with Google, OpenAI, and xAI, while its relationship with Anthropic has collapsed into litigation after the Pentagon designated the company a supply-chain risk over disagreements about usage guardrails. Early results look promising—one Army team cut exercise planning from six months to six weeks—but most users haven't yet been trained to use the technology properly.
Meta Delays Rollout of New A.I. Model After Performance Concerns
According to the New York Times, Meta's new AI model, code-named Avocado, has fallen behind rivals like Google, OpenAI, and Anthropic in internal tests, pushing its release back to at least May. Despite massive spending and hiring—including bringing in Scale AI's CEO as Meta's new AI chief—the company is struggling to keep up, with executives even considering licensing Google's technology as a stopgap. The report highlights growing internal tensions and raises questions about whether Meta can deliver on Zuckerberg's bold promises to reach the frontier of AI development.
Oracle is building yesterday’s data centers with tomorrow’s debt
OpenAI is stepping back from expanding its Stargate data centre partnership with Oracle in Texas because it wants newer, faster Nvidia chips at other sites instead. The core problem is that AI chips now improve every year, but data centres take one to two years to build—so facilities can be outdated before they even switch on. Oracle is especially exposed because it has funded its AI expansion with over $100 billion in debt, unlike rivals like Google and Amazon, which can pay from their own profits. With Oracle's stock down sharply and earnings due soon, investors are watching closely to see how the company plans to manage this growing mismatch.
Musk Says xAI Must Be Rebuilt as Co-Founders Exit
Elon Musk is overhauling xAI after admitting it "was not built right first time around." Several co-founders have left as Musk pushes through his second reorganisation in a month, hiring new talent from rival firms to try to catch up with OpenAI and Anthropic—especially in coding, where he says xAI currently falls short. The shakeup follows xAI's merger into SpaceX at a $250 billion valuation and a growing push to tie the company more closely to Tesla.
Google Is Not Ruling Out Ads in Gemini
In an interview with WIRED, Google SVP Nick Fox confirmed the company isn't ruling out ads in Gemini, though it's currently experimenting within AI Mode, its Gemini-powered Search product. Fox said Google's strong revenue—over $400 billion in 2025—gives it the luxury of patience compared to OpenAI, which faces pressure to monetise ChatGPT quickly. He also highlighted Google's push toward "Personal Intelligence," an opt-in feature drawing on users' Gmail and Calendar data, calling personalisation the "holy grail" of Search, while stressing that private data won't be shared with advertisers.
Anthropic launches marketplace for Claude-powered software
Anthropic has launched Claude Marketplace, where enterprise customers can use their existing Anthropic spending to buy third-party apps built on Claude—starting with partners like Snowflake, Harvey, and Replit—with no commission taken. The new platform mirrors strategies by AWS and Azure but is clearly aimed at locking enterprise customers deeper into the Claude ecosystem.
Microsoft launches Copilot Cowork
Microsoft is introducing Copilot Cowork to Microsoft 365, which moves Copilot beyond chat-based assistance toward autonomous task execution. According to Microsoft, Copilot Cowork can manage calendars, prepare meeting materials, conduct research, and build launch plans—all running in the background while checking in for approval. If the name Copilot Cowork sounds familiar, that's no coincidence—Microsoft has partnered with Anthropic and integrated the technology behind Claude Cowork directly into the product. The feature is rolling out to preview users in late March 2026.
1M context is now generally available for Opus 4.6 and Sonnet 4.6
Anthropic has made the full 1-million-token context window for Claude Opus 4.6 and Sonnet 4.6 generally available at standard pricing, with no long-context premium or beta header required. The feature is live across Anthropic's own platform, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.
Yann LeCun Raises $1 Billion to Build AI That Understands the Physical World
AMI, an AI startup founded by Yann LeCun, Meta's former chief AI scientist, has raised $1 billion to build AI "world models" that understand the physical world—a direct challenge to the LLM-scaling approach pursued by OpenAI, Anthropic, and others. The company, valued at $3.5 billion and backed by Bezos Expeditions, Eric Schmidt, and Mark Cuban, will work initially with partners like Toyota and Samsung on industrial applications before pursuing a longer-term "universal world model" for general intelligence.
Thinking Machines Lab inks massive compute deal with Nvidia
Thinking Machines Lab, the AI research startup founded by former OpenAI co-founder Mira Murati, has signed a multi-year strategic partnership with Nvidia involving at least one gigawatt of Nvidia's Vera Rubin systems starting in 2027. The seed-stage company, valued at over $12 billion despite being barely a year old, has raised more than $2 billion from investors, including Andreessen Horowitz and Nvidia itself.
Meta hires duo behind Moltbook
According to Axios, Meta has acquired Moltbook, a viral social network built for AI agents, bringing its co-creators Matt Schlicht and Ben Parr into Meta Superintelligence Labs (MSL) under Alexandr Wang. Meta did not disclose Moltbook’s purchase price. The acquisition comes shortly after OpenAI hired the creator of OpenClaw, another agentic AI project that exploded in popularity recently. In an internal post seen by Axios, Meta’s Vishal Shah said existing Moltbook customers can continue using the platform—though the company signalled the arrangement is temporary.
UK’s multibillion AI drive is built on ‘phantom investments’
A Guardian investigation has found that the UK government's flagship AI investments, led by Nvidia-backed firms CoreWeave and Nscale, rest on inflated claims and poor oversight. CoreWeave's £1 billion commitment amounted to renting space in existing datacentres rather than building new ones, while Nscale's promised supercomputer site remains a scaffolding yard. The government admitted Nscale's $2.5 billion pledge is merely an "intention to commit capital" with no audit mechanism in place, raising serious doubts about the UK's push to harness AI for economic growth.
Musk unveils joint Tesla-xAI project ‘Macrohard,’ eyes software disruption
Elon Musk has unveiled "Macrohard," a joint Tesla–xAI project that pairs xAI's Grok AI model with a Tesla-built agent capable of watching a computer screen and controlling keyboard and mouse actions, with a goal to mimic what entire software companies do. The system will run on Tesla's own chips alongside Nvidia hardware.
▶️ AI Chip & Silicon Round-up 2026 (13:14)
In this video, SemiAnalysis lists the most notable AI chips expected to make an impact in 2026. The lineup covers Qualcomm, AMD, Google, Cerebras, Groq, Nvidia, Meta, Amazon, Microsoft, and Intel, spanning both GPUs and custom ASICs. The overall message is that competition is intensifying across the board, even as questions remain about timing, software support, and real-world deployment for several of these chips.
Introducing The Anthropic Institute
Anthropic is launching The Anthropic Institute to study and communicate the societal challenges posed by increasingly powerful AI. It brings together engineers, economists, and social scientists working across red-teaming, societal impacts, economic research, AI forecasting, and AI’s interaction with the legal system. Its central aim is to leverage Anthropic’s insider perspective as a frontier AI lab to share findings openly and engage with those navigating AI-driven change.
AI Coding Startup Cursor in Talks for About $50 Billion Valuation
Cursor is reportedly seeking new funding that would value it at around $50 billion, nearly double its valuation from late last year, Bloomberg reports. The startup's revenue has already passed $2 billion annually, making it one of the fastest-growing companies in tech. The discussions remain preliminary and may not lead to a deal.
Amazon wins court order to block Perplexity’s AI shopping agent
Amazon has won a court order temporarily blocking Perplexity's Comet AI browser from accessing its shopping site. The judge found strong evidence that Comet scraped Amazon's website without permission, posing risks to customer data and advertising systems. Perplexity plans to appeal, calling the lawsuit a "bully tactic." The case is part of Amazon's wider push to keep third-party AI agents off its platform while building its own tools.
Meta Preparing to Deploy Four New Homegrown Chips to Handle AI
Meta is planning to roll out four new in-house AI chips—MTIA 300, 400, 450, and 500—by the end of 2027 to reduce its dependence on suppliers like Nvidia and AMD and lower costs. The first of these is already in production, with the rest at various stages of development. To speed things up, Meta acquired chip startup Rivos and its 400-plus engineers last year. The company is still spending tens of billions on external chips alongside this effort, taking a dual approach of buying off-the-shelf hardware while building custom silicon for its own specific needs.
Phi-4-reasoning-vision and the lessons of training a multimodal reasoning model
Microsoft has released Phi-4-reasoning-vision-15B, a compact open-weight multimodal model that handles diverse vision-language tasks. According to Microsoft, the new model excels at maths, science reasoning, and UI understanding. Trained on far less data than comparable models, it achieves competitive accuracy at significantly lower inference cost by selectively engaging chain-of-thought reasoning only when beneficial. The model is available under a permissive licence on Microsoft Foundry, HuggingFace, and GitHub.
autoresearch by Andrej Karpathy
autoresearch is a new open-source project from Andrej Karpathy that lets an AI agent autonomously improve an LLM training setup. The agent tweaks the code, trains for five minutes, checks if the result got better, and repeats—running around 100 experiments overnight while you sleep. The human’s job shifts from writing training code to writing instructions that guide the agent. It’s a small, deliberate proof of concept, but it points toward a future where AI agents drive the research process themselves.
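The core loop is simple enough to sketch. Below is a toy, self-contained illustration of the same idea (mutate a configuration, run a short "training" pass, keep the change only if the metric improves). The toy objective and the multiplicative mutation rule here are illustrative stand-ins of mine, not autoresearch's actual code:

```python
import random

def train_and_eval(lr: float) -> float:
    """Stand-in for a five-minute training run: returns a validation 'loss'.
    This toy objective is minimised at lr = 0.01."""
    return (lr - 0.01) ** 2

def autoresearch_loop(n_experiments: int = 100, seed: int = 0) -> tuple[float, float]:
    """Hill-climb a single hyperparameter: tweak, train briefly, keep if better."""
    rng = random.Random(seed)
    best_lr = 0.1                                      # initial configuration
    best_loss = train_and_eval(best_lr)
    for _ in range(n_experiments):
        candidate = best_lr * rng.uniform(0.5, 2.0)    # agent tweaks the config
        loss = train_and_eval(candidate)               # short training run
        if loss < best_loss:                           # keep only improvements
            best_lr, best_loss = candidate, loss
    return best_lr, best_loss

if __name__ == "__main__":
    lr, loss = autoresearch_loop()
    print(f"best lr={lr:.4f}, loss={loss:.6f}")
```

In the real project the "tweak" is an LLM agent editing training code and the evaluation is an actual training run, but the overnight dynamic is the same: a hundred cheap experiments, each kept or discarded on measured results.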
The Linux Kernel Will Soon Be MIT-Licensed and Copyleft Will Be Dead Within 5 Years
This article argues that the GPL licence is in terminal decline, a trend now accelerated by AI. The key example is chardet, a popular Python module published under the GPL, which was reimplemented in just five days using Claude and released under the less restrictive MIT licence. Combined with cases like Cloudflare recreating Next.js with Claude Code, this points to a near future where AI makes it trivial to rewrite software under permissive licences.
🤖 Robotics
▶️ Helix 02 Living Room Tidy (2:27)
Figure is back with another video, this time showing how its humanoid robot cleans a living room.
Zoox is arriving in Dallas and Phoenix
Zoox, Amazon's self-driving car subsidiary, is expanding into Phoenix and Dallas, bringing its US testing footprint to ten cities. The company will start by mapping streets with human-driven SUVs before moving to autonomous testing and eventually deploying its purpose-built robotaxis. Zoox still needs further federal approval to launch a full commercial service, but says it has already completed over a million autonomous miles and carried more than 300,000 riders in Las Vegas and San Francisco.
Humanoid maker Sunday reaches $1.15 billion valuation to build household robots
Sunday, a humanoid robotics startup, has hit a $1.15 billion valuation after raising $165 million from investors including Coatue Management and Tiger Global. The company is building a household humanoid robot called Memo to help with everyday chores like laundry and tidying up.
How Pokémon Go is helping robots deliver pizza on time
Niantic Spatial, spun out of the company behind Pokémon Go, is repurposing billions of location-tagged images captured by players to build a visual positioning system accurate to within a few centimetres. Its first real-world application is helping delivery robots from Coco Robotics navigate cities where GPS is unreliable. Longer term, the company aims to build a continuously updated, machine-readable digital replica of the real world—a living map that could serve as the foundation for a new generation of robots and AI agents.
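At its core, a visual positioning system answers "where am I?" by matching what a camera sees against imagery whose location is already known. A deliberately simplified sketch of that matching step follows; the three-number descriptors and hard-coded reference locations are invented for illustration, and real systems match millions of local features and then refine the pose geometrically:

```python
import math

# Toy database mapping an image descriptor (a short feature vector)
# to a known location (lat, lon). Real systems index features extracted
# from billions of player-captured images.
DATABASE = [
    ([0.9, 0.1, 0.3], (37.7749, -122.4194)),   # hypothetical San Francisco scene
    ([0.2, 0.8, 0.5], (32.7767, -96.7970)),    # hypothetical Dallas scene
    ([0.4, 0.4, 0.9], (33.4484, -112.0740)),   # hypothetical Phoenix scene
]

def l2(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def localise(query_descriptor: list[float]) -> tuple[float, float]:
    """Return the location of the closest-matching reference image."""
    _, location = min(DATABASE, key=lambda entry: l2(entry[0], query_descriptor))
    return location
```

The point of the sketch is the inversion at the heart of the approach: instead of asking a satellite where you are, the query image inherits the known position of its best visual match, which is why dense, location-tagged imagery is such a valuable asset.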
China leads the humanoid robot race — but the U.S. still has a shot
Chinese firms now hold over 90% of global humanoid robot sales, driven by industrial policy, AI investment, and state enterprise demand, according to Omdia analyst Lian Jye Su. US companies lag in production scale but remain technically strong. Su argued the industry's key bottleneck is a lack of real-world training data, making mass deployment—even at a loss—a necessary step, much like AI's capital-intensive phase before ChatGPT.
Designing Robots for Human Spaces
Frog presents Nome, an interesting concept for a home robot that prioritises fitting naturally into daily life rather than just being technically impressive. The team studied household routines to guide the robot's shape, posture and movement, drawing on principles from sculpture and animation to make it feel approachable rather than mechanical. The core argument is that as robots move into our homes, the biggest challenge is not what they can do but how they make people feel—and that getting adoption right is fundamentally a design problem.
🧬 Biotechnology
Stem Cell Treatments For Parkinson’s And Heart Failure Approved in World First
Japan has approved two world-first treatments using reprogrammed stem cells—one for Parkinson's disease that implants dopamine-producing cells into the brain, and another that uses cell sheets to help repair damaged hearts. Both could be available to patients as early as this summer, though approvals were based on small trials under a fast-track system. The therapies represent a significant step for regenerative medicine, potentially moving beyond symptom management to addressing the root causes of these conditions.
One vaccine may provide broad protection against many respiratory infections and allergens
Researchers have created an intranasal vaccine that, in mice, protects against a wide range of respiratory viruses, bacteria, and allergens for up to three months. Rather than targeting specific pathogens, it works by sustaining the lungs' innate immune response alongside adaptive immunity—a fundamentally different approach to vaccination. Human trials are planned, with a universal respiratory nasal spray potentially available within five to seven years.
Prime Medicine to seek approval for gene-editing treatment after two-patient trial
Prime Medicine has brought back a gene-editing treatment for a rare immune disorder after the FDA suggested it could be approved based on results from just two patients. The company had abandoned the therapy last year because so few people have the condition, but new FDA fast-track options and the chance to earn a voucher worth over $150 million have made it worth pursuing again.
Eubiota: Agentic AI for Autonomous Microbiome Discovery
Eubiota is an open-source modular agentic framework for autonomous discovery in the gut microbiome, using specialised agents for planning, execution, verification, and synthesis, coordinated through shared memory and reinforcement learning. The team reports 87.7% benchmark accuracy, outperforming GPT-5.1 by 10.4%, with four experimentally validated case studies demonstrating end-to-end discovery. The framework is available on GitHub and on HuggingFace.
Thanks for reading. If you enjoyed this post, please click the ❤️ button and share it!
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"