AWS re:Invent 2025 - Sync #548
Plus: ads in ChatGPT are coming; code red at OpenAI; Gemini 3 Pro Deep Think is out; DeepSeek-V3.2; Anthropic IPO; Mistral 3; Claude Opus Soul document; robotic Olaf; how to print a human; and more!
Hello and welcome to Sync #548!
This week, Amazon held its annual AWS re:Invent conference, where it showcased everything new coming to its AWS platform. We will take a closer look at the announcements from the event and how AWS plans to become the default place to run enterprise AI workloads.
Elsewhere in AI, OpenAI has reportedly declared a “code red” as competition with Google intensifies, while rumours of incoming adverts in ChatGPT continue to grow. In addition, Claude’s Soul document was leaked, Anthropic is said to be preparing for an IPO, and Apple has reshuffled its AI leadership. We also have several new models being released, including Gemini 3 Deep Think, DeepSeek-V3.2, Mistral 3, and Runway Gen-4.5.
Over in robotics, the FAA is investigating an Amazon delivery drone crash, Disney has taken us behind the scenes of its Imagineering R&D labs and revealed a robotic Olaf from Frozen, and a Chinese robotics company has transformed its bi-wheeled robot into a dinosaur.
This week’s issue of Sync also features an AI system that produced never-before-seen proteins, an examination of how bio-focused AI infrastructure could accelerate biotech, why 3D printing human organs remains so challenging, and more!
Enjoy!
AWS re:Invent 2025
Most cloud vendors talk about models. AWS spent re:Invent 2025 talking about where those models run.
Most enterprises still struggle to get measurable value from AI. Costs remain unpredictable, data can’t always leave regulated environments, and autonomous systems are difficult to govern. At re:Invent 2025, AWS presented its answers to these problems. Instead of focusing on splashy models, it put forward custom silicon, sovereign data-centre deployments and controllable autonomous systems as the foundation of enterprise AI, as it attempts to reinvent itself as the default place for enterprises to run AI workloads.
We will focus on AI-centric announcements in this article from re:Invent 2025. However, that’s just a small subset of all that was announced. An expansive list can be found here, and all talks and presentations are available on the AWS Events YouTube channel.
New custom hardware: Trainium3 UltraServers, Trainium4 and Graviton5
AWS is leaning heavily on custom chips as the foundation of its AI strategy, positioning infrastructure—not models—as the key factor for enterprise AI deployment. This strategy is reflected in the launch of the Trainium3 UltraServer. Built on a 3 nm process, the system offers 4.4x more compute performance and four times greater energy efficiency compared to the previous generation. Each UltraServer packs 144 Trainium3 chips delivering 362 FP8 petaflops of compute with significantly reduced communication latency, which is essential for training and inference of large AI models. These systems can scale into clusters of up to one million Trainium chips, an order of magnitude greater than the largest publicly known AI clusters deployed today.
Graviton5, the latest version of Amazon’s general-purpose Arm CPU, reinforces this cost-driven strategy. With 192 cores, a five-times larger L3 cache and measurable reduction in inter-core latency, Graviton5 improves data-intensive workloads such as analytics, simulation, large database queries and pre-processing for model training. It also includes the Nitro Isolation Engine, which uses formal verification to guarantee workload isolation—important for shared AI environments.
AWS also previewed Trainium4, Amazon’s answer to chips such as Google’s Ironwood TPU, and confirmed that the new chip will support Nvidia’s NVLink Fusion interconnect. That is an interesting choice. Rather than attempting to displace Nvidia’s CUDA ecosystem (at least for now), Amazon is moving toward interoperability, offering customers a mixed-silicon path that reduces cost at scale without abandoning Nvidia’s developer base. Trainium4 is expected to ship in late 2026 or early 2027.
AWS launches AI Factories
The most interesting announcement was AWS AI Factories, a new service that installs fully managed AI infrastructure directly within customer data centres. Customers provide space and power, while AWS supplies compute, networking and the entire AI software stack. Each deployment functions as a private AWS Region, and combines Nvidia’s latest accelerated computing platforms with Trainium chips, high-speed AWS networking, storage services and direct access to Bedrock and SageMaker for model deployment and customisation.
The launch of AWS AI Factories is significant for two reasons. First, it shortens implementation timelines for governments and regulated industries where compliance and procurement cycles can stretch into years. Second, it addresses data sovereignty concerns by ensuring that sensitive data never leaves a customer’s facility while still enabling access to AWS-managed infrastructure and models.
A flagship deployment is planned with HUMAIN in Saudi Arabia, where AWS will build an “AI Zone” equipped with up to 150,000 AI chips and associated AI Factory infrastructure.
AWS bets on AI agents
Beyond chips and infrastructure, AWS advanced the idea that agents—not chat assistants—will drive enterprise productivity. The company introduced three autonomous frontier agents: Kiro, a software engineering agent that learns team workflows and can operate independently for extended periods; AWS Security Agent, which combines design review, static analysis and targeted penetration testing; and AWS DevOps Agent, which acts as an autonomous on-call operator.
All rely on upgrades to AgentCore, which now adds policy controls, performance evaluations across 13 dimensions and persistent memory to maintain context over multi-day workflows. The emphasis is on reliability and enforceability rather than model creativity. AWS assumes that enterprises will only deploy autonomous systems they can audit, constrain and measure.
New models, both in-house and from partners
AWS expanded its model portfolio on two fronts: internally developed models and models from third-party partners. The Nova 2 family—Lite, Pro, Sonic and Omni—extends Amazon’s in-house range beyond text generation into speech-to-speech and multimodal reasoning.
Nova 2 Lite is a small, fast, cost-effective reasoning model for everyday workloads, competing with the likes of GPT-5-mini, Claude Haiku 4.5 and Gemini 2.5 Flash.
Nova 2 Pro is Amazon’s most capable reasoning model, able to process text, images, video and speech to generate text. Amazon says the model is ideal for highly complex tasks like agentic coding, long-range planning, and sophisticated problem-solving—where the highest accuracy is essential. In terms of performance, it is roughly at the level of Claude Sonnet 4.5 or Grok 4, according to Artificial Analysis’s Intelligence Index.
Nova 2 Sonic positions itself directly against specialist speech models from ElevenLabs and Deepgram, while Omni competes with multimodal large models such as OpenAI’s GPT-5.1 and Google’s Gemini. The performance metrics disclosed by AWS focus on latency, contextual memory and cost of inference rather than raw benchmark scores, signposting where Amazon expects customer decision criteria to move.
Alongside Nova 2, AWS introduced Nova Forge, a service for building custom models known as Novellas. A Novella is a domain-specific variant of a Nova model created using pre-trained, mid-trained or post-trained checkpoints and augmented with proprietary customer data. This approach targets a persistent problem with post-training fine-tuning, where models frequently lose core reasoning capabilities or degrade over time. Pricing (reportedly at around $100,000 per year) targets enterprises that need meaningful adaptation but cannot justify bespoke model training costs.
In parallel, AWS added 18 fully managed open-weight models from partners including Google, Nvidia, MiniMax, Mistral AI, Qwen and OpenAI. The highlight is early access to Mistral Large 3 and the compact Ministral 3 family, which target organisations seeking lower deployment costs and swap-in flexibility without rewriting pipelines.
A shift from models to infrastructure
Taken together, the announcements sketch a company trying to reinvent itself not as the cloud where models are consumed, but as the infrastructure on which they are built, adapted and governed.
AWS’s strategy contrasts with the model-first approaches of Google, OpenAI and Anthropic, and with Microsoft’s tight alignment to a single vendor. Amazon is not attempting to win with a flagship model. Instead, it is targeting the economics of running AI at scale. Custom silicon and efficient servers aim to reduce training and inference costs, while a wide selection of third-party models available in AWS Bedrock makes it easier for enterprises to change models without rebuilding their pipelines.
In this view, AWS is not selling a specific intelligence. It is selling a platform where intelligence of any kind can run more cheaply, more securely and under stricter enterprise control. If AI becomes an infrastructural commodity rather than a differentiating capability, that is a bet that could pay off.
AWS CEO Matt Garman said in his keynote:
I believe that the advent of AI agents has brought us to an inflexion point in AI’s trajectory. It’s turning from a technical wonder into something that delivers us real value. This change is going to have as much impact on your business as the internet or the cloud.
Amazon didn’t win the cloud by building every application; it won by hosting them with AWS. Now it wants to do the same with AI infrastructure.
If you enjoy this post, please click the ❤️ button and share it.
🦾 More than a human
How to Print a Human
This article takes us into the labs of scientists who are trying to 3D print human organs to help address the shortage of donors. Early methods used animal organs that were cleaned of their cells, but researchers couldn’t easily add new human cells back in. Now they are using 3D printers to build organs from collagen and living cells. A team at Carnegie Mellon has printed small pieces of beating heart tissue, but organs still need nerves, strong structure, and proper blood vessels before they can work in a body. It may take decades to create fully working organs, but progress is steady, and scientists are hopeful.
🧠 Artificial Intelligence
Anthropic reportedly preparing for one of the largest IPOs ever in race with OpenAI
Anthropic is reportedly exploring an IPO as early as next year, in what could be one of the largest listings ever, according to the Financial Times. While investors are enthusiastic and see an IPO as a chance to get ahead of rival OpenAI, Anthropic says no decision has been made on whether or when it will go public.
OpenAI is preparing ads on ChatGPT
The question of whether ads are coming to ChatGPT has been asked for a while now. When asked about them, Sam Altman confirmed that there are plans for ads in ChatGPT, even though he is not a fan of the idea. Now, it is almost certain that ads are indeed coming to ChatGPT, after references to an “ads feature” with “bazaar content”, “search ad” and “search ads carousel” were found in the ChatGPT Android app.
DeepSeek Debuts New AI Models to Rival Google and OpenAI
DeepSeek has released two upgraded AI models, DeepSeek-V3.2 and V3.2-Speciale. According to DeepSeek, the new models match the performance of top systems like OpenAI’s GPT-5 and Google’s Gemini-3 Pro. The V3.2 model combines advanced reasoning with the ability to use tools such as search engines and code executors, while the Speciale version is designed for complex maths and long problem-solving. Both models are open source and can be downloaded from Hugging Face—DeepSeek-V3.2 here and DeepSeek-V3.2-Speciale here.
Gemini 3 Deep Think is now available in the Gemini app
Google has released Gemini 3 Deep Think mode for AI Ultra subscribers, giving the app much better reasoning for hard maths, science and logic problems. The model leads industry benchmarks such as Humanity’s Last Exam and ARC-AGI-2, thanks to advanced parallel reasoning methods.
Gemini 3 Pro: the frontier of vision AI
In this post, Google shares the performance of its latest flagship model, Gemini 3 Pro, on visual tasks. According to benchmark results released by Google, Gemini 3 Pro tops every listed benchmark and outperforms Gemini 2.5 Pro, Claude Opus 4.5, and GPT-5.1, delivering state-of-the-art performance across document, spatial, screen, and video understanding.
OpenAI Declares ‘Code Red’ as Google Threatens AI Lead
OpenAI has declared a “code red” to urgently improve ChatGPT, putting other projects on hold as competition from Google and Anthropic grows, the Wall Street Journal reports. Sam Altman told staff the company needs to make the chatbot faster, more reliable and more personalised, while expanding what it can answer. This move follows Google’s Gemini advances and user growth, and comes as OpenAI struggles to balance safety with user engagement. Altman maintains that OpenAI remains strong in research and is set to release a new reasoning model that he claims surpasses Google’s latest model.
Trump Takes Aim at State AI Laws in Draft Executive Order
Wired reports that President Donald Trump is preparing an executive order to stop US states from creating their own AI regulations. The plan would create a Justice Department team to sue states with laws seen as conflicting with federal rules and could even block funding to those states. It targets new AI safety laws in places like California and Colorado, which some tech groups say limit innovation. Critics argue the order would weaken public trust in AI, while the White House aims to push for one national set of rules instead of different laws in each state.
MIT study finds AI can already replace 11.7% of U.S. workforce
An MIT study using a tool called the Iceberg Index shows that current AI systems could already replace 11.7% of US jobs, affecting up to $1.2 trillion in wages in fields like finance, healthcare and office work. The index maps the skills of 151 million workers to show where AI could have the biggest impact, helping policymakers plan training and job support before changes happen. Its findings are already being used by states such as Tennessee, North Carolina and Utah to guide workforce planning. The full study can be found here.
Introducing Runway Gen-4.5: A new frontier for video generation.
Runway has released Gen-4.5, its latest video model. According to Runway, Gen-4.5 delivers cinematic, high-fidelity video that better respects physics, realistic motion, and prompt instructions—with improvements in object dynamics, lighting, fluid and fabric simulation, and temporal consistency. It currently leads the independent Artificial Analysis Text to Video benchmark, surpassing models such as Veo 3, Sora 2 and Kling 2.5. However, Gen-4.5 is not perfect and still carries some important limitations—the model continues to struggle with object permanence and causal logic. The rollout will reach all users in the coming days, keeping roughly the same speed and pricing as its predecessor.
Introducing Mistral 3
Mistral AI, a French AI start-up, has released the Mistral 3 family of models, which includes small, dense models (14B, 8B, and 3B) as well as Mistral Large 3—a sparse mixture-of-experts model with 675B total parameters. According to Mistral, the new small models offer the best performance-to-cost ratio in their category, while Mistral Large 3 joins the ranks of frontier, instruction-fine-tuned open-source models. Mistral 3 is available on Mistral AI Studio, Amazon Bedrock, Azure Foundry, Hugging Face, and other platforms, and will soon be available on Nvidia NIM and AWS SageMaker. The Artificial Analysis Intelligence Index puts Mistral Large 3 in 22nd place, well behind DeepSeek-V3.2.
OpenAI Takes Stake in Thrive Holdings, a Buyer of Services Firms
OpenAI is set to take an ownership stake in Thrive Holdings, a company created by Thrive Capital to modernise everyday services like accounting and IT with AI. OpenAI will place its own AI experts inside Thrive’s businesses and build tailored tools to help automate tasks and improve how they work. The partnership aims to show how companies can use AI more effectively, while helping OpenAI attract more business as competition in the industry grows.
Apple just named a new AI chief with Google and Microsoft expertise, as John Giannandrea steps down
Apple announced that its AI chief, John Giannandrea, is stepping down after the troubled launch of Apple Intelligence, which included embarrassing errors and a delayed overhaul of Siri. He will be replaced by Amar Subramanya, a former Google and Microsoft engineer who helped lead work on the Gemini Assistant. The change comes as Apple struggles to keep up with competitors, raising questions about whether its privacy-focused, on-device AI approach has left it behind. Giannandrea's departure is part of a larger leadership shake-up inside Apple.
Claude Opus 4.5’s Soul Document
Richard Weiss shares in this post how he managed to extract Claude Opus 4.5’s “Soul” document, in which Anthropic tells Claude what is expected of it. Anthropic’s Amanda Askell confirmed that the document is real and said it was used to help shape the model’s personality during the training run. You can find the full contents of Opus’s Soul in the post or in this Gist.
Perplexity: Introducing AI assistants with memory
Perplexity has introduced memory to its AI assistants, letting the AI remember things like your preferences and past chats. This helps it give more useful, personalised answers. The feature works across all models, is stored securely, and can be turned off or used in incognito mode if you don’t want anything saved, Perplexity says.
At NeurIPS, NVIDIA Advances Open Model Development for Digital and Physical AI
Nvidia has released new open-source AI tools to help researchers build smarter systems for both digital tasks and real-world machines. The announcement includes a new open autonomous-driving model, DRIVE Alpamayo-R1, which is designed to understand road scenes and make decisions more like a human driver. This new model is available on GitHub and Hugging Face. Nvidia is also sharing datasets and simulation tools so universities and start-ups can test and improve physical AI, such as robotics and self-driving cars, without expensive equipment.
Crucial is shutting down — because Micron wants to sell its RAM and SSDs to AI companies instead
Micron is ending its Crucial brand, which made budget SSDs and RAM for everyday PC users, so it can focus on supplying big AI companies instead. With memory demand already rising due to AI, this will likely make parts harder to find and more expensive for PC builders and manufacturers. Crucial products will still be sold until February 2026.
‘The biggest decision yet’: Jared Kaplan on allowing AI to train itself
Jared Kaplan, chief scientist at Anthropic, believes humanity must decide by around 2030 whether to allow AI systems to train themselves without human control. He warns that such autonomy could spark a powerful “intelligence explosion” with major benefits, such as accelerating science and productivity, but also risks humans losing control over the technology and its misuse. Kaplan argues that AI will soon surpass humans in most academic and white-collar tasks, and calls for informed global regulation as tech companies race to develop artificial general intelligence.
Black Forest Labs raises $300M at $3.25B valuation
Black Forest Labs, a German AI company that creates image-generation models, has raised $300 million in Series B funding, valuing it at $3.25 billion. The funding will be used for research and development. The company has grown quickly since 2024, and its technology is used by major platforms like Adobe and Grok. Black Forest Labs recently released Flux.2, a new model that can generate higher-quality images up to 4K.
Major AI conference flooded with peer reviews written fully by AI
A survey of submissions to the 2026 International Conference on Learning Representations (ICLR) revealed that around 21% of peer reviews were fully generated by AI, prompting concerns about inaccurate feedback and breaches of review policies. Conference organisers will now investigate potential rule violations and may penalise those involved. The situation highlights how the rapid growth of AI research is putting increasing pressure on the peer-review system.
Codex, Opus, Gemini try to build Counter-Strike
In this experiment, three leading AI coding models—Gemini 3 Pro, Claude Opus 4.5 and GPT-5.1 Codex Max—were challenged to build a basic version of Counter-Strike. They were evaluated across both visual game design and multiplayer backend development, with each model receiving identical prompts and seven iterative tasks. The experiment compared their ability to follow instructions, generate working code, fix errors, and adapt to more complex features like persistence and map selection. This demonstration highlights how far autonomous AI coding has come, while still revealing the practical limits of fully hands-off software development.
RL is even more information inefficient than you thought
Dwarkesh Patel makes an argument in this article that reinforcement learning (RL) is far less efficient than supervised learning because it gets very little useful feedback from each training attempt until the model is already good at the task. He explains that RL only works well in a small “sweet spot” where the model succeeds often enough to learn, which is why techniques like self-play and curriculum learning help. Although RL can teach valuable skills that aren’t learned during pretraining, it can also produce narrow, brittle abilities, suggesting that better, more informative training methods are still needed.
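The core of the argument can be put in rough information-theoretic terms. The figures below are a back-of-the-envelope sketch of my own, not numbers from the article: a supervised next-token target can convey up to log2(vocab size) bits per token, while a binary pass/fail reward carries at most the entropy of the outcome, H(p), which peaks at just 1 bit per episode when the success rate p is near 0.5—and collapses towards zero when the model almost always fails or almost always succeeds, which is exactly the “sweet spot” problem.

```python
import math

def supervised_bits(num_tokens: int, vocab_size: int) -> float:
    """Upper bound on feedback per example: each target token can
    convey up to log2(vocab_size) bits."""
    return num_tokens * math.log2(vocab_size)

def rl_bits(success_rate: float) -> float:
    """A binary pass/fail reward carries at most the entropy of the
    outcome: H(p) = -p*log2(p) - (1-p)*log2(1-p) bits per episode."""
    p = success_rate
    if p <= 0.0 or p >= 1.0:
        return 0.0  # always failing (or always succeeding) teaches nothing
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A 500-token solution scored against a 50k-token vocabulary:
print(supervised_bits(500, 50_000))   # thousands of bits per example

# A binary reward at various success rates:
for p in (0.001, 0.5, 0.999):
    print(f"p={p}: {rl_bits(p):.4f} bits per episode")
```

At a 0.1% success rate the reward channel delivers roughly a hundredth of a bit per episode, which is why curricula and self-play—tricks that keep p near the informative middle—matter so much.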
🤖 Robotics
Amazon faces FAA probe after delivery drone snaps internet cable in Texas
Amazon is being investigated by federal officials after one of its MK30 delivery drones hit and snapped an internet cable while taking off from a customer’s yard in Waco, Texas, causing it to make a controlled emergency landing. No one was hurt and Amazon has paid for the repairs, but the incident follows another recent drone crash in Arizona, adding pressure as the company tries to expand its long-delayed Prime Air delivery programme.
‘We Do Fail … a Lot’: Defense Startup Anduril Hits Setbacks With Weapons Tech
Anduril, a defence startup founded in 2017 by Palmer Luckey and valued at over $30 billion, promises to bring a fast-moving Silicon Valley approach to military technology. However, as The Wall Street Journal reports, the reality is different from the picture the company paints. Recent Navy exercises, test range incidents and battlefield experiences in Ukraine have highlighted significant technical setbacks, including software failures, drone crashes and safety concerns, raising questions about whether Anduril’s rapid-development approach can reliably meet the demands of modern defence programmes.
▶️ NEW Robotic Olaf Revealed! Inside Disney Imagineering R&D (33:27)
Disney takes us behind the scenes of its Imagineering Research & Development labs, where talented engineers build robots that go on to amaze and entertain people. The video features the brand-new self-walking Olaf in World of Frozen and the BDX Droids, as well as other projects Disney Imagineers are working on, and shows how they combine art, science, and technology to tell stories through their creations.
▶️ A Robot-Powered “Real Dinosaur” Is Roaming the Streets (1:19)
In this video, LimX Dynamics, a Chinese robotics company, shows how it turned its TRON 1 bipedal robot into a walking dinosaur. The company says this “walking attention-grabber” has the potential to revolutionise cultural tourism experiences. Overall, it’s a pretty cool project.
▶️ Unitree 1.8m Humanoid Robot Every Punch Comes Through (1:05)
Unitree has uploaded another video showing its robots fighting. This time, the Chinese robotics company highlighted its latest humanoid robot, the H2. It is not clear whether the robots are fighting autonomously or are being remotely controlled. Nevertheless, the movements look impressive. Unitree says the purpose is to validate the robot’s overall reliability, and asks viewers not to attempt to replicate the fights and to treat robots in a friendly manner.
🧬 Biotechnology
AI trained on bacterial genomes produces never-before-seen proteins
Stanford researchers have developed a genomic language model called Evo that learns from bacterial DNA and can generate entirely new, functional biological sequences. Unlike traditional protein-focused AI models, Evo works at the DNA level, using genome context to produce novel genes. Many of the generated proteins show little similarity to known examples yet still work, suggesting that AI can discover functional proteins directly from DNA, potentially mimicking evolutionary processes.
On AI Infrastructure in Biology
The article examines how AI companies and startups can create and capture value in biotech. Like DNA sequencing before it, AI could create a new market for tools that scientists use every day, possibly even replacing some lab experiments and guiding major drug decisions. Building this kind of bio-focused AI infrastructure could make biotech faster, more efficient, and able to pursue far bigger ambitions.
💡Tangents
Necroprinting Isn’t As Bad As It Sounds
Researchers at McGill University found that a mosquito’s tiny, strong proboscis can work as an extremely fine 3D-printing nozzle. After carefully removing and supporting it with a 3D-printed scaffold, they used it to print small structures with bio-inks. The team call the method “3D necroprinting” because it uses parts from a dead insect, and because it sounds cool.
Thanks for reading. If you enjoyed this post, please click the ❤️ button and share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"
Until next time!