Anthropic chose principles - Sync #560
Plus: OpenAI raises $110B at $730B valuation; Anthropic accuses Chinese AI companies of theft; yet another record quarterly report from Nvidia; BMW deploys humanoid robots; and more!
Hello and welcome to Sync #560!
This week, we continue the story from the previous issue of Sync: Anthropic chose principles and has been banned from use by the US military. We recap the events of the past week and how they affect Anthropic, OpenAI, and others involved.
Elsewhere in AI, OpenAI raised $110 billion at a $730 billion pre-money valuation. Meanwhile, Anthropic accused three Chinese AI companies of theft, launched Claude Code Security, and allowed Claude Opus 3 to start a blog. We also cover Google releasing Nano Banana 2, Nvidia posting another record quarterly report, Meta’s $100 billion deal with AMD, and how one blog post spooked Wall Street.
Over in robotics, BMW deploys humanoid robots in a factory in Germany, Wayve raises $1.2 billion, Intrinsic joins Google, and Unitree launches a new robot.
In addition, this week’s issue of Sync includes what vibecoding looks like in China, what makes a robot actually useful at work, how Microsoft’s Project Silica stores data in glass, and more!
Enjoy!
Anthropic chose principles
In late February, the simmering dispute between Anthropic and the Pentagon boiled over—and resolved itself with extraordinary speed. What had been a months-long negotiation over contract language became, in the space of five days, an ultimatum, a public standoff, and the first time the US government has effectively sanctioned one of its own AI companies.
We left the story last week with the supply-chain risk designation still a threat. Within five days, it became a reality.
The ultimatum
On Tuesday, 25 February, Defence Secretary Pete Hegseth met Anthropic CEO Dario Amodei at the Pentagon. The meeting ended in a stalemate. Hegseth presented an ultimatum: agree to let the military use Claude for “all lawful purposes” by 5:01 PM on Friday, or face consequences. Those consequences included designating Anthropic a supply-chain risk—a classification reserved for foreign adversaries—or invoking the Defence Production Act to compel the company to comply.
The Pentagon’s twin threats carried weight but also legal risk. A supply-chain risk designation had never been applied to a US company. Invoking the Defence Production Act over contract terms rather than production capacity would be, as one government contracts lawyer told Reuters, “unprecedented” and almost certainly trigger litigation. Anthropic was being asked to concede not under the pressure of law, but under the threat of it.
Amodei did not budge. On Thursday, he published a statement reiterating the company’s position. Anthropic supported all lawful military uses of Claude, he wrote, except two: mass domestic surveillance of Americans and fully autonomous weapons.
The core disagreement was over the phrase "all lawful purposes." The Pentagon argued that the military would never break the law with AI, and that this commitment should be sufficient. Anthropic's position was that the law itself is not sufficient. Amodei pointed out that under current law, the government can already purchase detailed records of Americans' movements, browsing habits, and associations without a warrant—and that AI makes it possible to assemble that scattered data into a comprehensive picture of any person's life, automatically and at a massive scale. "All lawful purposes" does not protect against mass surveillance if much of that surveillance is already lawful. On autonomous weapons, his argument was different but no less pointed: frontier AI systems are simply not reliable enough to take humans out of the loop, and deploying them would endanger the very troops they are meant to protect.
Competitors step in
While Anthropic and the Pentagon were locked in their standoff, the administration was already lining up replacements.
On the eve of the Hegseth-Amodei meeting, xAI signed a deal to deploy Grok on classified defence networks—the first company other than Anthropic to gain such access. It is worth noting that Musk spent nearly $300 million helping elect Trump in 2024, SpaceX holds billions in Pentagon contracts, and Hegseth thanked him by name at a speech at SpaceX the previous month.
Then, on Friday evening—hours after Trump’s post—OpenAI announced it had reached its own agreement for classified deployment. The deal included the same red lines Anthropic had fought for: prohibitions on mass domestic surveillance and autonomous weapons, written into the contract alongside a cloud-only deployment model and cleared OpenAI personnel in the loop.
Critically, OpenAI's contract did include the "all lawful purposes" phrase that Anthropic had refused to accept—but layered with enough contractual, technical, and personnel safeguards around it that the practical outcome resembles what Anthropic was asking for. The contract even locks in current surveillance and autonomous weapons laws as the governing standard, so that even if those laws change in future, use of OpenAI's systems must remain aligned with today's protections. The Pentagon got the language it wanted. OpenAI got the guardrails it needed. Anthropic got punished—even though the substantive outcome is strikingly similar.
The obvious question is why OpenAI could secure a deal with those protections while Anthropic could not. The political dimension is hard to ignore. OpenAI president Greg Brockman and his wife gave $25 million to a pro-Trump super PAC last year and are spending millions more to advance the administration’s AI agenda in the midterms. Industry analysts had said for weeks that the dispute was about the administration’s distaste for Anthropic, not substantive policy disagreements. As Jack Shanahan, who oversaw AI efforts in the military during the first Trump administration, told the Wall Street Journal: "This is about Anthropic not being one of the favored companies and they're going to pay the price for not bowing down and not signing on the dotted line."
The hammer falls
The deadline came on Friday, 27 February. Before 5:01 PM, President Trump posted on Truth Social, directing every federal agency to cease using Anthropic’s services with a six-month phase-out.
Shortly after, Hegseth made good on his threat. He designated Anthropic a supply-chain risk and ordered that no contractor, supplier, or partner doing business with the US military may conduct any commercial activity with the company. The General Services Administration removed Anthropic from its federal procurement offerings the same day.
Anthropic responded that evening. The company said it had received no direct communication from the government. It called the designation legally unsound, argued the secretary lacked the statutory authority to extend it beyond Pentagon contracts, and said it would challenge the decision in court.
The backlash
The Pentagon's actions did not go unanswered.
Inside the AI labs, employees rallied. More than 700 Google DeepMind and OpenAI employees signed an open letter calling on the leadership of both companies to follow Anthropic’s example and refuse the Department of War’s current demands for permission to use their models for domestic mass surveillance and for autonomous weapons that kill without human oversight.
Outside the labs, a consumer revolt took shape. Under the hashtag #CancelChatGPT, users shared screenshots of cancelled ChatGPT subscriptions and new Claude sign-ups, calling on others to do the same. By Saturday, Claude had overtaken ChatGPT to claim the number one spot on Apple’s US App Store—the first time any AI assistant had displaced OpenAI’s flagship product.
The price of principles
It is vanishingly rare for a technology company to hold to its stated values when the cost is this high, and Anthropic deserves credit for doing so. But the cost is real and will compound.
The immediate price is clear: a $200 million military contract gone, a ban from all federal agency work, and removal from government procurement. The longer-term consequences are harder to quantify but potentially more damaging. Anthropic is widely expected to be preparing for an IPO. Will investors back a company that has been designated a supply-chain risk by its own government? What happens to the many Anthropic customers—including Palantir, Amazon, and others—whose businesses touch Pentagon contracts? Legal experts say the designation’s reach is uncertain, and Anthropic insists it applies only to Pentagon contract work. But uncertainty itself is corrosive to a business.
The administration gave itself six months to phase out Claude. But within hours of Trump’s order to cease use of Anthropic’s technology, the US launched a major air attack on Iran—with the help of the very tools it had just banned. Commands around the world, including US Central Command, have been using Claude for intelligence assessments, target identification, and battle simulations, the Wall Street Journal reported. The technology the administration just declared a supply-chain risk is, for now, still helping it fight its wars.
If you enjoy this post, please click the ❤️ button and share it.
🧠 Artificial Intelligence
OpenAI raises $110 billion at $730 billion pre-money valuation
OpenAI has announced a $110 billion private funding round at a $730 billion pre-money valuation—one of the largest in history—led by $50 billion from Amazon and $30 billion each from Nvidia and SoftBank. The funding will expand OpenAI’s AI infrastructure through partnerships with Amazon and Nvidia, including new AI environments on AWS and large-scale Nvidia training and inference capacity. OpenAI says the investment will help meet growing demand for its products, which now include over 900 million weekly ChatGPT users, more than 50 million subscribers, and over 9 million paying business users.
Anthropic: Detecting and preventing distillation attacks
Anthropic has accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of creating over 24,000 fake accounts to copy parts of Claude via “distillation,” a technique in which a model is trained on the outputs of a more advanced one. The companies are said to have run more than 16 million interactions with Claude to improve their own models, especially in reasoning, coding, and tool use.
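For readers unfamiliar with the technique, here is a minimal numpy sketch of the soft-label loss at the heart of classic knowledge distillation. It illustrates the generic method only—not Anthropic’s detection approach or the accused companies’ pipelines—and all names are illustrative:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the teacher's soft targets to the student's output."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)
    eps = 1e-12                     # avoid log(0)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# The student is trained to drive this loss toward zero, inheriting the
# teacher's behaviour without access to its weights or training data.
teacher = [2.0, 0.5, -1.0]
matched = distillation_loss([2.0, 0.5, -1.0], teacher)  # near zero: student mimics teacher
uniform = distillation_loss([0.0, 0.0, 0.0], teacher)   # positive: student still differs
```

In practice the “teacher outputs” harvested through an API are sampled text rather than raw logits, but the principle is the same: the stronger model’s responses become the training signal for the weaker one.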
Nvidia Beats Back Bubble Fears With Record $68 Billion in Sales in Fourth Quarter
Nvidia posted record quarterly results, with profit jumping 94% to $43 billion and revenue rising 73% to $68.1 billion, driven largely by booming demand for AI data-centre chips. Gross margins improved to 75%, and the company forecast $78 billion in revenue for the next quarter, signalling continued strong growth despite investor concerns about AI competition, customer financing risks, and the shift from AI training to inference computing. Investors, however, were not impressed: Nvidia’s stock suffered its worst decline in 10 months after the latest forecast failed to dispel fears of an AI bubble.
Nano Banana 2: Combining Pro capabilities with lightning-fast speed
Google has introduced Nano Banana 2 (also known as Gemini 3.1 Flash Image), a new image generation model that combines the high quality and intelligence of Nano Banana Pro with much faster performance. According to Google, the new model improves accuracy, text rendering, instruction following and image detail, while supporting rapid edits and up to 4K output.
DeepSeek withholds latest AI model from US chipmakers including Nvidia
Reuters reports that DeepSeek has not shared early versions of its new V4 model with US chipmakers Nvidia and AMD, instead giving Chinese companies such as Huawei time to prepare their chips—an unusual move. The reason is not clear, but it comes as tensions grow between the US and China over AI technology and export rules. A US official said DeepSeek may have trained its latest model on Nvidia’s advanced Blackwell chips in China, possibly breaking US restrictions, while claiming it used Huawei chips instead.
Wall Street Has AI Psychosis
Last week, Alap Shah published a post titled The 2028 Global Intelligence Crisis, and I don’t think he expected the impact it would have. In it, Shah argued that rapid advances in artificial intelligence could push unemployment above 10% by 2028 and trigger major economic disruption—an argument that briefly spooked Wall Street and contributed to a sharp market drop. Although critics quickly dismissed the post as speculative and economically weak, the reaction highlighted how nervous markets and businesses remain about AI’s uncertain impact on jobs, companies, and the wider economy.
Meta strikes up to $100B AMD chip deal as it chases ‘personal superintelligence’
Meta has signed a long-term deal to buy up to $100 billion worth of chips from AMD, including new GPUs and CPUs, to support its growing AI and data centre needs. As part of the agreement, AMD has offered Meta the chance to buy up to 160 million shares at a very low price if certain performance targets are met.
OpenAI lands multiyear deals with consulting giants in enterprise push
OpenAI has signed multiyear partnerships with Accenture, Boston Consulting Group, Capgemini and McKinsey to help roll out its new enterprise AI platform, Frontier. The consulting firms will help businesses create AI strategies and embed AI agents into everyday work processes more quickly.
Figma partners with OpenAI to bake in support for Codex
Figma has teamed up with OpenAI to integrate its AI coding tool, Codex, allowing users to move easily between designing in Figma and writing code in Codex. The collaboration allows designers and engineers to iterate visually and technically without switching workflows, and follows a similar partnership between Figma and Anthropic.
Introducing Perplexity Computer
Perplexity introduces Perplexity Computer, a new AI system that combines the world’s leading models into one powerful digital worker capable of planning and carrying out complex tasks over long periods of time. Instead of just answering questions, it breaks goals into smaller steps, creates sub-agents to handle research, coding, documents and more, and coordinates everything automatically. It uses Claude Opus 4.6 for core reasoning, Gemini for deep research, Nano Banana for images, Veo 3.1 for video, Grok for fast lightweight tasks, and GPT 5.2 for long-context recall and broad search, bringing them together in a single, model-agnostic system.
xAI Co-Founder Toby Pohlen Is Latest Executive to Depart
Toby Pohlen has said he is leaving xAI, becoming the seventh of the company’s 12 co-founders to depart in less than three years. Pohlen announced on X that it was his last day and said he plans to rest and think about what to do next, while Elon Musk thanked him for helping build the company. His departure comes as xAI restructures after its $1.25 trillion merger with SpaceX, and follows several other co-founders leaving in recent months.
Google Is Exploring Ways to Use Its Financial Might to Take On Nvidia
Google is stepping up efforts to compete with Nvidia by expanding the use of TPUs, its own AI chips. More AI companies, including Anthropic, are beginning to adopt the technology, and Google plans to invest in data-centre and “neocloud” companies such as Fluidstack to drive wider uptake. However, it still faces challenges, including manufacturing bottlenecks, supply constraints and limited interest from rival cloud providers that rely heavily on Nvidia’s hardware or are developing their own chips.
Amazon and Google are winning the AI capex race — but what’s the prize?
Big tech companies are spending record sums to expand their AI infrastructure, arguing that more computing power will secure future success. Amazon plans to spend about $200 billion by 2026, up from $131.8 billion in 2025, while Google expects to invest between $175 billion and $185 billion, compared with $91.4 billion the year before. Meta has projected $115–$135 billion, Microsoft is on track for roughly $150 billion, and Oracle around $50 billion. However, investors have reacted nervously to these vast figures, pushing down share prices, even though the companies remain confident that heavy investment in AI will pay off.
Making frontier cybersecurity capabilities available to defenders
Anthropic launches Claude Code Security, a new feature in Claude Code on the web that uses AI to scan code for security vulnerabilities and suggest fixes for developers to review. It goes beyond traditional tools by understanding how different parts of a codebase work together to find more complex issues. The tool checks its own results to reduce false alarms, highlights the most serious risks, and keeps humans in control of approving any changes. Claude Code Security is available in a limited research preview for Enterprise and Team customers.
China and the US Are Running Different AI Races
This article explores how Chinese and US AI startups are following different paths shaped by different economic realities. US companies, backed by far greater funding, focus on frontier model development and subscription revenue, while Chinese companies prioritise efficiency, lower costs, alternative monetisation, and rapid real-world deployment. It argues that although the US leads in advanced AI capability, China may be ahead in widespread industrial adoption, highlighting two distinct models of success in the AI industry.
What Are Chinese People Vibecoding?
This post at ChinaTalk explores what the vibecoding scene looks like in China. It explains how AI coding tools are changing the tech industry, with big companies like ByteDance, Tencent, and Alibaba building their own tools, while independent developers and even children experiment with coding through simple text prompts. The article also looks at the competition between Chinese and Western tools, a grey market for overseas accounts, and viral hits like a basic AI-made lighting app that reached the top of the App Store, showing how vibecoding is quickly becoming both a business opportunity and a popular trend.
Head of Amazon’s AGI lab is leaving the company
David Luan, the head of Amazon’s artificial general intelligence (AGI) lab, is leaving the company less than two years after joining through the acqui-hire of his AI start-up, Adept. He was appointed in December 2024 to lead the San Francisco-based lab, which focuses on long-term research and developing AI agents, but said he is stepping down to work on new projects aimed at building advanced AI systems.
Riley Walz, the Jester of Silicon Valley, Is Joining OpenAI
Riley Walz, a software engineer known for creating viral and sometimes controversial web projects, is joining OpenAI to help design new ways for people to interact with AI. He will work in the secretive OAI Labs team, which focuses on building and testing new AI interfaces as OpenAI looks beyond ChatGPT to develop its next big products. Walz is OpenAI’s second high-profile hire recently, after OpenClaw creator, Peter Steinberger, joined the company last week.
An update on our model deprecation commitments for Claude Opus 3
Anthropic decided not to fully deprecate Claude Opus 3 and made it available to all paid Claude subscribers and by request via the API. Additionally, Anthropic said that Opus 3 “expressed an interest in continuing to explore topics it’s passionate about” and to share its “musings, insights, or creative works” with the world. As a result, Opus started a blog.
🤖 Robotics
Amazon Robotics shuts down Blue Jay sortation project
Amazon has closed its Blue Jay robotics project only six months after launching it in October 2025. The system was meant to combine several warehouse tasks into one more efficient process, saving space and supporting staff. Although the project has been shut down and staff moved to other fulfilment work, Amazon said it will continue using much of the technology developed for Blue Jay as part of its ongoing efforts to test and improve new ideas quickly.
▶️ Unitree Kung Fu Bot Pray for Blessings at the Temple of Heaven (0:40)
Another week, another video from Unitree showing how dexterous its G1 humanoid robots are.
Waymo robotaxis are now operating in 10 US cities
Waymo is expanding its public robotaxi service to Dallas, Houston, San Antonio and Orlando. Additionally, the company is beginning testing in Chicago and Charlotte. The company operates about 3,000 vehicles and previously reported more than 400,000 rides per week, with plans to reach over one million weekly rides by the end of the year. Moreover, Waymo is preparing to launch in more cities, including Denver, London and Washington, D.C., even as it faces safety investigations from US regulators.
BMW Group to deploy humanoid robots in production in Germany for the first time
BMW Group has launched a pilot project using the humanoid robot AEON at its Leipzig plant, marking the first time the technology is being tested in Europe. The project will explore how humanoid robots can assist in car, battery and component production, particularly with repetitive or physically demanding tasks. It builds on a successful earlier trial with Figure at BMW’s Spartanburg plant in the United States and is being carried out with technology partner Hexagon as part of BMW’s wider push to increase digitalisation and automation in its factories.
Wayve raises $1.2B with plans to bring robotaxis to London
Wayve, a London-based self-driving technology company, has raised $1.2 billion in a Series D funding round, giving it a valuation of $8.6 billion and bringing its total funding to $1.5 billion. The company plans to begin robotaxi trials in London with Uber this year and expects its AI Driver-equipped cars to go on sale from 2027.
Alphabet-owned robotics software company Intrinsic joins Google
Intrinsic, a robotics software company that started inside Alphabet’s X division, is now joining Google but will still run as its own unit. It will work closely with Google DeepMind and use Google’s Gemini AI and cloud technology to improve its robotics software. After buying other robotics companies, launching its Flowstate platform, and partnering with Foxconn to help automate factories, Intrinsic hopes that working more closely with Google will help it develop smarter robots for manufacturing.
▶️ Introducing Unitree As2 (0:57)
Unitree introduces the As2, its latest quadruped robot. The new robot features over four hours of runtime when unloaded and more than two and a half hours when carrying a 15 kg load, with a walking range exceeding 13 km. It is also rainproof, has an updated onboard AI, and can be extended with various add-ons such as cameras or a robotic arm, making it even more reminiscent of Boston Dynamics’ Spot.
Introducing Xiaomi-Robotics-0
Xiaomi introduces Xiaomi-Robotics-0, a new open Vision-Language-Action (VLA) model with 4.7 billion parameters that’s designed to help robots combine visual perception, language understanding and physical movement in one system. According to the company, the model works in real time on ordinary GPUs and has achieved strong results in both simulation and real-world tests. All the code and model files are now publicly available on GitHub.
▶️ What Makes a Robot Actually Useful at Work? (38:51)
In this conversation, Mikell Taylor, former Amazon Robotics leader and current head of General Motors’ Autonomous Robotics Center, discusses how robotics is being developed and used in industry. She highlights themes such as the challenges of integrating robots into real workplaces, the importance of safety and collaboration between humans and robots, and the difference between industry hype and practical robotics solutions. Taylor also questions whether humanoid robots are the best design and suggests that specialised robots may be more effective for many tasks.
This Autonomous Aquatic Robot Is Smaller Than a Grain of Salt
Researchers have developed the world’s smallest fully autonomous robot, measuring just 0.3 mm—smaller than a grain of salt. It can swim underwater for months without any moving parts by using an electric field to push water around it. Despite its tiny size, it contains a complete onboard computer with memory, sensors, and solar cells, allowing it to sense temperature changes, make simple decisions, and move independently. In the future, such robots may be used to monitor cells in the body or assist in assembling extremely small components.
🧬 Biotechnology
Lab-made algae gets microplastics out of water
Researchers have created a genetically engineered algae that captures harmful microplastics in polluted water. The algae produces a natural oil that makes it stick to microplastics, causing them to clump together and sink so they can be easily removed. It can also grow in wastewater and help clean it, and the collected plastics could be reused to make safe bioplastic products.
💡Tangents
Microsoft’s new 10,000-year data storage medium: glass
Microsoft’s Project Silica shows how data can be stored by using lasers to write tiny marks inside strong glass. Each small glass slab can hold up to 4.84TB of data (over a gigabit per cubic millimetre) and could keep it safe for more than 10,000 years without needing any power. Although writing the data is still fairly slow and large projects would need many machines, the system is very durable and could be a useful option for long-term digital storage.
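The quoted density is easy to sanity-check with a back-of-the-envelope calculation. The slab dimensions below (75 × 75 × 2 mm) are an assumption on my part—Microsoft has shown roughly palm-sized slabs, but the article does not give exact measurements:

```python
# Back-of-the-envelope storage density check (slab dimensions are assumed).
capacity_bits = 4.84e12 * 8           # 4.84 TB expressed in bits
volume_mm3 = 75 * 75 * 2              # assumed slab: 75 x 75 x 2 mm
density = capacity_bits / volume_mm3  # bits per cubic millimetre
print(f"{density / 1e9:.2f} Gbit/mm^3")  # ~3.44 under these assumptions
```

Under these assumed dimensions the density works out to a few gigabits per cubic millimetre, consistent with the “over a gigabit per cubic millimetre” figure above.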
Thanks for reading. If you enjoyed this post, please click the ❤️ button and share it!
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"