Walled Artificial Intelligence
What happens when the most powerful AI models are available only to those chosen to use them?
Anthropic’s Claude Mythos is the first state-of-the-art frontier model to be withheld from the public since GPT-2 in 2019. Much of the conversation has focused on what Mythos can do. I am not here to discuss its cybersecurity capabilities. I am here to ask what happens when the best AI is no longer available to everyone. What would happen if withholding the most powerful AI models from the public becomes not the exception, but the norm? Could that work commercially? And what would such a world look like?
Anthropic set the precedent
Anthropic has done something no frontier AI lab has done before. It announced its most powerful model and made it available only to a select group of partners.
This is something new in the AI industry. We have had closed-source models and open-weight models. We have had models behind paywalls and models available for free. We have had research previews and staged rollouts. But we have not had a state-of-the-art model that is actively withheld from the public while being offered to handpicked partners with no guarantee that the full version will ever be released.
So far, no one has followed Anthropic's example with a general-purpose model. OpenAI released its latest model, GPT-5.5, which reportedly matches Mythos in cybersecurity capabilities, with no restrictions. However, the story is different when it comes to specialised models. GPT-5.4-Cyber, a tailored version of GPT-5.4 with enhanced cybersecurity capabilities, which was released in response to Mythos, is only available to vetted users. A week later, OpenAI announced ChatGPT for Clinicians, a specialised model for physicians, nurse practitioners, physician assistants, and pharmacists, which is also restricted to verified healthcare workers.
Sam Altman has been vocal about Anthropic's approach. On the Core Memory podcast, he called it "fear-based marketing" and said there are people who "have wanted to keep AI in the hands of a smaller group of people" for a long time. But he also acknowledged that "there will be very dangerous models that will have to be released in different ways." Even OpenAI's chief executive, while criticising Anthropic for restricting access, concedes that not everything can be made broadly available.
Benefits, not necessarily access
Both Anthropic and OpenAI have already created loopholes to withhold their models, and they are hiding in plain sight.
Look at how those two companies describe themselves and their mission. OpenAI says its mission is to ensure that artificial general intelligence “benefits all of humanity.” Anthropic describes itself as a public benefit corporation dedicated to securing AI’s benefits and mitigating its risks. Notice what is absent. Neither company promises access to AI. They promise benefits.
This is an important distinction to keep in mind. You do not need access to a model to benefit from what it produces. If an AI plays an instrumental role in discovering a new therapy that cures your cancer, you benefit, even though you might never have been able to access the model. If it identifies a critical vulnerability in the software your bank runs on and that vulnerability gets patched before anyone exploits it, you benefit. But the model stays behind the wall.
This framing gives AI companies room to manoeuvre. Neither OpenAI nor Anthropic has an obligation to make their models available to the public. They can keep their most powerful models restricted to a handful of partners and still claim that they are fulfilling their mission.
Why wall off your best model?
If there is no obligation to provide access, the next question is whether there are reasons to actively restrict it. There are several, and safety is the one invoked most often. In the case of Mythos, it is not an unreasonable one. Anthropic can argue that giving organisations like Cisco and the Linux Foundation early access lets them patch vulnerabilities before attackers can exploit them. More broadly, each generation of models is more powerful than the last, and AI companies have limited control over who uses them or how. One can argue that restricting access to potentially dangerous AI tools is a good idea. Whether the safety case alone justifies withholding a model from the public is a deeper question, and one that deserves its own exploration. But for now, it is worth noting that safety is not the only reason to wall off a model. There are also commercial incentives.
Let’s start with marketing. By withholding Mythos, Anthropic received more attention and press coverage than any model launch in recent memory. Not because of its benchmark scores, but because the public could not have it, which helped build the mystique around the model. News outlets that usually do not cover AI put Mythos on their front pages. That kind of attention does not happen when a model is available to everyone for $20 a month.
Then there is compute scarcity. Serving a state-of-the-art model is expensive. Before producing an answer, these models can take time to break problems into subtasks, search for additional information, and propose and test hypotheses. That takes computing power, which is in short supply. Every person who uses the most capable models available to summarise an article or proofread an email, tasks that smaller models are perfectly capable of doing, is consuming resources that could have been directed toward more valuable work. Controlling who has access lets AI companies manage their resources and ensure the model is used primarily for high-value tasks. I understand the logic. But the one deciding what counts as a high-value task is not the user; it is the company.
There is also vendor lock-in. Applications like Cursor are popular among businesses partly because they avoid tying you to a single model provider. Developers can swap models in and out depending on cost and performance. But you cannot build on a model you cannot access. If the most capable model is only available through the company that made it, developers are pushed toward that company's own tools, such as Anthropic's Claude Code or OpenAI's Codex. Restricting access to the best models shifts power back to the model-makers and away from the ecosystem of independent tools that has grown around them.
Finally, keeping a model behind a wall can prevent it from being stolen. In February 2026, Anthropic complained about industrial-scale campaigns by Chinese labs to distil its models. The attackers were using Claude's outputs to train cheaper, competing systems. OpenAI has made similar complaints. Most AI labs use distillation in one form or another to create smaller, more efficient models from larger ones. But distilling a rival’s model without permission is closer to industrial espionage. Intellectual property law is one defence. But the surest way to stop someone from copying your model is to never let them use it.
How do you make money from a walled model?
Restricting access is one thing. Making it profitable is another. One might assume that the way to maximise revenue is to offer your best model to as many people as possible. But a walled model opens up different monetisation strategies that may prove more lucrative, or extract more value per token spent.
The simplest is premium pricing. Mythos costs five times more than Opus 4.6. That price will discourage individuals and organisations without deep pockets, but it will not discourage the likes of JPMorgan Chase or Microsoft. Enterprise customers with high-value problems will pay for access to the best model available if it delivers results their competitors cannot match.
There is also a more radical idea. In February 2026, Sam Altman suggested that OpenAI might invest in or subsidise companies that make significant use of its AI to discover new drugs or therapies, and take royalties in return. The company would cover the cost of using its models in partnership with a pharmaceutical firm and then receive a share of whatever that firm discovers.
OpenAI has not implemented the idea of royalties yet, and perhaps it never will. But I doubt Altman would publicly float an idea he is not at least considering.
Let’s do some quick maths to see how this idea could work. In 2025, Novo Nordisk made around $18 billion from selling Ozempic. If an AI model played a meaningful role in developing a drug of that scale, even a 1% cut of sales would be $180 million per year from a single product. Add to that the cost of high-value tokens consumed during development (at Mythos-tier pricing, that alone could run into the millions) and the number climbs further. Now scale that across multiple products and multiple companies, and you get a rather large recurring source of income, secured by enterprise contracts rather than individual subscriptions.
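The estimate above can be sketched as a few lines of code. Every figure here is illustrative, not contractual: the $18 billion in sales is the real-world Ozempic order of magnitude, but the 1% royalty rate, the token volume, and the Mythos-tier price are assumptions made up for the sake of the exercise.

```python
# Back-of-the-envelope royalty estimate. All inputs are illustrative.
annual_drug_revenue = 18_000_000_000   # Ozempic-scale sales, USD per year
royalty_rate = 0.01                    # hypothetical 1% cut for the AI lab

royalty_income = annual_drug_revenue * royalty_rate
print(f"Royalty income: ${royalty_income:,.0f} per year")  # $180,000,000

# Add assumed token spend during development at a premium-model price point.
tokens_consumed = 50_000_000_000       # hypothetical tokens used in R&D
price_per_million_tokens = 100         # hypothetical Mythos-tier price, USD
token_revenue = tokens_consumed / 1_000_000 * price_per_million_tokens
print(f"Token revenue: ${token_revenue:,.0f}")             # $5,000,000

# Scale across several products and several partner companies.
products_per_partner = 5
partners = 4
total = products_per_partner * partners * (royalty_income + token_revenue)
print(f"Across {products_per_partner * partners} products: ${total:,.0f} per year")
```

Under these made-up assumptions, twenty products across four partners would bring in about $3.7 billion a year, which shows why royalties, not token sales, would dominate such a deal.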
What it means for everyone else
Keeping the most powerful models available only to selected partners could have far-reaching consequences. If that were to happen, only vetted developers would be able to build applications on frontier AI. Independent researchers would lose access to the tools they need to study the technology’s capabilities and risks. Startups could not compete with incumbents that have access to models they do not. Only a select few would be able to build with the most capable models. Mere mortals like you and me would have access only to what is made available to the public, or would have to rely on open models.
This is a different world from the one we have now, where anyone can sign up for an API key and start building. The last few years produced an explosion of AI-powered tools, products, and experiments precisely because frontier models were broadly accessible. Restricting that access could stifle innovation and close off ideas that approved businesses would never pursue on their own. The history of technology tells us that breakthroughs often come from unexpected places. When we restrict who can use a technology, we limit what is possible.
The pieces are in place
None of this is inevitable. The AI market is competitive, and public pressure could push companies to keep their best models accessible to all. Open-weight models can act as a counterbalance. They continue to close the gap with proprietary ones, and anyone can download and run them without asking for permission. But running a frontier-scale model requires serious hardware and technical expertise that most people do not have.
For the past few years, anyone with an internet connection could use the world's most powerful AI models. We have grown used to that and it is easy to assume it will continue. But nothing guarantees it. The companies that build these models have the means, the motive, and now the precedent to change the arrangement. If they do, a handful of companies become the gatekeepers of a technology that is reshaping every industry it touches. They will decide who gets access, on what terms, and for what purpose.
If this becomes the norm, the landscape of AI splits in two. On one side, there are the models available to the public—still capable, but deliberately constrained. On the other side, behind the wall, sit the models that are genuinely pushing the frontier. Models an order of magnitude more powerful than today’s best, maybe even close to the elusive artificial general intelligence. But you or I will not be able to touch them.
I would happily pay a premium for access to a state-of-the-art model if it helps me do better work. But that is not what walled AI is. Walled AI is someone else deciding who is worthy of access in the first place. That puts an extraordinary amount of power in the hands of a few companies over who gets to build, research, and innovate with the best tools available.
Thanks for reading. If you enjoyed this post, please click the ❤️ button and share it!
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity’s horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!




