We’ve seen this story before—and it always ends the same way.
Whenever a transformative technology bursts onto the scene, excitement follows. New possibilities emerge, old ways are swept aside, and startups rush in, convinced they can build the next big thing, or at least do the old things in a new way.
In the late 1990s, it was the Internet. The dotcom boom sent entrepreneurs scrambling to put ".com" at the end of every business idea, hoping to strike gold. A decade later, the world was gripped by a frenzy for mobile apps and social platforms. By the 2010s, with interest rates at historic lows, investors poured billions into any company with a slick app and a compelling story, regardless of whether the maths behind the business model ever added up. As long as you could show growth, there was always more money to raise.
Derek Thompson called this 2010s era the “Millennial Lifestyle Subsidy”—when venture capitalists and zero-interest loans made it possible for consumers to live well beyond the true cost of digital convenience. Ride across town for less than a bus ticket. Get dinner delivered for next to nothing. Work, play, shop, and stream—almost for free. The only thing that mattered was growth, not profit. Companies raised round after round, sometimes reaching deep into the alphabet to find a new letter for the next funding series. Sooner or later, though, the party ended. Investors demanded results. Prices went up. The perks faded. The user experience, once magical, started to sour.
Now, a similar cycle is playing out with artificial intelligence. Today’s equivalent of tacking “.com” onto a company name is declaring the service “AI-powered”, and investors are once again scrambling not to miss out. If a startup can credibly claim to use AI, it stands a better chance of attracting funding, even if its business model is as wobbly as those of the dotcom or app-boom darlings of the past.
For now, users are reaping the benefits: free trials, free credits on sign-up or refreshed every month, and a sense that the future has arrived. The focus is on growth, market share, and carving out a niche before the window closes. But as history shows, the party cannot last forever. Sooner or later, the bill comes due, and when it does, the change is sudden and rarely gentle.
We are, once again, living through the subsidy phase, this time powered by AI, and we are waiting for the hangover to begin.
The Enshittification of AI
So what happens after the party?
History suggests that the arc is almost always the same. Cory Doctorow, author and digital rights activist, has a word for it: enshittification. It’s the predictable decline that sets in as digital platforms and services go from dazzling to dreadful. Doctorow’s formula is simple:
First, they [companies or platforms] are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
We don’t have to look far for examples.
Google began as a breath of fresh air among search engines—fast, uncluttered, and focused solely on what users wanted—finding answers to their queries. Over time, as profit pressures mounted, the adverts multiplied, and the search experience deteriorated. Today, finding what you want can feel like hacking your way through a jungle of “sponsored” results, phishing links, and SEO-churned clickbait. Good luck using Google Search or YouTube without an ad blocker.
Facebook (and, for that matter, every app from Meta) also started out as a fun way to keep in touch with friends and family, but became a “Skinner box casino” full of manipulative feeds, intrusive ads, and endless notifications engineered for engagement rather than enjoyment, steadily diminishing its value for users and businesses alike.
AI platforms, for now, are still in their “nice” phase. The likes of OpenAI, Google, Anthropic and others offer powerful tools—often free or subsidised—to lure customers in and chase growth and market share. But the economics are brutal. Serving modern AI models at massive scale is expensive, and no company can burn billions of dollars forever. They won’t be nice for long; the generosity can’t last.
Signs of the enshittification of AI are already on the horizon. Perplexity is experimenting with ads. People are asking not if but when OpenAI will introduce them. Google’s Gemini seems all but destined to serve ads; after all, advertising is still Google’s primary revenue stream. And it won’t stop there. As these platforms seek profitability, price hikes and more restrictive plans are likely to follow. The most useful features, once included in free or lower tiers, may be locked behind ever-pricier subscriptions. Users could find themselves nudged towards more expensive plans by deliberate throttling, usage caps, or the gradual erosion of what the “basic” plan offers—all designed to squeeze even more value from each customer.
Why AI’s Decline Could Be Worse
As Doctorow points out, once platforms gain dominance and legal power, they can continually “twiddle the knobs” on the back end—changing how recommendations work, suppressing or promoting content, and making it impossible for users or businesses to get a fair deal, or even to understand what is happening. In the case of AI, there may be no clear way to trace where influence or manipulation has crept in. A recommendation, a summary, or even the absence of an answer could all be shaped by undisclosed commercial arrangements.
What makes this wave different is how subtle—and potentially insidious—the monetisation can be. In traditional search, adverts are at least labelled as such. In a chatbot conversation or AI-generated answer, it is much harder to distinguish between neutral information and paid influence. The risks are heightened because these AI systems are designed to be conversational and persuasive. Unlike a static webpage or a list of links, a chatbot can tailor its tone, remember your preferences, and respond with apparent empathy, encouraging users to trust its advice or recommendations. Over time, users may form a sense of rapport or even attachment to an AI assistant, lowering their guard and making them less likely to question whether a suggestion is truly impartial. When monetisation is woven invisibly into these interactions, the potential for manipulation—and for subtle, highly personalised influence—becomes far greater than in the platforms that came before.
Venture capital firm Andreessen Horowitz has dubbed this the dawn of Generative Engine Optimisation (GEO). Where traditional SEO focused on getting websites to rank high in search results, GEO is about getting brands, products, and messages embedded directly into the answers provided by language models. Dozens of new GEO companies have sprung up, selling tools to track and shape how often an AI model “remembers” a brand, mentions a product, or repeats a specific talking point. Referral traffic and “model memory” are the new battlegrounds for influence in the age of AI.
Worse still, the consolidation of AI services in the hands of a few major players, protected by intellectual property and anti-circumvention laws, means that switching to alternatives—or “modding” the system to restore fairness—becomes all but impossible. As Doctorow warns, “consolidation, unrestricted twiddling for them, and a total ban on twiddling for us” make the cycle of enshittification nearly inevitable.
Is There a Way Out?
Is there hope for users? Perhaps, but only if we heed the warnings of the last two decades and act before the trap closes. Doctorow argues for robust antitrust enforcement to break up tech giants, comprehensive privacy and consumer protection law, and—critically—a legal right to interoperability. Users, he says, must be able to leave platforms, take their data, and connect with other providers without facing “insurmountable collective action problems” or legal threats.
There will likely remain small sanctuaries: paid, privacy-oriented AI services, or open-source AI tools run by communities. These may not match the scale or polish of Big Tech, but they offer a vision of a more user-respecting internet—and a more ethical, human, and fair approach to AI.
But unless we insist on transparency, competition, and genuine user rights, the most probable future is clear: today’s AI tools will, in time, become tomorrow’s manipulative, extractive, and enshittified platforms.
Until then, enjoy the subsidised AI lifestyle. It won’t be around forever.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"