Six predictions and themes for 2026
The signals and inflection points worth paying attention to
It’s the start of a new year, which makes it a good moment to look ahead and identify the themes that will shape 2026. These are the signals I’m paying attention to, and why I think they matter.
A breakthrough in continual learning will happen
One of the biggest limitations of current language models is that they stop learning once training is complete. After a model is released, what it knows is effectively frozen. There are workarounds—such as giving models access to the internet or relying on retrieval-augmented generation (RAG)—but these approaches sit on top of the model rather than addressing the core problem: the model itself cannot learn new things.
Some, like Dwarkesh Patel, see this lack of continual learning as a major bottleneck preventing AI tools from living up to their promises of usefulness. It may even be the missing link between today’s language models and artificial general intelligence.
Continual learning is an area of intense research, and a meaningful breakthrough could arrive as early as 2026. If that happens, AI models would be able to absorb, synthesise, and incorporate new knowledge over time, without expensive retraining cycles or additional tools bolted onto the system. The result would be dramatically more capable models: systems that improve through use.
Consolidation is inevitable
The competition in the AI industry is fierce. New startups are fighting to carve out their niche. Meanwhile, established companies can render entire product categories irrelevant with a single model update, all while pulling top talent away with higher salaries and better titles.
As a result, AI startup founders will face increasing pressure to answer an uncomfortable question: do they still believe they can compete with bigger or better-funded competitors, or has their startup become an elaborate career progression move?
We have already seen several high-profile “acquisitions” over the past two years. Adept. Inflection. Covariant. Character.AI. Scale AI. And recently, Groq and Manus. In 2026, more names will be added to that list—especially if the AI bubble bursts, or even just deflates.
Apple—the dark horse of the AI race
Apple’s AI strategy is not going well. The company promised a massive makeover of how people would interact with its products through Apple Intelligence and a new, more intelligent Siri. Instead of a revolution, Apple Intelligence has become a symbol of failure. Apple did not deliver on its promises, and what it did ship feels half-baked and already outdated.
And yet, despite this disastrous start, Apple may still turn things around—and even emerge as a leader in consumer and personal AI.
Apple Intelligence burned Apple. Alongside Apple Vision Pro, it will likely go down as one of the biggest stains on an otherwise impressive run under Tim Cook’s leadership. However, this failure may also turn out to be a blessing in disguise. Apple is already making moves to reset its AI efforts. As part of a broader executive shake-up, Apple’s AI chief, John Giannandrea, is leaving the company, and internal AI teams are being reorganised.
The frustrating part is that Apple Intelligence had genuinely good ideas. A strong focus on privacy, on-device models, and deep integration of a smarter Siri across Apple’s ecosystem is exactly the right vision for personal AI. The vision was good—the execution was not. If Apple learns the right lessons and manages to deliver a trustworthy, predictable, and genuinely useful AI experience through a new Siri, it could still dominate personal AI.
Apple was not the first to introduce an MP3 player, nor the first to build a smartphone. What it consistently did better than others was take immature, chaotic technology and turn it into something reliable, intuitive, and mainstream. Personal AI is still in that awkward, unfinished phase. If Apple can apply its traditional strengths—privacy, polish, and deep ecosystem integration—to Apple Intelligence, it could still define what consumer and personal AI looks like. If that “Apple magic” still exists inside the company, I wouldn’t bet against it.
A viral deepfake will cause major civil unrest
In the summer of 2024, following a stabbing in Southport, UK, which resulted in the death of three children and injuries to eight others, the country was thrown into a week of riots fuelled by racist, Islamophobic, and anti-immigrant misinformation. The event that sparked the 2024 summer riots was, tragically, real. But what if a real event is no longer necessary to trigger a similar response? What if an AI-generated video is enough?
I hope this prediction does not come true anywhere in the world. However, given how good AI video generators have become—and how polarised societies are across the globe—this scenario is increasingly plausible. With each passing year, these tools produce videos that are increasingly difficult to distinguish from footage captured by a real camera. On top of that, there are deliberate techniques to make AI-generated clips appear more authentic, such as presenting them as security camera or dashcam recordings.
The companies behind these video generators are taking steps to reduce harm. These include rejecting certain prompts, adding visible watermarks (as Sora does), or embedding identifiers directly into the video, such as Google’s SynthID or initiatives like C2PA. But I do not see platforms like Instagram, TikTok, or YouTube clearly flagging which videos are real and which are not. As long as that remains the case, even overwhelming evidence that a video is fake may not matter. A lie repeated thousands of times will become the truth.
The backlash against AI will intensify
One trend that simmered and gained strength throughout 2025 was resistance to AI. I expect it to become a major movement in 2026, manifesting both in opposition to AI data centres and in the vilification of anything even suspected of being touched by AI.
A major flashpoint will be AI infrastructure. Wherever large data centres are built, local communities often see little direct benefit: electricity prices tend to rise, and few new jobs are created. There is also a growing narrative around water usage. While this concern is only partially accurate (modern data centres typically rely on closed-loop cooling systems that recycle water after an initial intake), it is an emotionally powerful argument that resonates strongly at a local level.
This tension will be amplified by politics. With the US mid-term elections scheduled for 2026, opposition to AI infrastructure is likely to be used by political actors looking for populist talking points. “Big Tech”, abstract as it may be, is easy to vilify, and AI data centres are its most visible physical manifestation. Expect protests and increasingly hostile public hearings wherever large-scale AI infrastructure is proposed.
Additionally, I expect that in 2026, even the smallest suspicion of using AI, especially in the arts, will be met with boycott campaigns, pile-ons, and public shaming. “No AI was used” will become a marketing slogan, while creators and studios will increasingly feel pressured to prove their work was done by humans, not by AI.
A glimpse of what’s coming has already played out in the gaming industry. Larian Studios, the studio behind the beloved Baldur’s Gate 3, found itself in a PR nightmare after its CEO and founder, Swen Vincke, admitted that the studio uses AI in the early phases of game design. Similarly, Sandfall Interactive faced intense criticism after admitting that AI-generated placeholder textures had accidentally made it into the final release of Clair Obscur: Expedition 33. Even though the textures were removed shortly after launch, Sandfall was stripped of its Indie Game Awards nominations and drew a hostile response from parts of the gaming community.
One final piece of evidence for the growing resistance to AI can be found in language. In 2025, the word “clanker” entered common usage as a derogatory term for robots and AI technology. Cultural changes are often accompanied by changes in language, and when a derogatory term enters common use, it reflects something deeper: a growing tendency to distrust and reject anything associated with AI.
The rise of offline and “AI-free” experiences
Building on the previous theme, the backlash against AI will drive a renewed interest in offline and explicitly “AI-free” experiences. This mirrors what happened with dating apps: when they failed to deliver meaningful connections, people slowly returned to offline alternatives.
A similar “reject modernity, embrace tradition” movement will emerge in response to an increasingly toxic online experience, one flooded with AI slop and low-effort content. As digital spaces lose their sense of authenticity, physical experiences—where effort, skill, and human presence are visible—will regain appeal. From local events and hobby groups to analogue media and in-person communities, people will seek out spaces that feel grounded, intentional, and unmistakably human.
Meanwhile, in online spaces, authenticity and imperfection will become key ways to stand out from the crowd. Being “perfect” will be associated with AI-generated content, while visible human flaws will signal real effort and intention.
This will create a “premium market” for human-created content and open opportunities for brands that genuinely embrace human creators. Take a look at how much positive feedback Apple received for its new Apple TV logo animation after it was revealed to be handmade with no CGI. Then compare it with the response Coca-Cola got for its AI-generated 2025 Christmas ad. Betting on human craft and human connection will pay off in 2026.
These are my predictions and themes to watch in 2026. I’ll come back to them at the end of the year and see how many I got right—and how many I got wrong.
If you disagree, I’d love to hear why. And if you have your own predictions or themes, please share them in the comments.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"




