Two paths to the future
I’m a designer who started building things with AI about a year ago. Since then, I’ve automated more and more of my own work. Things that used to take me weeks now take days. Things that used to take a team now take me and a good prompt.
Every week there’s a new tool, a new model, a new company launching something that makes the previous thing look slow. And every week I see more companies around me adopting AI to move faster, ship more, do more with less.
But when I look outside my bubble, the picture is very different. Most people haven’t felt this yet. Their jobs look the same. Their companies haven’t really changed.
The acceleration is real, but it’s not evenly distributed. Some of us are already living in the future. Most people are still in 2023.
That gap is what keeps me up at night. Because how fast it closes changes everything. Not just for people like me who work with AI every day, but for everyone. And I keep coming back to two possible scenarios for how this plays out.
The slow burn
In this version, AI gets better gradually. Progress continues but doesn’t explode. Models improve year over year. Automation creeps into industry after industry, but slowly enough that it doesn’t feel like a crisis at any single moment.
Over seven to ten years, maybe longer, jobs start disappearing. First the ones already being hit hardest. Software development, where AI can already write, test, and review code faster than most junior engineers. Data entry. Basic customer support. Routine legal work. Content production.
Then it spreads. Accounting. Logistics coordination. Design work that used to require a human eye. Each year, a few more roles get automated. Each year, the job market gets a little tighter.
The problem with this scenario is the boiling frog. The change is real, but never fast enough for anyone to panic. Not fast enough for politicians to talk seriously about universal basic income or radical wealth redistribution.
Those conversations will get pushed to the next election cycle, and the one after that. Nobody wants to be the person who tells voters that half the jobs in the economy are going away.
Meanwhile, real people lose their livelihoods. Not all at once, but steadily. Enough to build resentment. Enough to fuel the kind of anger that finds targets.
We’re already seeing this. Someone recently attacked Sam Altman’s home with a Molotov cocktail and then threatened to burn down OpenAI’s offices. There’s growing talk of sabotaging AI data centres. The frustration is real and it’s not irrational. People are watching their industries get hollowed out while the people building the technology get richer.
I think this is the worse scenario. Not because the destination is different, but because the journey is brutal. A decade of rising inequality, political paralysis, and growing public anger, all while the technology continues advancing regardless.
Climate change is the closest analogy. The science was clear for decades, but the response was too slow because the pain was too gradual to force action.
The one advantage of the slow path is that people have more time to adapt. More time to retrain, to start new kinds of businesses, to reorganise communities. But adaptation only happens if people see what’s coming, and the whole point of the boiling frog is that they don’t. Not until the water is already too hot.
The sudden shock
Now imagine the opposite. AI improves rapidly over the next one to three years. Models start improving themselves. Robots get mass-produced and deployed into warehouses, kitchens, construction sites, hospitals. The gap between what AI can do and what humans do for a living closes almost overnight.
This scenario feels like COVID. One month everything is normal. The next month, everyone you know is affected. Companies that don’t accelerate with AI go bankrupt within quarters, not years. Entire verticals you thought were safe get restructured from the inside.
The change is fast enough to be genuinely shocking. Mass unemployment at a speed nobody has seen before.
But here’s why I actually think this is the better scenario, even though it sounds worse.
When things move this fast, there is no option to delay. Politicians can’t push it to next year. Companies can’t pretend it’s not happening. When unemployment doubles in eighteen months, you get emergency legislation. You get radical solutions because there are no incremental ones left.
Companies need people to have money to buy their products, so they have a direct incentive to push for redistribution. Governments see the numbers and have no choice but to act.
It’s like ripping off a band-aid. It hurts sharply for a short time, maybe a year of real chaos, and then you’re in a new reality and everyone has to build from there. The transition is brutal but it’s fast. And once it’s done, it’s done.
Sam Altman wrote about this back in 2021 in Moore’s Law for Everything. His argument was that AI would make everything cheaper so quickly that the right response isn’t to slow it down, but to make sure the wealth it creates gets distributed. Tax the companies, fund the people, and let the machines do the work.
More recently, Elon Musk has been saying something similar, talking about a universal “you can have whatever you want” income powered by AI and robotics, and that saving for retirement will become irrelevant.
When two of the most powerful people in tech are both saying abundance is coming, it’s worth paying attention. I think they’re directionally right, even if the politics are harder than either of them suggests.
What comes after
Both paths lead to the same destination; only the speed differs.
If everything can be automated, if robots can build and maintain and deliver and serve, if AI can handle the coordination and problem-solving layer on top, then the cost of producing almost anything drops close to zero.
The only real limit is energy. And within that limit, we can have it all.
Food, housing, devices, healthcare, education, travel, entertainment, all available to everyone at a fraction of what they cost today.
Diseases get eradicated because AI can run medical research faster than any human team. Lifespans extend because the bottleneck was never biology itself but our ability to understand and modify it. Problems that seemed permanent start getting solved one by one because solving them becomes cheap.
This is the age of abundance. Not a utopia. Utopias are fantasies. But a world where the baseline standard of living for every human being is dramatically higher than what most people experience today.
“The future is already here. It’s just not evenly distributed.”
William Gibson
That quote has never been more relevant. Because abundance won’t arrive everywhere at once.
The risks nobody wants to talk about
There are real dangers in this future, even the optimistic version.
The most obvious one: who controls the most powerful AI? AGI, artificial general intelligence, means AI that can do anything a human can do. ASI, artificial superintelligence, means AI that’s far beyond human ability.
If the leading AGI or ASI systems end up in the hands of authoritarian governments or private actors who have no interest in sharing, abundance becomes a weapon. One group has everything. Everyone else has what they’re allowed to have. The technology is the same, the outcome is entirely different depending on who holds it.
Then there’s the problem of countries whose entire economies are built on producing things for richer countries.
India alone has over 5 million IT workers in its outsourcing sector and another 1.6 million people working in call centres. Investment bank Jefferies has predicted that Indian call centres could face a 50% revenue hit from AI over the next five years. Offshore software development, the other pillar of India’s tech economy, is already being squeezed as AI coding tools let smaller teams do what used to require large offshore teams.
Add textile workers in Bangladesh, factory workers across Southeast Asia, and you’re looking at hundreds of millions of people whose livelihoods depend on doing work that richer countries are about to automate.
Those economies don’t just lose jobs. They lose their entire economic model. And they haven’t reached the abundance level yet. They’re left behind in the transition, facing extreme poverty while the countries that built the AI move on.
I’m optimistic that this gap closes eventually. When you can produce almost anything for almost nothing, helping people isn’t expensive anymore. The question is whether the political will exists to actually do it, and whether it happens fast enough to prevent real suffering.
There will also be things that stay scarce no matter how much abundance we create. You can’t automate a view of Central Park. There’s only one Mona Lisa. Beachfront property doesn’t scale. The things that are truly finite will become even more valuable, and access to them will still depend on wealth or luck or both. Abundance raises the floor. It doesn’t eliminate the ceiling.
How to prepare
Regardless of which path we end up on, the preparation is mostly the same.
Build skills that are hard to automate. Problem definition, judgment, taste, agency. The ability to frame the right question, evaluate the answer, and decide whether it’s good enough. Those skills matter in both scenarios.
Stay flexible. Don’t bet your entire identity on a job title or a specific skill set. The people who adapt fastest will be the ones who treat their skills as tools rather than identities. You’re not “a designer” or “an engineer.” You’re a person who can learn, and right now you happen to be doing design or engineering.
Build financial resilience. If you can, reduce your fixed costs and increase your savings. Not because the world is ending, but because transitions are bumpy even when they end well.
Pay attention. Don’t be the frog. Watch what AI can actually do, not what people say it can do. Try the tools. Build something with them. Form your own opinion about where this is going and how fast.
And don’t assume someone else will figure it out for you. Not your government, not your employer, not the tech companies. The people who come through transitions well are the ones who saw it coming and moved early.
The middle road
There’s a third possibility: something in between. AI takes off faster than the slow scenario but not as dramatically as the fast one. Maybe four to six years of accelerating change.
Honestly, I think this plays out more like the slow burn than the shock. It’s still gradual enough for politicians to delay, still uneven enough for people to underestimate, still painful enough to generate resentment. The middle road isn’t really a middle road. It’s just the slow burn compressed, with the same problems arriving a few years earlier but without the urgency that forces real solutions.
If I had to bet, I’d bet on something closer to the fast scenario. Not because I know, but because the pace of improvement in the last two years has consistently surprised even the people building these systems. Every benchmark, every capability threshold, every “that’s still five years away” prediction has been blown through faster than expected. The trend line points toward acceleration, not gradual improvement.
The most interesting time
We are living through the most consequential period in human history. That’s not hype. The decisions made in the next few years, about who controls AI, how the wealth gets distributed, and how fast we adapt our institutions, will shape centuries.
Both paths lead to abundance. One gets there through a decade of unnecessary suffering. The other gets there through a year of shock. Neither is comfortable. But the destination is worth it.
I’d rather rip the band-aid off.
