The Leisure Trap

Acceleration without integration isn't progress. It's a trap.

Evolution will happen, but it might not happen to you.

(By “integration” I mean: new power gets absorbed, through shared ownership, bargaining power, and access, without tearing the social fabric.)

tl;dr

  • Past tech revolutions increased output, not leisure, until politics forced distribution.
  • AI increases “intelligence supply,” but shifts scarcity to infrastructure and ownership.
  • Without policy, the “no-work” story becomes dependency, not freedom.
  • “Integration” is the missing design requirement.

I. The Billionaire's Bedtime Story

Elon Musk wants you to know that work is almost over.

"AI will mean no one has to work," he told the UK Prime Minister 2023. "You can have a job if you want, for personal satisfaction, but AI will do everything."

Sam Altman sketches a similar horizon: some form of guaranteed income, and beyond that, abundance so deep that money starts to feel like an outdated interface. Marc Andreessen calls it “the greatest gift we can give ourselves.” Prices fall toward zero, intelligence becomes cheap and ubiquitous, and humanity is finally released from drudgery to pursue meaning, art, and connection.

It's a beautiful story. And if you've heard it before, that's because it's been told before.

They were wrong then. They're wrong now. And it's worth understanding why they're wrong, because the pattern is so consistent it's almost funny.

II. Two Historical Rounds

"The Machines Will Free Us"

In 1830, a Manchester mill owner named Edmund Ashworth made a prediction. The power loom, he explained, would take "weights off the shoulders of the laboring classes." Productivity would explode. Goods would become cheap. The workingman would finally have leisure.

He was half right. Productivity did explode. A single power loom did the work of forty hand-weavers. Cotton production increased 5,000 percent in fifty years. Goods became cheap.

And the workingman? He worked harder than ever.

The factory system didn't create leisure, it destroyed it. Agricultural workers had followed seasonal rhythms: long days in summer, short days in winter, feast days scattered throughout. The mills ran 12 to 16 hours a day, six days a week, fifty-two weeks a year. No seasons. No feasts. Just the clock and the overseer.

The mill owners needed small fingers for the machines, so children as young as five joined the workforce. By 1820, a third of mill workers were under fourteen. Hand-loom weavers, the skilled craftsmen that mechanization was supposed to free, saw their wages drop 75 percent within a generation. They didn't become men of leisure. They became men of starvation, then men of factories, then statistics.

The technology delivered exactly what was promised. The abundance was real. But somehow, between the production and the people, the abundance vanished, or rather, it didn't vanish at all. It just went somewhere else.

The mill owners who promised liberation were the same men who fought child labor laws, opposed wage minimums, and called factory regulation an assault on natural liberty. Their predictions weren't analysis. They were marketing.

"Science Will End Hunger"

A century later, the story repeated. In 1968, Paul Ehrlich published The Population Bomb, predicting that hundreds of millions would starve in the coming decades. The math seemed irrefutable: population was growing faster than food supply. Malthus would finally be vindicated.

Then Norman Borlaug ruined the math.

Borlaug's dwarf wheat varieties - shorter, stronger, responsive to fertilizer - tripled grain yields in a decade. The Green Revolution, they called it. India went from famine to food exporter. Mexico became self-sufficient. Ehrlich's bomb was defused. A billion people who would have died didn't.

This is true. Borlaug deserved his Nobel Prize. The technology worked.

But the promise went further. Scarcity is over, the boosters announced. With hunger solved, poverty will end. The developing world will leap forward.

They were half right again.

Global food production tripled. The number of hungry people barely budged for decades, and in recent years has even risen again, mostly tracking politics, conflict, and purchasing power more than yields. Small farmers, unable to afford the fertilizers and irrigation that the new seeds required, went bankrupt while food conglomerates consolidated. Countries that had fed themselves for millennia became dependent on seed multinationals based in St. Louis and Basel.

Today there is enough food on Earth to feed ten billion people. Seven hundred million are hungry anyway.

The technology didn't fail. The distribution did. Or rather, the distribution worked exactly as designed by the people who owned the inputs.

Abundance doesn't distribute itself. It has to be distributed. And the people who control the abundance have never, in the history of technology, volunteered to give it away.

III. The Pivot - What Musk Gets Right, and What He Ignores

So when Elon Musk promises that AI will make work optional, the question isn't whether the technology will work. It probably will. The models are real. The exponentials are real. GPT-5 will be smarter than GPT-4, and whatever comes after will be smarter still.

The question is: why would this time be different? Not how could it be different.

We can imagine utopias all day. But why would it be different, given who is building the technology, who is funding it, and what their actual incentives are?

xAI is building and expanding massive AI compute infrastructure in and around Memphis, projects running into the tens of billions of dollars. The compute required to run frontier AI (the chips, power, and cooling these models need at scale) costs tens of millions of dollars per training run. The electricity demands are so immense that tech companies are buying their own power plants. The server farms running "free" AI require water rights, chip allocations, and political connections that concentrate in fewer and fewer hands.

When the model is free but the infrastructure costs billions, who actually benefits from the abundance?
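To keep the scale concrete, here is a back-of-envelope sketch in Python. Every number in it is an illustrative assumption of mine, not a reported figure from xAI or any other lab; the only thing it is meant to show is the order of magnitude, because the order of magnitude is what decides who gets to own the abundance.

```python
# Back-of-envelope sketch of frontier-AI costs.
# All numbers are illustrative assumptions, not reported figures.

gpus = 10_000                # assumed accelerators reserved for one training run
hours = 60 * 24              # assumed ~60 days of continuous training
price_per_gpu_hour = 2.50    # assumed blended $/GPU-hour (hardware, power, cooling)

training_run_cost = gpus * hours * price_per_gpu_hour
print(f"One training run: ~${training_run_cost / 1e6:.0f} million")

sites = 5                    # assumed number of frontier-scale data-center sites
cost_per_site = 8e9          # assumed build-out cost per site, in dollars
print(f"Infrastructure build-out: ~${sites * cost_per_site / 1e9:.0f} billion")
```

None of these numbers need to be right for the argument to hold. Shift any of them by a factor of two in either direction and you are still counting in tens of millions per run and tens of billions per build-out, which is the point.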

Musk talks about what AI will do. He never talks about who will own it. And he never explains why the people who own it would give it away, when the people who owned the mills didn't, and the people who owned the seeds didn't, and the people who owned every previous abundance-generating technology in human history didn't.

"Eventually everyone benefits" is cold comfort if you're the generation being fed into the machinery. The cotton workers eventually benefited from industrialization about three generations later, after unions, regulations, and two world wars forcibly redistributed the gains. The path from abundance to broadly shared prosperity ran through the Triangle Shirtwaist fire and the Battle of Blair Mountain and Verdun.

That's not a plan. That's a meat grinder.

Yes, industrialization ultimately raised average living standards. My claim is about who benefits when, and what institutions were required to force distribution.

IV. The "No Work" Fantasy - Let's Think it Through

Take the leisure utopia seriously for a moment and watch it dissolve.

If AI does all the work, who pays you?

"Universal High Income," Musk says. Funded by what? Taxing the AI owners? The same companies building AI are lobbying for lower corporate taxes, fewer regulations, weaker unions. The political coalition for redistribution would have to be built against the interests of the people building the technology. Who's building it?

If everything is cheap, what about the things that can't scale? Housing in desirable places: fixed supply of land. Healthcare's human attention component: you can automate diagnosis, but someone still has to hold your hand when you're dying. Status goods, which are zero-sum by definition: my Tesla isn't special if everyone has one.

The abundant things become worthless. The scarce things become more expensive. You'll be able to afford infinite AI-generated content. You won't be able to afford to live anywhere interesting.

And if no one needs to work, who has power? The people who own the machines.

That's not a post-scarcity society. That's feudalism with better special effects. The serfs have Netflix and UBI checks while the lords control the substrate of everything that matters.

"It's not that robots will take your job," as Yanis Varoufakis put it. "It's that robots will take your leverage."

V. The Real Pattern - Scarcity Doesn't Disappear, It Migrates

Here's the pattern, stated plainly: every time we solve one scarcity, the bottleneck moves.

Mechanization solved the scarcity of goods and created the scarcity of capital. The Green Revolution solved the scarcity of food and created the scarcity of inputs and market access. AI will solve the scarcity of intelligence and create the scarcity of... what?

Compute. Energy. Physical infrastructure. Human attention. Access to the API layer: the gated interface that decides who gets to plug intelligence into the actual world.

The models may be open. The server farms running them will not be. The power plants feeding those servers will not be. The chip fabs building the processors will not be. Compute might commoditize eventually. But the transition period is where power concentrates, and transition periods are where politics matter.

Intelligence abundance doesn't mean power abundance. It means whoever controls the substrate of intelligence (the infrastructure that makes ‘AI’ real: data centers, chips, power, and access controls) has more power than anyone in history.

VI. Who Benefits From This Fantasy?

The leisure narrative isn't just wrong. It's useful.

For tech founders, it preempts regulation. "Don't slow us down, paradise is coming!" Every restriction becomes an obstacle to utopia, every critic an enemy of progress.

For investors, it justifies valuations. "This isn't a company, it's a civilization upgrade!" When you're selling shares in the future of humanity, normal metrics don't apply.

For politicians, it postpones hard choices. "The market will sort it out." No need for transition programs, redistribution policy, uncomfortable votes.

For all of us, it's a sedative. "I don't have to worry about my kids' careers, everything will be fine." The leisure narrative is a permission slip to avoid the hard thinking, the difficult conversations, the political engagement that actual transition management requires.

This is not futurism. It's a sales pitch. It asks you to accept disruption now in exchange for paradise later. On the word of the people profiting from the disruption.

VII. Deeper Than Musk Thinks - The Cosmic Cop-Out

But let me steelman the strongest version of the case. There's a more radical techno-optimism than Musk himself articulates, and it deserves a real answer.

Thinkers like Sara Walker and Lee Cronin argue that technology isn't something we do, it's something the universe does through us. Their Assembly Theory proposes that the same force that produced DNA, then neurons, then language, is now producing AI. We're not at the beginning of machine intelligence. We're at the latest inflection point in a 13-billion-year process of cosmic complexification.

Ray Kurzweil mapped the trajectory: each stage enables faster iteration. Biology took billions of years. Brains took hundreds of millions. Culture took tens of thousands. Technology moves in decades. AI might move in months.

From this view, AI isn't optional. It's as inevitable as multicellular life once chemistry got complex enough.

Grant the argument. Technology is evolution continuing by other means. We are the universe waking up to itself.

Now watch what that actually implies.

Evolution doesn't optimize for flourishing. It optimizes for what survives. The process that produced human consciousness also produced parasites, extinction events, and three billion years of suffering before anything could feel joy. Evolution doesn't care if the transition is humane. It only cares that something makes it through.

If we're passengers on this ride, then Musk is right: strap in and hope for the best.

But the whole point of consciousness is that we're not passengers anymore. For the first time in 13 billion years, the process can see itself. Can model its own trajectory. Can choose.

And here's what choice reveals: not all evolution is healthy evolution.

The philosopher Ken Wilber spent decades studying how consciousness develops individually and collectively. His central insight: healthy development doesn't just add new capacities. It integrates them.

I should say what I mean by “integration,” because I don’t mean a vibe. Politically, it means the gains of productivity get converted into shared freedom, ownership, leverage, access. But it’s also personal. Tools don’t just amplify competence; they amplify whatever is unresolved in us. If we can’t hold attention, regulate impulse, and choose what we’re optimizing for, we don’t get liberation. We get faster dysfunction.

Each stage must transcend and include what came before. You don't discard childhood when you become an adult, you incorporate its lessons into a more complex whole.

Wilber called the pathology "dissociation": when new capacities break away from the whole instead of integrating with it. When a stage tries to accelerate past integration, you don't get evolution. You get fragmentation. Domination. Cancer.

This is the distinction Musk can't see, or won't see. This is where his framing collapses.

The Industrial Revolution eventually integrated workers into shared prosperity. But it took three generations of suffering because integration wasn't designed in from the start. The Green Revolution created abundance, but it broke the relationship between farmers and their land, between countries and their food sovereignty. The abundance was real. The integration failed.

Real evolutionary progress doesn't grind up the prior stage, it incorporates it. The question isn't whether AI will be powerful. It's whether we'll build integration into the transition, or whether we'll call acceleration "progress" and leave the wreckage behind.

Musk talks about becoming light, spreading across the cosmos, merging with AI. But he never answers why. Why is that desirable? Why is faster always better? Why is "more complexity" the same as "more good"?

He can't answer because the answer would reveal the game. The cosmic framing (evolution, inevitability, destiny) isn't a description of where we're going. It's a justification for extraction now. It lets him build empires while claiming to serve the universe.

"We must accelerate" sounds like vision. But without integration, acceleration is just another word for greed.

If AI is inevitable cosmic evolution, why the urgency? Why the rush to build multi-billion-dollar data centers? Why the race? The universe doesn't need Elon Musk to accelerate its complexification. The process will happen regardless.

He races because he knows agency matters. His entire career is built on the premise that who builds the technology, and how fast, makes a difference.

But if agency matters, then the quality of the transition matters. The difference between evolution-through-mass-extinction and evolution-through-conscious-design.

He can't claim cosmic inevitability to deflect criticism while claiming heroic agency to attract capital. Pick one.

VIII. What Would Actually Be Different?

A genuinely different outcome would require things no one in power is proposing.

Public ownership stakes in AI infrastructure: not just UBI checks, but actual equity. If AI becomes as foundational as electricity, it should be governed like a utility, not a startup.
For example: a public “AI dividend” fund that automatically takes a small equity stake in frontier-scale data centers and chip supply chains, so when the infrastructure throws off cash, citizens get paid like owners, not like dependents.

Antitrust action before concentration, not after. We know the chokepoints: compute, data, distribution. We can see the bottlenecks forming. The time to act is now, not after the monopolies are entrenched and the lobbyists are hired.
For example: block vertical stacks where one firm owns the model, the distribution channel, and the marketplace, and require interoperability so switching costs don’t become the new moat.

Massive investment in transition support. Retraining programs that last years, not weeks. Income bridges that don't expire when the political will fades.
For example: wage insurance that tops up displaced workers’ incomes for multiple years, plus portable benefits and serious apprenticeship pipelines, not a two-week “learn to prompt” webinar.

Democratic governance of the platforms, not just the models. Who decides what AI optimizes for? Right now: shareholders. That's not physics. That's a choice.

IX. The Honest Pitch

Here's what the honest version of the pitch would sound like:

AI will be the most powerful technology ever created. It will generate enormous wealth, primarily for those who own the infrastructure. The transition will be brutal for most workers, and there is no realistic plan for redistribution. Historically, technologies like this have concentrated power, not dispersed it. There's no reason to expect this time to be different unless we make it different through political action, which we are not currently taking and which I am not personally advocating for.

But trust me. It'll be great.

That's the real pitch. Everything else is the bedtime story.

X. Waking Up

We probably are in an evolutionary transition as significant as the emergence of language or agriculture. Walker and Cronin might be right: this is the universe complexifying itself through us.

The question isn't whether to participate. We don't have that choice.
The real question is whether this will be the first major evolutionary transition that happens consciously.

The cotton mill transition took three generations and two world wars to reach something like shared prosperity. The agricultural revolution took millennia and countless collapsed civilizations. Evolution doesn't optimize for comfort. It optimizes for what works, eventually, for someone.

We could do better. We have information. We have knowledge. We have technology (embodied knowledge). We have, for the first time in history, the ability to see the transition coming and shape it deliberately.

That's not only a technological question, it's a social one.

The technology is coming. The abundance might be real. But the mechanism for distribution does not exist.

Evolution doesn't care if you suffer. But we can. That's the whole point of consciousness, the first thing in the universe that can look at the trajectory and say: not like this.

The question isn't whether to evolve. The question is whether to evolve like humans - or like bacteria.
