Protopia.

The Compression

Curtis Duggan

Sometime around 2015, there was an office in a WeWork. You've seen this office, or one like it, even if you've never set foot inside.

The walls were exposed brick. The desks were reclaimed wood. There was a ping pong table that nobody used and a kegerator that everybody used. Someone had hung a neon sign that said "HUSTLE" in cursive, which was meant ironically, or sincerely, or both, depending on when you asked.

This office employed eleven people. There was a front-end developer and a back-end developer and a "full-stack" developer who was really just the front-end developer's roommate. There was a project manager named something like Jenna who sent Basecamp messages that began with "Just circling back on this!" There was a designer who argued, passionately and correctly, about the spacing between letters. There was someone whose job title contained the word "growth."

They were building a website for a consumer packaged goods brand. The brand sold, let's say, premium dog treats. The website had to be "responsive" and "mobile-first" and the client had requested, ominously, that it "feel like Apple."

The project would take four months. It would cost $180,000. There would be a "discovery phase."

I'm not being dismissive of this. These were real jobs done by real people who took genuine pride in their craft. The designer really did care about kerning. The developer really did lose sleep over page load times. The project manager really did keep everything from falling apart. They went to conferences. They read A List Apart. They had opinions about Sketch versus Figma that could fill an evening.

This was a culture. A whole civilization, really, with its own language and status hierarchies and rites of passage. You could spend a decade inside it and never run out of things to learn, debates to have, skills to acquire.

And then, very quickly, a large language model learned to do most of it.

I.

Let me tell you about some other offices.

There was a building in Austin, a converted warehouse near The Domain, where forty people came to work every day to write things. They called themselves a "content agency," which was the nomenclature of the era, and their clients were SaaS companies who needed blog posts.

The workflow had been refined over years. A strategist would research keywords. A writer would produce a draft. An editor would revise. A different editor would proofread. A project manager would traffic everything through a proprietary system they'd built in Airtable. An account manager would present the finished work to the client, who would inevitably request changes, which would cycle back through the whole apparatus.

They had a style guide. They had a tone guide. They had a voice guide, which was somehow different from the tone guide, and if you couldn't articulate the difference you probably hadn't been there long enough. They had walls covered with Post-it notes in colors that signified stages of completion. They had a Slack channel called #content-wins where people posted screenshots of articles that had reached the first page of Google, and everyone would react with the party parrot emoji.

The economics worked like this: A 2,000-word blog post cost the client $1,200. The writer got $200. The rest evaporated into the apparatus, into the strategists and editors and project managers and account managers and the rent on the converted warehouse and the Post-it notes and the health insurance and the quarterly team-building exercises at Top Golf.

In 2023, Claude and GPT-4 learned to write 2,000-word blog posts.

Not well, at first. The early outputs had a telltale blandness, a bureaucratic sheen that any professional writer could identify instantly. But the models improved. They learned to vary sentence length. They learned to include specific details. They learned, or seemed to learn, the difference between tone and voice.

The warehouse in Austin is not empty now, but it is quieter. The forty people became twenty, then twelve. The Post-it notes came down. The party parrot emoji appears less frequently in #content-wins, though the channel still exists, because nobody has thought to archive it.


There was a glass tower in Chicago, in The Loop, where a company employed three hundred people to answer telephones.

They called it "customer success," which was the nomenclature of the era. The phones rang and the people answered and they helped customers navigate software that was, if we're being honest, not designed particularly well. They had scripts. They had decision trees. They had a four-week training program that taught you how to de-escalate an angry caller, how to upsell a premium tier, how to express empathy in a way that was genuine but also efficient.

The best customer success representatives developed a kind of intuition. They could hear the particular exhaustion in someone's voice and know whether to crack a joke or stay businesslike. They could sense when a customer was about to churn and find exactly the right thing to say. This intuition was valued. The people who had it got promoted to team lead, then manager, then director.

The economics worked like this: The average representative handled forty tickets per day, which works out to roughly 10,000 tickets across 250 working days a year. Their fully loaded cost was about $65,000 per year. The math was simple: $65,000 divided by 10,000 tickets comes out to $6.50 per human interaction.

In 2024, Claude learned to answer telephones.

The early versions were obvious. They had the uncanny valley quality of a voice that was almost human but not quite, and customers would demand, with increasing frustration, to speak to a real person. But the models improved. They learned to pause naturally. They learned to say "um" and "let me check on that." They learned, or seemed to learn, when to crack a joke and when to stay businesslike.

The glass tower in Chicago still exists. The three hundred people are now eighty. The four-week training program has been condensed to one week, and it focuses mainly on "escalation handling," which means taking over when the AI decides a situation has become too complex or too emotional for it to manage. The intuition that used to get you promoted now gets you assigned to the difficult cases, the ones the machine hasn't learned to solve.


There was a studio in Brooklyn, in one of those neighborhoods that used to be industrial and then became artistic and then became expensive. Twelve people worked there making videos.

The videos were for brands. A beverage company would want a 30-second spot for Instagram. A fashion label would want a campaign with multiple cuts for different platforms. A startup would want an explainer video with animated graphics and a voiceover that sounded friendly but authoritative.

The workflow had been refined over years. A producer would manage the client relationship. A director would develop the concept. A cinematographer would shoot the footage. An editor would assemble the rough cut. A colorist would grade the footage. A sound designer would mix the audio. A motion graphics artist would create the animated elements. An assistant would organize the files and order the lunch.

They had opinions about cameras. They had opinions about codecs. They had a reference library of shots they admired, organized by mood and movement and lighting condition. They could talk for hours about the particular quality of light in a Terrence Malick film, or the way Roger Deakins uses practicals, or why the Alexa renders skin tones better than the RED.

The economics worked like this: A 30-second Instagram spot cost the client $35,000. The twelve people would work on it for two weeks. The cinematographer would rent $15,000 worth of equipment for the shoot day. Everyone would feel good about the final product, which would appear in someone's feed for three seconds before they scrolled past it.

In 2024, Runway and Pika and Sora learned to generate video.

Not well, at first. The early outputs had that dreamlike fluidity, fingers that didn't quite resolve, physics that didn't quite obey. But the models improved. They learned to hold a shot steady. They learned to match lighting across cuts. They learned, or seemed to learn, the difference between an insert and an establishing shot.

The studio in Brooklyn still exists, but they've pivoted. They now specialize in "production design for AI," which means creating reference images and style guides that clients use to prompt the models. The cinematographer, who spent fifteen years learning to light faces, now spends his days writing text descriptions of how faces should be lit. He is not bitter about this, or claims not to be. "It's still the same eye," he says. "Just different tools."


There was a firm in San Francisco, in a building South of Market, where eighty people worked as designers.

They designed apps. They designed websites. They designed "experiences," which was the nomenclature of the era. They had process. They had so much process. They had design sprints and design systems and design critiques and design research and design ops, which was somehow its own discipline, responsible for making sure all the other design activities happened smoothly.

The workflow had been refined over years. A researcher would conduct user interviews. A strategist would synthesize insights. A designer would sketch concepts. A different designer would create high-fidelity mockups. A prototyper would make the mockups interactive. A writer would craft the microcopy. A design lead would present everything to the client in a deck that was itself a work of considerable design.

They had opinions about everything. Rounded corners versus sharp corners. Drop shadows versus flat. Hamburger menus versus bottom navigation. They could debate whether a button should say "Submit" or "Send" or "Continue" for an hour, and the debate would not be frivolous, because they had data showing that the choice mattered, that one word could move a conversion rate by twelve basis points.

The economics worked like this: Redesigning an app cost the client $400,000. The eighty people would work on it for three months. There would be a "vision phase" and an "execution phase" and something called a "hardening sprint." The client would get a Figma file with 847 screens, each one annotated, each one precisely specified, ready to be handed off to developers who would, inevitably, build something slightly different from what was specified.

In 2024, Claude and GPT-4 learned to design apps.

Not well, at first. The early outputs looked like templates, generic assemblages of components that could have been anything or nothing. But the models improved. They learned to create visual hierarchy. They learned to consider user flows. They learned, or seemed to learn, when a button should say "Submit" versus "Send" versus "Continue."

The firm in San Francisco still exists. The eighty people are now thirty-five. The design sprints have been compressed. The "vision phase" is now handled mostly by AI, with humans stepping in to curate and refine. The debates about rounded corners continue, but there is a new participant in the debate, one that can generate a hundred variations while the humans are still sketching their first idea.

II.

I keep using the same phrase: "The models improved." Let me be more specific about what this means.

What happened was that a relatively small number of companies, concentrated in San Francisco and London and a few other places, figured out how to build systems that predict the next word in a sequence. That's all a language model does, technically. Given everything that has come before, what word is most likely to come next?

This sounds simple, and in some sense it is. But it turns out that predicting the next word requires something like understanding. To predict that the next word after "The capital of France is" should be "Paris," you need to have absorbed, somewhere in your weights, the fact that France has a capital and that capital is Paris. To predict the next word in a complex instruction, you need to have absorbed something like the ability to follow instructions.
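For readers who want the mechanism stripped to its skeleton, here is a toy sketch: a lookup table over two-word contexts, nothing remotely like a real transformer, but honoring the same contract, given what came before, return the likeliest next word. The tiny corpus and the function names are my own illustrative inventions, not anything from an actual model.

```python
from collections import Counter, defaultdict

# A toy corpus; real models train on trillions of words.
corpus = ("the capital of france is paris . "
          "the capital of italy is rome .").split()

# Count which word follows each two-word context.
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

def predict_next(context):
    """Return the most frequent continuation of a two-word context."""
    return following[context].most_common(1)[0][0]

print(predict_next(("france", "is")))  # prints "paris"
```

The point of the toy is the contract, not the machinery: the "knowledge" that France's capital is Paris lives nowhere except in the statistics of what tends to follow what. Scale that contract up by many orders of magnitude and you get the behavior the essay describes.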

The researchers at Anthropic and OpenAI and Google discovered, to their genuine surprise, that when you train a system on enough text, it develops capabilities nobody explicitly programmed. It learns to code by predicting the next character in code. It learns to reason by predicting the next step in arguments. It learns to design by predicting the next element in designs.

And because it has ingested essentially the entire written output of human civilization, it has absorbed patterns that no individual human could hold in their head. It has read every blog post about conversion optimization. It has read every case study about design systems. It has read every tutorial about cinematography. It has read the style guides and the tone guides and the voice guides. It has read the debates about rounded corners.

The thing that made the WeWork agency valuable was specialized knowledge. They knew things that their clients didn't know. They had developed intuitions that took years to build.

But intuition, it turns out, is largely pattern recognition. And pattern recognition is exactly what these models are built to do.


The compression happened faster than anyone expected.

In 2022, GPT-3 could produce text that was fluent but hollow. Readable, technically, but missing something. Professionals could always tell the difference.

In 2023, GPT-4 closed the gap. The text was still distinguishable, but you had to look more closely. The tells were subtler: a certain predictability in the sentence rhythms, a tendency to reach for the obvious metaphor.

In 2024, Claude 3 and GPT-4o closed the gap further. The text was, for most purposes, indistinguishable. The professionals who had spent years developing their ear for prose found that their ear was no longer reliable.

This was disorienting. Imagine spending a decade learning to identify counterfeit currency, developing an eye for the subtle imperfections that distinguish real bills from fake ones. Then imagine that the counterfeiters suddenly get access to technology that produces perfect fakes. Your expertise doesn't disappear, exactly, but its value collapses. The thing you were paid to see is no longer visible, because it is no longer there.

This is what happened to the content agency in Austin. This is what happened to the customer success center in Chicago. This is what happened to the video studio in Brooklyn and the design firm in San Francisco.

The skills didn't evaporate. The designer who understood kerning still understood kerning. But one person with the right tools could now do what used to take a department. A firm that employed eighty discovered that fifteen could handle the load. Which left a question hanging there, the one nobody wanted to say out loud: What had the other sixty-five been doing?

The economics inverted. The thing that had been expensive became cheap. The thing that had been slow became fast. The thing that had required coordination among many specialists became something that one generalist with the right tools could accomplish in an afternoon.

III.

Here is a story about a different kind of compression, one that happened a hundred and thirty years ago.

In the 1890s, there were something like ten million horses in American cities. These horses pulled streetcars and delivery wagons and carriages. They required feeding and grooming and housing. They produced, by one estimate, about 2.5 million pounds of manure per day in New York City alone.

Around these horses, entire industries had formed. There were farriers and stable hands and harness makers. There were hay dealers and oat merchants. There were street sweepers whose job was specifically to clear the manure, and there were entrepreneurs who collected the manure to sell as fertilizer. There were veterinarians who specialized in draft horses. There were breeders who spent generations refining bloodlines for urban work.

This was a culture. A whole civilization, really, with its own language and status hierarchies and rites of passage. The best farriers were respected craftsmen. The best stable managers were well-compensated professionals. You could spend a decade inside this world and never run out of things to learn.

Then, in a span of about fifteen years, the internal combustion engine made all of it obsolete.

The transition was not instant, but it was faster than anyone expected. In 1905, horses still dominated city streets. By 1920, they were a curiosity. The farriers and stable hands and harness makers found other work, or didn't. The hay dealers and oat merchants closed their businesses. The specialized knowledge that had accumulated over centuries compressed into nostalgia.

I bring this up because the standard narrative about technological displacement is that it happens gradually. Old jobs fade away as new jobs emerge. Workers "reskill" and "retool." The economy adjusts.

But this narrative is often wrong about the timeline. The horse economy didn't fade gradually. It collapsed rapidly, in less than a generation. The people who had built careers around horses were not able to gradually transition into careers around automobiles. The skills did not transfer. A farrier could not become a mechanic through some natural evolution of craft.

The automobile didn't make transportation slightly more efficient. It made the horse obsolete. The economic logic that had supported millions of jobs simply ceased to apply.


I am not claiming that language models will do to knowledge work what the automobile did to the horse. The analogy is imperfect, as all analogies are.

But I am suggesting that the timeline may be faster than we expect, and the mechanism may be more like replacement than gradual transition.

The content agency in Austin didn't slowly become more efficient. It lost half its staff in eighteen months. The customer success center in Chicago didn't gradually optimize its operations. It replaced two hundred people with machines in the span of a year.

This is what compression looks like. Not a gentle slope but a step function. Not a long goodbye but a sudden absence.


There is a particular quality to being inside an industry at the moment of compression. You can feel it in the conversations, though nobody wants to name it directly.

At the design firm in San Francisco, they started having meetings about "AI strategy." The meetings were long and inconclusive. Some people argued that AI would make designers more productive, that it was just another tool, like when Photoshop replaced the airbrush. Other people argued that something different was happening, that the tool was becoming the worker.

The optimists pointed to history. Every new technology creates fear of displacement, they said. Spreadsheets were supposed to eliminate accountants. ATMs were supposed to eliminate bank tellers. The jobs changed, but they didn't disappear.

The pessimists pointed to the demo videos. Have you seen what Midjourney can do? Have you seen what Claude can produce with the right prompt? The jobs aren't changing. The jobs are being done by something else.

Both sides were partly right, which made the conversation impossible to resolve. The tool was becoming more powerful, which made individual designers more productive. But the tool was also becoming autonomous, which made some designers unnecessary. Both things were true simultaneously. The question was which truth would dominate.

In the meantime, the firm kept taking projects. The partners kept projecting revenue. The designers kept debating rounded corners. But there was a new quality to the debates, a faint undertone of "does this matter anymore?" that nobody voiced aloud.

IV.

Now let me tell you about a different kind of work.

There is a workshop in the Cowichan Valley, on Vancouver Island, in a town that used to mill lumber and now mostly serves tourists passing through to visit wineries. Four people work there. They make furniture.

The workflow is simple compared to the agencies we've been discussing. A customer commissions a table. One of the four people selects the wood, examining boards for grain pattern and stability. Another person does the joinery, cutting the dovetails and mortises that will hold the piece together. Another person does the finishing, applying oils and waxes in thin coats, waiting days between applications. The fourth person handles the business side, such as it is.

There is no strategist. There is no project manager. There is no designer separate from the maker. The person who conceives the piece is the same person who builds it, which means there is no gap to manage between intention and execution.

A table takes three weeks to make. It costs $4,000. The customer waits.

If you asked the owner whether he was worried about AI, he would probably look at you like you were slightly confused. He makes tables. Out of wood. The machine would have to show up with hands.

This is the part of the essay where I am supposed to romanticize physical craft. I am supposed to suggest that there is something irreplaceable about working with materials, something authentically human that the algorithms cannot touch. This is the move you expect.

I am going to resist the move, because I think it is sentimental in exactly the way that misses what is actually interesting about the situation.

The furniture maker in the Cowichan Valley is not protected by authenticity. He is not protected by the irreducible humanity of his craft. He is protected by atoms.

His work requires arranging physical matter in space. It requires selecting boards from a lumberyard, carrying them into a workshop, passing them through machines that are themselves physical objects requiring maintenance and skill to operate. It requires applying finishes that cure through chemical reactions that take time. It requires shipping heavy objects to customers who will put them in rooms and sit at them and spill things on them.

None of this can be compressed by a language model, because none of this is language. The model can generate an image of a table. The model can write instructions for building a table. But the model cannot instantiate a table. It cannot reach into the world and rearrange matter.

This seems obvious, almost tautological. But I think it is the most important observation in this entire essay.

V.

Let me tell you about another workshop, one that doesn't exist yet.

Imagine a facility, perhaps in Ohio or Indiana or some other place where land is cheap and logistics are favorable. The facility is large but not enormous. It contains robots.

These are not the robots of the twentieth century imagination, the anthropomorphic machines that move like clumsy humans. These are something else. They are arms that can pick and place with submillimeter precision. They are vision systems that can inspect a surface for defects invisible to human eyes. They are conveyors and sorters and packaging systems that coordinate through software rather than human supervisors.

The facility makes consumer products. Perhaps electronics, perhaps textiles, perhaps housewares. Things that used to be made in Shenzhen and shipped across the ocean in containers.

The economics of this facility would have been impossible five years ago. Robots were too expensive. Vision systems were too unreliable. The coordination software didn't exist. It was simply cheaper to pay humans in countries where labor was inexpensive than to build machines that could do the same work.

But the costs have been collapsing. The robots are getting cheaper and more capable at a pace that looks a lot like software fifteen years ago. The vision systems actually work now. The coordination software is being written by the same language models that compressed the design firm in San Francisco.

And suddenly, the math changes.

If a robot can assemble a product as reliably as a human worker, then you compare the robot's hourly cost (depreciation plus electricity plus maintenance) against the human's hourly cost (wages plus benefits plus management overhead). In many cases, the robot is now cheaper, even compared to workers in low-wage countries.
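The comparison in the paragraph above can be written as a back-of-the-envelope calculation. Every number below is an illustrative assumption of mine, not a figure from any real facility:

```python
# Back-of-the-envelope version of the robot-vs-human comparison.
# All inputs are hypothetical, chosen only to show the structure.

def robot_hourly_cost(purchase_price, lifetime_hours,
                      electricity_per_hr, maintenance_per_hr):
    """Depreciation plus electricity plus maintenance, per hour."""
    depreciation = purchase_price / lifetime_hours
    return depreciation + electricity_per_hr + maintenance_per_hr

def human_hourly_cost(annual_wages, annual_benefits,
                      overhead_rate, hours_per_year=2000):
    """Wages plus benefits plus management overhead, per hour worked."""
    return (annual_wages + annual_benefits) * (1 + overhead_rate) / hours_per_year

robot = robot_hourly_cost(250_000, 50_000, 0.75, 1.50)  # ~6 years of 24/7 duty
human = human_hourly_cost(45_000, 13_000, 0.25)

print(f"robot ${robot:.2f}/hr vs human ${human:.2f}/hr")
# prints "robot $7.25/hr vs human $36.25/hr"
```

Notice what drives the result: the robot amortizes a fixed price over every hour it runs, so the more hours it works, the cheaper each hour gets, while the human's cost per hour is roughly flat. That asymmetry is the whole argument in miniature.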

But more importantly, the robot is here. The robot doesn't require a three-week shipping container journey. The robot doesn't introduce supply chain risk. The robot doesn't create a two-month lag between design change and production change.

This is the other compression. The one happening to atoms instead of bits.


For thirty years, the received wisdom has been that manufacturing is something you offshore. You do the high-value work here, the design and marketing and strategy, and you send the low-value work there, the actual making of things.

This division made sense when labor cost was the dominant factor. Why pay American wages for assembly work when you could pay Chinese wages?

But the division was always unstable. It relied on a permanent differential, an assumption that labor in some places would always be cheaper than labor in other places. It relied on shipping remaining cheap. It relied on the complexity of making things remaining beyond the reach of automation.

All three assumptions are now under pressure.

Labor differentials are shrinking as wages rise in manufacturing regions. Shipping is no longer cheap, especially after the disruptions of recent years revealed how fragile the system was. And automation, which seemed perpetually five years away, has started to arrive.

The judges on Dragons' Den, the ones who said "do it in China" with such confidence, were giving advice appropriate to a particular moment in economic history. That moment may be ending.

VI.

Here is my hypothesis, stated plainly:

The same technologies that are compressing the digital economy may be decompressing the physical one.

In the digital world, language models are collapsing the layers. Work that required many specialists now requires few. The value of pure information work, of moving words and images and code around, is falling toward zero.

In the physical world, robotics and vision systems are making local manufacturing economical again. Work that required offshoring may soon be done in the same country, or the same city, or the same building as the customer.

These are the same underlying technologies, but they cut in opposite directions. AI compresses digital work. AI enables physical work.

If you spent the last decade getting very good at arranging pixels, you are being squeezed. Your skills are being absorbed into models that can replicate them at marginal cost approaching zero.

If you spent the last decade getting very good at arranging atoms, you may be about to have an extremely interesting time.


I think about the furniture maker in the Cowichan Valley, and I think about the design firm in San Francisco, and I try to understand why their situations are so different.

The designer's work product is information. A Figma file, ultimately, is just structured data. It can be transmitted instantly. It can be copied infinitely. It can be generated by any system capable of producing structured data in the right format.

The furniture maker's work product is matter. A table is a physical object occupying space. It cannot be transmitted. It cannot be copied, only re-made. It can only be generated by a system capable of arranging matter in space.

This distinction has always existed. What's new is that information work has suddenly become cheap, while matter work is becoming newly valuable.

The skilled trades, the people who know how to build and fix and make physical things, were told for decades that they had chosen the losing side. The future was digital. The money was in software. Physical work was what you did if you couldn't get a job in tech.

That story may be reversing.

VII.

I want to end with a question rather than a conclusion.

What would it mean to take physical manufacturing seriously again?

Not as a nostalgic project. Not as a way to create jobs for their own sake. But as an actual source of value, an actual competitive advantage, an actual frontier.

Everyone has the same digital tools now. So what creates differentiation? Atoms.

If you can design a product with AI and manufacture it locally with robots and ship it to customers in days instead of months, you have something that a purely digital competitor cannot replicate. You have a moat made of matter.

This is why I keep thinking about the dog treat website, the one the WeWork agency built for $180,000 in 2015. The website was the product, back then. The differentiation was in the pixels, the user experience, the conversion optimization.

But now anyone can have that website. Claude will build it in an afternoon. The pixels have been commoditized.

What cannot be commoditized, at least not yet, is the dog treat itself. The formulation, the sourcing, the production, the quality control. The actual atoms arranged in an actual shape that an actual dog will eat.

The alpha has moved. It was in bits. Now it's in atoms.


There is a version of the future in which local manufacturing comes back, in which every city has facilities turning out products designed that morning for customers who want them that week. In this future, the supply chains are short and resilient. The feedback loops between design and production are tight. The ability to make things becomes, once again, a source of regional pride and economic advantage.

There is another version of the future in which the compression continues everywhere, in which robots remain expensive and unreliable, in which offshoring remains the default, in which physical manufacturing stays low-status and the only game worth playing is still digital.

I don't know which future we're headed toward. Nobody does.

But I know which one I find more interesting. And I know that the assumptions baked into my thinking for the past fifteen years, the assumptions that said digital was where the action was and physical was for people who hadn't gotten the memo, may have been true for a particular moment that is now ending.

The WeWork with the exposed brick and the neon sign is still there, probably. Or one like it. But the eleven people who worked there have scattered to other things, and the project that would have taken them four months can now be done in an afternoon, and the world has moved on without them quite noticing.

Meanwhile, somewhere in the Cowichan Valley, a man is finishing a table. He does not know that he is ahead of his time. He just knows that the grain came out well, and the joints are tight, and the customer is going to be pleased.

The future is always showing up in the places we're not looking.