Complexity and uncertainty

Irreducibility

Some systems can only be understood by running them - there's no shortcut to the answer

Also known as: Computational irreducibility, No shortcuts

Originated by Stephen Wolfram

THE IDEA

You have to play it out

For some processes, there is no shortcut. No formula, no summary, no clever simplification that tells you the answer without going through every step. The only way to know what happens is to run the process and watch.

This is computational irreducibility, a concept from physicist and computer scientist Stephen Wolfram. He showed that even very simple rules - one-dimensional cellular automata far simpler than anything in the real world - can produce behaviour so complex that no analysis can predict the outcome faster than simply running the system step by step. The process is its own shortest description. There is no compression.
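
To make this concrete, here is a minimal Python sketch of Rule 30, the one-dimensional cellular automaton Wolfram used as his signature example (the width and step count are arbitrary choices for illustration):

```python
# Rule 30: each cell's next state depends only on itself and its two
# neighbours, yet the centre column is disordered enough to have been
# used as a pseudo-random number source.

def rule30_step(row):
    """Apply one step of Rule 30 to a row of 0/1 cells (zero boundaries)."""
    padded = [0] + row + [0]
    # Rule 30 in one expression: new cell = left XOR (centre OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run(width=61, steps=30):
    """Start from a single live cell and print each generation."""
    row = [0] * width
    row[width // 2] = 1                     # one live cell in the middle
    for _ in range(steps):
        print("".join("#" if cell else "." for cell in row))
        row = rule30_step(row)

if __name__ == "__main__":
    run()
```

The rule fits on one line, but there is no known formula for, say, the centre cell a thousand rows down. To get it, you compute every row in between.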

The implications for the real world are profound. When a system is irreducible, no amount of analysis, modelling, or expert opinion can tell you what will happen. The only way to find out is to let it run. This doesn’t mean analysis is useless - it means analysis has limits. You can understand the rules of the system perfectly and still be unable to predict where it’s going, because the outcome can only be computed by the system itself, in real time, with all its complexity intact.

IN PRACTICE

When the only model is the thing itself

Chess has simple rules. Any beginner can learn them in an afternoon. But those simple rules produce a game so complex that after three moves each, there are over nine million possible positions. After forty moves, there are more possible games than atoms in the observable universe. No formula tells you who will win from a given position (except trivially simple ones). The only way to know is to play it out. The game is irreducible - its simplicity generates complexity that resists every shortcut.
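
One way to feel that explosion is to count lines of play by brute force. The sketch below assumes the third-party python-chess library is installed; it counts move sequences (not distinct positions) to a fixed depth, and each extra half-move multiplies the work roughly twentyfold:

```python
# Brute-force enumeration of chess move sequences ("perft").
# There is no formula for these counts; the only way to get them
# is to generate every line of play.
import chess  # third-party: pip install python-chess

def perft(board: chess.Board, depth: int) -> int:
    """Count all legal move sequences of the given depth from this position."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)
        total += perft(board, depth - 1)
        board.pop()
    return total

if __name__ == "__main__":
    board = chess.Board()
    for depth in range(1, 5):
        # Expected counts from the starting position: 20, 400, 8902, 197281
        print(depth, perft(board, depth))
```

Going much deeper than this gets slow quickly - which is, of course, the point.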

An economy is governed by rules that economists can describe in detail - supply and demand, interest rate mechanics, fiscal policy effects. Yet no economic model can reliably predict what will happen next year, let alone next decade. Not because economists are bad at their job, but because the economy is irreducible. Billions of agents making interconnected decisions, each responding to conditions that include other agents’ responses. The only “model” that captures this accurately is the economy itself, running in real time.
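
Here is a toy version of that feedback loop, with every number invented purely for illustration: each agent reacts to the price, and the price reacts to all of them. Even at this miniature scale there is no closed-form answer for where the price ends up - you step it forward and watch:

```python
# A toy market: agents react to the price, and the price reacts to the
# agents. All parameters are illustrative, not an economic model.
import random

def simulate(n_agents=1000, steps=50, seed=0):
    rng = random.Random(seed)
    # Each agent carries a private "fair value" estimate of the asset.
    fair_values = [rng.uniform(80, 120) for _ in range(n_agents)]
    price = 100.0
    history = [price]
    for _ in range(steps):
        trend = history[-1] - history[-2] if len(history) > 1 else 0.0
        demand = 0
        for fv in fair_values:
            if rng.random() < 0.2:            # some agents chase the trend
                demand += 1 if trend > 0 else -1
            else:                             # the rest trade on value
                demand += 1 if price < fv else -1
        # Net demand moves the price, which changes every decision next step.
        price += 0.01 * demand
        history.append(price)
    return history

if __name__ == "__main__":
    print([round(p, 2) for p in simulate()[-5:]])   # last few prices
```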

You plant a garden. You know the soil type, the sunlight exposure, the watering schedule, the species of every plant. You still can’t predict exactly how it will look in two years. Which plants will thrive, which will struggle, how they’ll interact, what the weather will do, which pests will arrive - these interactions are irreducible. The garden is its own experiment, and the result is only available by growing it. Every experienced gardener knows this. They plan, but they also watch, respond, and accept surprise as the default.

WORKING WITH THIS

Living with the limits of prediction

If a system is irreducible, the strategy shifts from prediction to preparation. You can’t know the outcome, but you can be ready for a range of outcomes. Build capacity to respond. Maintain buffers. Keep options open. Watch closely and adapt quickly.

This also means treating simulation and modelling with appropriate humility. Models are useful for exploring possibilities, stress-testing assumptions, and building intuition. They are dangerous when treated as forecasts. An irreducible system will produce outcomes that no model anticipated - not because the model was flawed, but because prediction was never available.
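
In that spirit, a model can be run many times to map a range of outcomes rather than to issue a single forecast. The sketch below is illustrative only - the growth process and its parameters are made up - but it shows the shape of the practice: report a spread to prepare for, not a number to bet on:

```python
# Run a simple model many times and report a range, not a forecast.
import random
import statistics

def one_run(rng, months=24):
    """One possible trajectory of a quantity with noisy, compounding growth."""
    value = 100.0
    for _ in range(months):
        value *= 1 + rng.gauss(0.01, 0.05)   # uncertain monthly growth rate
    return value

def explore(runs=10_000, seed=1):
    rng = random.Random(seed)
    outcomes = sorted(one_run(rng) for _ in range(runs))
    return {
        "p10": outcomes[int(0.10 * runs)],
        "median": statistics.median(outcomes),
        "p90": outcomes[int(0.90 * runs)],
    }

if __name__ == "__main__":
    print({k: round(v, 1) for k, v in explore().items()})
```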

The practical question to ask is: can I shortcut this? If I understand the rules perfectly, can I jump to the outcome without running the process? For simple and complicated systems, often yes. For complex ones, often no. Recognising when you’re in irreducible territory saves you from the most expensive mistake in planning: assuming that enough analysis will eventually produce certainty. Sometimes the honest answer is: we won’t know until we try. And that’s not a gap in your knowledge. It’s a property of the system.

THE INSIGHT

The line to remember

Some things can only be understood by letting them happen. The search for a shortcut isn't diligence - it's denial.

RECOGNITION

When this is in play

You’re facing irreducibility when a model keeps being surprised by the real system, no matter how much you refine it. When experts who understand the rules perfectly still can’t agree on what will happen. When the best prediction method is “let’s try it and see” - not as laziness but as genuine epistemological humility. When a simple set of rules produces endlessly surprising outcomes. When someone asks “can you just tell me what will happen?” and the honest answer is no.

complexity, prediction, simulation, limits