Optimise for the cheapest change
2026-03-03
Most engineers already agree with Dan North’s idea of building the best simple system for now. It is solid advice: avoid speculative architecture, ship the thing you need, and keep today’s constraints visible.
Where teams still get stuck is what happens next. The first system works, but replacing or reshaping it later is slow and risky.
The hidden failure mode
A common sequence is straightforward:
- Avoid premature abstraction.
- Build a simple system.
- Ship it.
- Discover that further change is expensive.
That does not always happen because the original design was wrong. It often happens because the process of change is costly: feedback loops are long, deployment is fragile, boundaries are fuzzy, and local edits trigger non-local consequences.
The practical failure is not “we built something simple.” It is “we built something expensive to modify.”
Shift the optimisation target
Instead of optimising for the best system, optimise for the cheapest change.
This is not a call for more abstraction or hypothetical extension points. It is a call for operationally cheap change:
- short feedback loops
- localised change surfaces
- explicit boundaries with minimal hidden state
- minimal manual release steps
- deployment that feels routine
That is the platform concern in plain terms: making safe change normal.
What this looks like in practice
This blog is intentionally shaped for local, low-risk edits.
- Posts are plain Markdown files.
- Most edits touch one file.
- Hot reload makes local feedback immediate.
- Push to `master` deploys automatically, typically in about 90 seconds.
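
The push-to-deploy step above can be sketched as a minimal CI workflow. This is illustrative only, under assumptions: the workflow file, build and deploy commands (`make build`, `make deploy`), and the use of GitHub Actions are hypothetical, not this blog's actual pipeline.

```yaml
# .github/workflows/deploy.yml — hypothetical sketch of push-to-deploy
name: deploy
on:
  push:
    branches: [master]        # the branch named in the post
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build        # assumed build command
      - run: make deploy       # assumed deploy step, e.g. syncing static output
```

The point of a workflow this short is the thesis itself: when the whole release path is one small, readable file, deployment stays routine and changing it stays cheap.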
The product of this system is published writing, and AI helpers are part of that workflow. The point is not that AI exists; the point is that the system gives AI a safe operating surface. When artifacts are local and readable, changes are easy to verify and easy to review.
AI tends to amplify systems that are already coherent. In tangled systems, it tends to amplify the tangle.
What this means for platform work
The same pattern holds in larger systems.
Change gets expensive when adding one feature touches five subsystems, when builds and tests are slow, when reviews are noisy, or when deployment requires coordinated rituals.
If you care about throughput over time, high-leverage platform work is reducing the cost of safe modification. That usually pays off faster than adding another abstraction layer.
Not the same as premature abstraction
Premature abstraction asks, “What flexibility might we need?”
Cheapest-change thinking asks, “What change will definitely come, and how do we make that change safe and fast?”
The second question is easier to reason about because it is tied to observable flow, not speculation.
Closing
Systems evolve, requirements move, and tools get replaced. None of that is surprising.
The useful design question is how painful those transitions are.
Build the best simple system for now, then shape it so replacing it later is cheap.