Systems Thinking
A map of how complex systems work — and how to design, reason about, and fix them. My main lens for both technical and human problems.
Systems respond to their own outputs through feedback loops. Two types: reinforcing loops, which amplify change (compound interest, viral spread), and balancing loops, which counteract change and seek a goal (a thermostat, inventory restocking).
Most interesting system behavior comes from the interaction between multiple loops.
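A minimal sketch of the two loop types interacting (the growth rate and capacity are made-up parameters): a reinforcing loop drives growth in proportion to the stock, a balancing loop pushes back as the stock approaches a limit, and together they trace an S-curve that neither loop produces alone.

```python
# Sketch: a reinforcing loop (growth proportional to the stock) combined with
# a balancing loop (growth damped as the stock approaches capacity).
# Parameters are illustrative, not taken from any real system.

def simulate(steps: int = 50, growth_rate: float = 0.3, capacity: float = 1000.0) -> list[float]:
    stock = 10.0
    history = [stock]
    for _ in range(steps):
        reinforcing = growth_rate * stock               # amplifies change
        balancing = reinforcing * (stock / capacity)    # counteracts it near capacity
        stock += reinforcing - balancing                # net flow into the stock
        history.append(stock)
    return history

if __name__ == "__main__":
    trajectory = simulate()
    print(f"start={trajectory[0]:.0f}, end={trajectory[-1]:.0f}")  # S-curve toward capacity
```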
Properties that appear at the level of the system but not in any individual component.
You can’t predict emergent behavior by studying components in isolation.
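One toy illustration, not a model of any real system: an elementary cellular automaton (Rule 30, chosen arbitrarily) in which every cell follows a trivial three-neighbor lookup rule, yet the global pattern is not apparent from reading the rule itself.

```python
# Each cell looks only at itself and its two neighbors; the system-level
# pattern that results cannot be read off the local rule.

RULE = 30  # illustrative choice of elementary CA rule

def step(cells: list[int]) -> list[int]:
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        out.append((RULE >> index) & 1)
    return out

if __name__ == "__main__":
    row = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
    for _ in range(20):
        print("".join("#" if c else " " for c in row))
        row = step(row)
```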
Every system has stocks (accumulations — water in a tank, knowledge in memory, money in an account) and flows (rates of change — fill rate, learning rate, spending rate).
The key insight: stocks change slowly. You can’t instantly drain a reservoir. You can’t instantly retrain an organization. This is why systems have inertia — and why policy changes take time to show effects.
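A sketch with illustrative numbers: the stock changes only through its flows, so even doubling the inflow changes the slope, not the level, and the effect takes time to accumulate.

```python
# A stock integrates its flows over time; it cannot jump, which is where
# system inertia comes from. Numbers below are illustrative.

def run(stock: float, inflow: float, outflow: float, steps: int, dt: float = 1.0) -> float:
    for _ in range(steps):
        stock += (inflow - outflow) * dt   # the only way the stock can change
    return stock

if __name__ == "__main__":
    reservoir = 10_000.0
    # Doubling the inflow does not double the stock; it only changes the slope.
    print(run(reservoir, inflow=50.0, outflow=40.0, steps=30))    # 10300.0
    print(run(reservoir, inflow=100.0, outflow=40.0, steps=30))   # 11800.0
```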
“A chain is only as strong as its weakest link.”
In any system with a goal, there is always a binding constraint, the factor limiting throughput. Goldratt’s Theory of Constraints says: identify the constraint, exploit it, subordinate everything else to it, elevate it, then repeat, because once elevated the constraint moves elsewhere.
Non-constraints have slack. Optimizing non-constraints without addressing the constraint is waste.
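A toy serial pipeline (the stage names and rates are invented) makes the arithmetic concrete: throughput is the minimum stage rate, so speeding up a non-constraint changes nothing until the constraint itself is elevated.

```python
# Throughput of a serial pipeline is set by its slowest stage.
# Stage names and rates are illustrative only.

def throughput(stage_rates: dict[str, float]) -> float:
    return min(stage_rates.values())

stages = {"intake": 120.0, "review": 30.0, "deploy": 80.0}   # units per hour

print(throughput(stages))     # 30.0  (review is the constraint)

stages["deploy"] *= 2         # optimize a non-constraint
print(throughput(stages))     # still 30.0: wasted effort

stages["review"] *= 2         # elevate the constraint
print(throughput(stages))     # 60.0, and the constraint may now move
```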
A distributed system is a collection of independent computers that appears to its users as a single coherent system. The hard problems: partial failure, unreliable networks, the absence of a global clock, and keeping replicated state consistent.
CAP theorem: of consistency, availability, and partition tolerance, you can have at most two. Since partitions are not optional in practice, the real choice during a partition is between consistency and availability.
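A sketch of that trade-off in terms of quorum replication; N, R, and W here are illustrative and not tied to any particular database. With overlapping quorums (R + W > N) a read sees the latest acknowledged write, but a partition that isolates a minority makes that side unavailable; shrink the quorums and both sides stay available at the cost of stale reads.

```python
# Quorum replication sketch: N replicas, a write needs W acks, a read contacts R.
# Parameters are illustrative.

def consistent(n: int, r: int, w: int) -> bool:
    # Read and write quorums are guaranteed to overlap, so a read
    # always includes at least one replica with the latest write.
    return r + w > n

def available_during_partition(reachable: int, quorum: int) -> bool:
    # Can this side of the partition still assemble a quorum?
    return reachable >= quorum

N, R, W = 5, 3, 3
print(consistent(N, R, W))                  # True: quorums overlap

# A partition splits the replicas 2 / 3:
print(available_during_partition(2, W))     # False: minority side must refuse writes (CP)

# Shrinking the quorums keeps both sides available but gives up the overlap:
R, W = 1, 2
print(consistent(N, R, W))                  # False: stale reads are now possible (AP)
```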
“Organizations which design systems are constrained to produce designs which are copies of the communication structures of those organizations.”
Software architecture mirrors org structure, not the other way around. A monolith often signals a team that communicates well. Microservices often signal org fragmentation.
Inverse Conway Maneuver: deliberately restructure the team to produce the desired architecture.
Every useful abstraction hides complexity — and in doing so, creates a leaky abstraction. Joel Spolsky’s law: “All non-trivial abstractions, to some degree, are leaky.”
The abstraction layer decides what complexity to hide. Good layers hide accidental complexity (details of implementation) while exposing essential complexity (semantics that matter to the caller).
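A small Python illustration of a leak: list and deque both offer a "remove from the front" operation, but the cost of the underlying implementation shows through (the sizes are arbitrary).

```python
# Both containers present the same front-removal interface, but the
# implementation leaks through as asymptotic cost: list.pop(0) shifts every
# remaining element, deque.popleft() does not.

from collections import deque
from timeit import timeit

n = 50_000
as_list = list(range(n))
as_deque = deque(range(n))

print(timeit(lambda: as_list.pop(0), number=n))       # quadratic total work: the abstraction leaks
print(timeit(lambda: as_deque.popleft(), number=n))   # linear total work
```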
Knowledge moves through teams the same way data moves through networks: with latency, loss, and distortion. Bottlenecks in information flow produce the same effects as bottlenecks in data pipelines — queues build up, decisions get made on stale data.
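A toy queue with made-up arrival and service rates shows both symptoms at once: when items arrive faster than they are handled, the backlog grows without bound and whatever finally gets processed is increasingly stale.

```python
# When the arrival rate exceeds the service rate, the queue grows and the
# age of the item being processed (its staleness) grows with it.
# Rates are illustrative.

from collections import deque

arrival_rate, service_rate = 5, 3        # items per tick
queue: deque[int] = deque()              # each item tagged with its arrival tick

for tick in range(100):
    queue.extend([tick] * arrival_rate)
    for _ in range(min(service_rate, len(queue))):
        item_created_at = queue.popleft()
    if tick % 20 == 19:
        staleness = tick - item_created_at
        print(f"tick={tick:3d}  backlog={len(queue):4d}  decision made on {staleness}-tick-old info")
```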
Systems behave as incentives dictate, not as designers intend. Once a metric is rewarded, people optimize the metric rather than the outcome it was meant to track. That is Goodhart’s Law:
“When a measure becomes a target, it ceases to be a good measure.”