This content originally appeared on DEV Community and was authored by Glacius
In the previous post, we established that simplicity is about reasoning—the ability to clearly understand what a system does, how it works, and why it behaves in a particular way.
To understand what impedes that reasoning, this post takes a step back to analyze the fundamental nature of the systems we build. We will borrow insights from the field of cybernetics to dissect how modern software works, in search of a precise diagnosis of what makes it complex.
The Non-Trivial Nature of Modern Software
In computer science, a “Turing complete” system is capable of performing any computation a theoretical Turing machine can. All modern programming languages are Turing complete, providing the fundamental power to build incredibly diverse applications. However, while the underlying computational engine (the programming language itself, or the CPU) operates according to fixed, deterministic rules—making it “trivial” in its core operation—this doesn’t mean the software systems we build using these engines are trivial.
Drawing from Heinz von Foerster’s work in cybernetics, we can make a crucial distinction:
A Trivial System is one whose input-output relationship is invariant and entirely predictable. Given the same input, it will always produce the exact same output, regardless of its internal state or past operations. You can precisely and analytically determine its behavior, as its rules are fixed and its history doesn’t influence its current response. Think of a simple calculator: “2 + 2” always yields “4.”
A Non-Trivial System, conversely, is one whose input-output relationship is variant and often unpredictable from an external perspective. Its behavior depends not only on its current input but also on its internal state, which is continuously shaped by its past operations and experiences through feedback loops. Its current output is a function of both its input and its history, leading to adaptive or emergent behaviors that are hard to predict without knowing its full context.
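The distinction can be made concrete with a minimal sketch. The names and the rate-limit rule below are invented for illustration; the point is only the contrast between a pure function and an object whose history changes its behavior:

```python
# A trivial system: a pure function. Same input, same output, always,
# regardless of how many times it has been called before.
def add(a: int, b: int) -> int:
    return a + b

# A non-trivial system: output depends on internal state shaped by history.
class RateLimitedAdder:
    def __init__(self) -> None:
        self.calls = 0  # internal state: a memory of past operations

    def add(self, a: int, b: int) -> int:
        self.calls += 1
        # After three calls the same input produces a different outcome:
        # history, not just input, now determines the response.
        if self.calls > 3:
            raise RuntimeError("rate limit exceeded")
        return a + b
```

Observed from outside, the second system looks unpredictable: the fourth identical call fails, and only knowledge of its internal state explains why.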
Crucially, modern software systems have to be non-trivial to fulfill their purpose. They are designed to model and interact with the real world—processes like global economies, supply chains, or intricate biological systems—which are themselves profoundly non-trivial in their dynamic, evolving nature. Furthermore, these systems are built for and used by humans, who are the quintessential non-trivial systems. To effectively meet complex, adaptive requirements and provide meaningful interactions with users, software simply cannot remain a purely trivial, static input-output machine.
Consider a common system like Identity and Access Management (IAM). When you attempt to log in, the system’s response isn’t a simple, fixed output for a given username and password. Instead:
It checks its internal state (the user database) to see if your credentials exist and are valid. This state is a product of past actions, such as your initial registration.
If you’ve made too many failed attempts, the system’s internal state might reflect a “locked account” status. Even with the correct password, your login will fail due to this history-dependent state.
A successful login might update your “last login” timestamp, while a failed attempt might increment a “failed attempts counter.” These are feedback loops where your action directly modifies the system’s internal state, influencing its future behavior (e.g., triggering an alert, locking the account).
Thus, the same input (your username and password) can yield different results (success, wrong password, account locked, password expired) depending on the system’s internal, evolving state. This makes IAM systems, and indeed most interactive modern applications, fundamentally non-trivial. The presence of databases, user accounts, personalization features, recommendation engines, and adaptive interfaces in almost every modern application built today serves as compelling evidence: the vast majority of software systems we construct are, by their very nature, non-trivial. This inherent non-triviality fundamentally shifts how we must approach their design and comprehension.
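A toy version of this login flow makes the history dependence visible. The lockout threshold, field names, and return values here are hypothetical, chosen only to mirror the behaviors described above:

```python
MAX_FAILED_ATTEMPTS = 3  # illustrative threshold

class Account:
    def __init__(self, password: str) -> None:
        self.password = password
        self.failed_attempts = 0   # state accumulated from past inputs
        self.locked = False
        self.last_login: float | None = None

    def login(self, password: str, now: float) -> str:
        if self.locked:
            return "account locked"    # the correct password no longer helps
        if password != self.password:
            self.failed_attempts += 1  # feedback loop: input mutates state
            if self.failed_attempts >= MAX_FAILED_ATTEMPTS:
                self.locked = True     # state change conditions future outputs
            return "wrong password"
        self.failed_attempts = 0
        self.last_login = now          # successful login also updates state
        return "success"
```

The same input, the correct password, yields "success" or "account locked" depending entirely on the sequence of events that came before it.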
Complexity in Non-Trivial Systems
Now that we understand that modern software systems are non-trivial, what does this tell us about complexity? If simplicity is about reasoning, and non-trivial systems complicate that reasoning, then acknowledging non-triviality is the first step towards managing complexity.
Consider again the definition of a trivial system: its input-output relationship is invariant, entirely predictable, and fundamentally stateless. For such a system, complexity is generally low because its behavior can be fully understood by merely examining its fixed rules and current inputs. There’s little “mystery” to solve, as its past doesn’t influence its present.
Therefore, if most modern software is non-trivial, it follows logically that the significant complexity within these systems must lie precisely in the characteristics that make them non-trivial.
The Role of Statefulness
The defining characteristic of a non-trivial system is its internal state—a memory of past operations that evolves over time. This state is continuously modified by new inputs and, in turn, conditions all future outputs.
In software, this state is pervasive, manifesting in databases, caches, session variables, and the memory of running processes. It is precisely this dynamic quality that allows an application to be personalized, persistent, and reactive, elevating it beyond a mere computational tool.
Yet, this very statefulness introduces a fundamental challenge to reasoning. When a system’s output depends not just on its input but on its entire history, its behavior cannot be understood without knowing its current state and the sequence of events that produced it. The simple question, “What happens when this button is clicked?” becomes intractable, splintering into a series of context-dependent possibilities: it depends on the user’s identity, their previous actions, and the state of the surrounding environment.
Reinforcement Loops and Emergence
The challenge of statefulness is further amplified by feedback and reinforcement loops. These are mechanisms where a system’s output, or the consequences of that output, feed back into its internal state, influencing future behavior. In our IAM example, a failed login attempt increments a counter, which can then trigger a lockout—a clear reinforcement loop. In a recommendation engine, a user’s click (output) updates their profile (state), leading to different recommendations (future output).
When multiple such loops interact, especially across different components or services, predicting the system’s overall behavior becomes exponentially harder. Small, local changes can propagate and amplify, leading to behaviors that were not explicitly programmed but emerge from the interaction of these loops. This emergent behavior is a hallmark of truly complex adaptive systems. It’s why a small bug in a payment processing system might only manifest under very specific, rare combinations of user actions and data states, or why seemingly benign feature additions can unexpectedly degrade performance or introduce security vulnerabilities elsewhere.
This phenomenon bears a striking resemblance to concepts from chaos theory, often popularized as the “butterfly effect.” Just as a butterfly flapping its wings in Brazil might, over time, contribute to a hurricane in Texas, a seemingly minor change or unexpected input in one part of a highly interconnected, stateful software system can trigger a cascade of effects, leading to unforeseen and disproportionate outcomes elsewhere. This makes deterministic prediction exceptionally difficult and highlights the inherent unpredictability of highly non-trivial systems.
Ultimately, this emergent nature is where the significant complexity of modern software systems truly resides. It’s the source of the “labyrinth of interconnected, history-dependent decisions” we grapple with, making systems difficult to reason about, modify, and trust.
Simplicity in the Face of Non-Triviality
If this emergent nature is the heart of complexity, then its epicenter—the precise point where we must focus our attention—is the moment of state transition. It is the single line of code that decrements an inventory, the function that promotes a user to an admin, or the event that marks an invoice as paid. When we reason about a system’s safety and predictability, we are really reasoning about the integrity of these transitions.
The Art of Simplicity, therefore, is not about avoiding state, which is impossible in a non-trivial world. Instead, it is about constraining it. We must protect the integrity of our system’s state by defining and rigorously enforcing invariants—strong, unyielding statements about what must always be true, especially during a transition.
An invariant is the most potent form of the “strong statements” we discussed previously. It is a declaration that carves out islands of predictability in an ocean of non-triviality. Consider these invariants:
An account balance can never become negative.
An order cannot be shipped if it has not been paid for.
A user with ‘read-only’ permissions can never execute a ‘write’ operation.
These are the load-bearing walls of our application’s logic. By making them explicit and unbreakable, we gain confidence not just in what our system will do, but more importantly, in what it will never do.
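One way to make such an invariant explicit and unbreakable is to enforce it inside the state transition itself, so that no caller can bypass it. The class and exception below are illustrative sketches of the first invariant, not code from the article:

```python
class InsufficientFunds(Exception):
    """Raised when a withdrawal would violate the balance invariant."""

class BankAccount:
    def __init__(self, balance: int = 0) -> None:
        if balance < 0:
            raise ValueError("balance must be non-negative")
        self._balance = balance

    @property
    def balance(self) -> int:
        return self._balance

    def withdraw(self, amount: int) -> None:
        # Invariant: the balance can never become negative.
        # The check lives inside the transition, so it cannot be skipped.
        if amount > self._balance:
            raise InsufficientFunds(f"cannot withdraw {amount}")
        self._balance -= amount
```

Because the only way to mutate the balance is through a method that checks the invariant, we can reason locally: whatever history the account has accumulated, a negative balance is impossible.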
Acknowledging that our systems are non-trivial is the first step. Learning to define their essential invariants is the second. In the next post, we will explore how to design these into the very structure of our code, giving our system’s complexity a home.