Beautiful Complexity: Curb your Reductionism
Complexity has a bad reputation with programmers. At best, we hear the word and think of ‘computational complexity’: NP-completeness, Big O notation…a means of classifying the inherent difficulty of computational problems in terms of determinism, runtime performance, and space consumed in memory or on disk. “This thing runs in exponential time…if we don’t find something that scales logarithmically, this is never going to work for anything other than trivial inputs.” Complexity in this sense addresses the inherent difficulty of a problem, outlining theoretical limits for how efficient an algorithm can be. For NP-complete problems like the traveling salesman problem (TSP), approximation is often a pragmatic and effective alternative. Engineers need to be aware of the constraints of certain problems, but this responsibility can be managed relatively easily with practice. In short, this is not the complexity that people generally worry about.
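To make that tradeoff concrete, here is a minimal sketch in Clojure (the language used in the workshop mentioned later): an exact TSP answer requires examining every permutation of cities, while a greedy nearest-neighbor heuristic builds a decent tour in polynomial time. The cities and coordinates are invented for illustration.

```clojure
;; A rough sketch, not production code: exact TSP is exponential, but a
;; greedy nearest-neighbor heuristic gives a usable tour in polynomial time.
;; The cities and coordinates below are invented for illustration.
(def cities {:a [0 0] :b [3 4] :c [6 0] :d [3 -4]})

(defn dist [p q]
  (Math/sqrt (reduce + (map #(* % %) (map - p q)))))

(defn nearest-neighbor-tour [cities start]
  (loop [tour [start]
         remaining (disj (set (keys cities)) start)]
    (if (empty? remaining)
      tour
      (let [here (cities (peek tour))
            next-city (apply min-key #(dist here (cities %)) remaining)]
        (recur (conj tour next-city) (disj remaining next-city))))))

;; (nearest-neighbor-tour cities :a)
;; => a tour visiting each city once, e.g. [:a :b :c :d]
```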
More often, ‘complex’ is used pejoratively to describe a system that’s unnecessarily difficult to work with, and may be downright incomprehensible: A Rube Goldberg machine caused by over-engineering and scope creep, a combinatorial explosion of mutable state, or a tangled nest of concerns that places tremendous demands on developers who find themselves needing to understand a large, coupled codebase and endure long feedback cycles for even a small change. This is the “Out of the Tar Pit” complexity that developers have been enjoined to avoid at all costs for decades, with Unix as our roadmap for simplicity, modularity, and clarity.
As such, developers strive for ‘simple’ solutions. Eliminating a combinatorial explosion of possible outcomes caused by mutable program state and an entanglement of leaky abstractions is absolutely essential. Rich Hickey’s “Simple Made Easy” talk at StrangeLoop 2011 primarily addresses “Tar Pit” complexity, but alludes to a third type, one that is inherent in many of the world’s most challenging problems: “You want to start seeing interconnections between things that could be independent.” That is, after you’ve eliminated state and over-engineering, you still need to separate concerns and create modular components that have a single reason to change and that require a limited number of people to manage the change. As you separate concerns, you think of interfaces that should exist between the related modules, services, or systems to create abstractions and hide internal implementation details.
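As a small, hypothetical illustration of what those interfaces can look like in code, a Clojure protocol lets callers depend on a narrow contract while the implementation behind it stays free to change. The names below are invented, not taken from any real system.

```clojure
;; A hypothetical example: callers depend only on the protocol, so the
;; storage implementation can change without the change rippling outward.
(defprotocol OrderRepository
  (save-order [repo order])
  (find-order [repo id]))

;; One concern, one reason to change: an in-memory implementation,
;; swappable later for a database-backed record satisfying the same protocol.
(defrecord InMemoryOrders [store]
  OrderRepository
  (save-order [_ order] (swap! store assoc (:id order) order))
  (find-order [_ id] (get @store id)))

;; (def repo (->InMemoryOrders (atom {})))
;; (save-order repo {:id 1 :total 42})
;; (find-order repo 1) => {:id 1 :total 42}
```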
Fundamentally, why is this separation of concerns so important? Can we approach this scientifically, quantitatively, and not just anecdotally? Does this separation of concerns and modularity show up in systems outside of software development? How does this apply to SOA and microservices? Where does Docker fit in?
In searching for the answers, developers will likely need to venture into some unfamiliar territory. This can cause some anxiety: Persist! As Rich points out, the quest for ‘simplicity’ in software development is often conflated with ‘easiness’, which implies familiarity and proficiency with certain tools or processes, agile and otherwise. Developers who have been inculcated with the search for ‘simple’ solutions can find themselves with the opposite outcome: When they detect unfamiliarity, discomfort, and difficulty, they think ‘complexity!’ and retreat to the familiar. Rather than make the effort to learn new concepts and approaches to system design that conflict with their experience, they opt to ‘cut scope’ and ‘be pragmatic’. This can be a Procrustean bed: Solutions that would be a great fit for the team and the problem don’t get explored, and the design is amputated to fit the biases and expectations close at hand.
I assert that we need to put the baggage of complexity-as-difficulty behind us and understand that there is an entire body of scientific work called ‘complex systems’ that addresses the emergent properties that arise from the “interconnections between things”. In particular, network science is a branch of complex systems that offers several insights and tools for software developers building large, interconnected, distributed systems. Network science is “the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena.” A network is a graph that represents something real, not the sterile, random, meaningless graphs you may have seen in school to practice your search algorithms. Real-world networks display surprising patterns independent of the domain-specific meaning of the vertices and edges. These patterns are not random and provide predictive power for real-world phenomena.
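One of the simplest ways to see such a pattern is the degree distribution: count how many edges touch each vertex and look at how those counts are spread. A rough Clojure sketch, using an invented edge list:

```clojure
;; Degree distribution of an undirected network given its edge list.
;; The edges here are invented; in real-world networks the counts tend to
;; be highly skewed: a few heavily connected hubs, many sparse vertices.
(def edges [[:a :b] [:a :c] [:a :d] [:a :e] [:b :c] [:d :e]])

(defn degrees [edges]
  (frequencies (mapcat identity edges)))

(defn degree-distribution [edges]
  (frequencies (vals (degrees edges))))

;; (degrees edges)             => {:a 4, :b 2, :c 2, :d 2, :e 2}
;; (degree-distribution edges) => {4 1, 2 4}   ; one hub, four others
```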
Albert-László Barabási is a physicist and one of the earliest contributors to the mathematical and conceptual foundations of the discipline. In 2009, ten years after the publication of a seminal paper on the scale-free nature of networks that helped define the field, he wrote in Science:
All systems perceived to be complex, from the cell to the Internet and from social to economic systems, consist of an extraordinarily large number of components that interact via intricate networks. To be sure, we were aware of these networks before. Yet, only recently have we acquired the data and tools to probe their topology, helping us realize that the underlying connectivity has such a strong impact on a system’s behavior that no approach to complex systems can succeed unless it exploits the network topology.
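The ‘scale-free’ structure Barabási describes can be grown with surprisingly little machinery. The sketch below is not his exact model, just the core preferential-attachment idea: each new vertex attaches to an existing vertex chosen in proportion to that vertex’s current degree, so well-connected vertices keep attracting connections.

```clojure
;; Toy preferential attachment, not the full Barabási–Albert model: each
;; new vertex links to an existing vertex picked uniformly from the
;; flattened edge list, which weights the choice by current degree.
(defn preferential-attachment [n]
  (reduce (fn [edges v]
            (conj edges [v (rand-nth (mapcat identity edges))]))
          [[1 0]]
          (range 2 n)))

;; Degree counts for a 1000-vertex run show a heavy tail:
;; (sort-by val > (frequencies (mapcat identity (preferential-attachment 1000))))
;; => a handful of hubs, while most vertices have only one or two edges.
```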
The science of complex systems is a much broader field of which network science is one branch. According to NECSI:
Complex Systems is a new field of science studying how parts of a system give rise to the collective behaviors of the system, and how the system interacts with its environment. Social systems formed (in part) out of people, the brain formed out of neurons, molecules formed out of atoms, the weather formed out of air flows are all examples of complex systems. The field of complex systems cuts across all traditional disciplines of science, as well as engineering, management, and medicine. It focuses on certain questions about parts, wholes and relationships. These questions are relevant to all traditional fields.
Software development happens in a complex adaptive system involving inherent difficulty: Abstract, ill-defined, interdependent problems need to be solved within budgetary and scheduling constraints by interdisciplinary groups of individuals following their own strategies and biases while responding to the actions of others. As if this weren’t challenging enough, software development is a relatively new activity in human civilization: What few proven solutions do exist often lack precedent. Achieving ‘reuse’ of working solutions has proven elusive. Searching for simplicity, many developers fall into the trap of reductionist thinking, creating monolithic systems that can’t be modified by small teams in small batches and that use legacy technologies in the name of consistency and fewer parts (though the ones remaining are commensurately larger). The emergent properties that arise from interactions in networks may be the essential complexity that software developers must manage in order to better describe, measure, model, and understand the systems they build and the organizations in which they work. I believe we need to push past the fear of novelty and develop resources to start investigating what network science has to offer to software development.
To that end, I had the pleasure of presenting some of these ideas to the fine folks at 8th Light last week. I have the privilege of working with them this year as a mentor in their new “Master Cohort Program”. 8th Light embraces the inherent complexity of software development through apprenticeship: Rather than take a reductionist approach and pursue vanity prizes like certifications, 8th Lighters work with one another outside of their professional services engagements to constantly learn new things about development through collaborative open-source projects and “breakable toys” that ground their study in a context of real users with real problems.
In some cases, that might mean mastering Clojure macros in order to have them readily available when the opportunity presents itself to create a specialized DSL that eases reasoning about certain problems. For the Master Cohort Program, it means learning new ideas from people working on a diverse set of projects: I’m joined by Carin Meier, Dean Wampler, and Micah Martin. We’ll each be presenting quarterly talks and workshops, which gives 8th Lighters time with one of us each month this year.
I presented a workshop on the topics in this article on Friday, April 17, with my wife, Sarah Aslanifar, and ten 8th Lighters. Using yEd and Clojure, we dove into modeling some of the social and technical networks formed by the group and the projects and technologies on which they’d worked. In the afternoon, I gave a keynote to a larger group of about 50 people. The slides are organized Lessig-style: Consider them a visual companion to this essay.
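For readers who want to try a similar exercise, here is roughly the shape of it, sketched from scratch rather than copied from the workshop material: build a small people-to-project network in Clojure and write it out as a .tgf (Trivial Graph Format) file, which yEd can open and lay out. The people and projects are invented placeholders.

```clojure
;; Sketch: a people<->project network written out as Trivial Graph Format
;; (.tgf), which yEd can open and lay out. Names are invented placeholders.
(def worked-on
  {"Alice" ["billing" "search"]
   "Bob"   ["search"]
   "Carol" ["billing" "deploy-tools"]})

(defn ->tgf [m]
  (let [nodes (distinct (concat (keys m) (mapcat val m)))
        ids   (zipmap nodes (rest (range)))]          ; 1-based node ids
    (str (apply str (for [[label id] (sort-by val ids)]
                      (str id " " label "\n")))
         "#\n"
         (apply str (for [[person projects] m, p projects]
                      (str (ids person) " " (ids p) "\n"))))))

;; (spit "network.tgf" (->tgf worked-on))   ; then File > Open in yEd
```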
I have spent the last six years of a seventeen-year software development career solving engineering and technical leadership problems using insights inspired by network science, from graph databases to modeling software architecture and team structure with network visualization tools like yEd. I have begun documenting some of what I’ve learned and have recently started investigating how we can mine software development artifacts for insight.
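As one hint of what mining those artifacts can look like, the version control log is a low-tech starting point: files that repeatedly change in the same commit are implicitly coupled, and those co-change pairs form a network worth visualizing. The sketch below is hypothetical; it assumes the output of something like `git log --name-only --pretty=format:%h` has been saved to a file, and the parsing may need adjusting for your git version and log format.

```clojure
;; Sketch: a co-change network from a saved git log. Files touched in the
;; same commit get an edge; the count is how often they change together.
(require '[clojure.string :as str])

(defn commits->file-sets [log-text]
  ;; Assumes commits are separated by blank lines and that the first
  ;; non-blank line of each chunk is the commit hash, the rest file paths.
  (for [chunk (str/split log-text #"\n\n+")
        :let [lines (remove str/blank? (str/split-lines chunk))]
        :when (seq (rest lines))]
    (set (rest lines))))

(defn file-pairs [files]
  (for [a files, b files :when (pos? (compare a b))] [a b]))

(defn co-change-counts [file-sets]
  (frequencies (mapcat file-pairs file-sets)))

;; (take 5 (sort-by val > (co-change-counts
;;                          (commits->file-sets (slurp "git-log.txt")))))
;; => the five most strongly coupled file pairs
```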
This essay is intended to serve as a brief introduction to some of these ideas. I intend to continue devoting time this year to sharing my results and collaborating with other developers and scientists interested in this approach. Watch this space and @bobbynorton. I hope that like me, you come to find that complexity, the essential kind that arises from a network of interactions, is inspiring, insightful, surprising, ubiquitous…and most importantly for engineers…pragmatic and useful.
Until then: Curb your reductionism!