Friday, August 19, 2011

Engineering emergent behavior

Complex adaptive systems seem to be all the rage now.  This month's Harvard Business Review has several articles on managing complexity.   It has become fashionable to use computational complexity to explain failure or lack of engineering control in diverse areas, ranging from derivatives pricing to strong AI.

All of this is fun and interesting, though not exactly new.  It was new in the early 1960s, when Lorenz demonstrated that the long-term behavior of the weather is practically impossible to predict.  Or in the 1970s, when the "chaos cabal" began developing the ideas of self-organization and emergent behavior in nonlinear dynamical systems.

So what does all of this mean to us today?  In particular, how exactly are policy-makers and leaders supposed to "embrace complexity"?  The HBR articles make some reasonable practical recommendations that I am not going to repeat.  What I am interested in here is the more general question: what does it mean to try to engineer or control emergent behavior, and how do the systems that we inhabit need to change to support meaningful attempts at it?

Unexpected outcomes are to be expected.
Emergent behavior is by definition unpredictable.  Broad patterns can be anticipated and to some extent engineered; but new and different behaviors need to be understood and exploited, rather than suppressed in the name of "minimizing variance" or micro-level predictability.  In a recent article on open source product evolution, Tarus Balog talks about one way to cultivate what might be called emergent value by means of what he calls the "practice effect" - release early, release often and let the community work with and extend the product.  What this comes down to is presenting the interacting agents that make up a complex adaptive system with opportunities for productive evolution, rather than trying to push completely predetermined outcomes through the system.  Commercial companies trying to leverage open source are learning this art, and open source community leaders are learning broadly applicable lessons from the same experience.
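To get a feel for how much the interacting agents themselves matter, here is a toy illustration (my own, not from Balog's article or HBR): elementary cellular automaton rule 110.  Every cell applies the same trivial three-neighbor lookup rule, yet the global pattern that unfolds is famously complex - nothing in the rule, or in any single cell, "contains" the outcome.

    # Emergence in miniature: elementary cellular automaton rule 110.
    # Each cell follows the same trivial local rule, yet the global pattern
    # that unfolds is famously complex - nothing in the rule "plans" it.
    # (The rule number, grid width and step count are arbitrary choices.)

    RULE = 110  # the local update rule, encoded as an 8-bit lookup table

    def step(cells):
        """Update every cell from its 3-cell neighborhood (wrapping at edges)."""
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                          + cells[(i + 1) % n])) & 1 for i in range(n)]

    cells = [0] * 79 + [1]  # start from a single live cell
    for _ in range(40):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

The analogy is loose, of course - communities are not lattice cells - but it makes the point that the interesting global behavior lives in the interactions, not in any individual plan.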

Extreme sensitivity to initial conditions means small changes can create big effects.
Lorenz famously named this the "butterfly effect."  It is the essence of why the long-term behavior of chaotic dynamical systems is not practically predictable, and it is what makes managing complex adaptive systems extremely difficult.  The best strategy for dealing with this again comes from open source - move things along in what Stefano Mazzocchi dubbed "small, reversible steps."  At the Apache Software Foundation, most projects follow a process that we call "commit, then review."  Committers make changes to the source code and then the community reviews the changes.  "Patches" (where Apache got its name) get applied visibly in small increments and the community provides feedback.  If the feedback is not good, the patches get rolled back.  Big bang commits of very large amounts of code or sweeping changes are rare.  Proceeding via small, reversible steps - observing effects while reversal is still cheap - is a great strategy for dealing with complex adaptive systems whenever it is practical.
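To see the sensitivity concretely, the sketch below integrates the classic Lorenz system twice from starting points that differ by one part in a million.  (The crude Euler integrator, the step size and the run length are my illustrative choices; only sigma, rho and beta are Lorenz's.)

    # Integrate the Lorenz system twice from initial conditions that differ
    # by one part in a million and track how far apart the trajectories get.
    # (Crude Euler integration; dt and the run length are illustrative.)

    def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One Euler step of the Lorenz equations."""
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-6, 1.0, 1.0)  # perturbed by one part in a million

    for i in range(40001):
        if i % 5000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {i * 0.001:5.1f}   separation = {gap:.6f}")
        a = lorenz_step(*a)
        b = lorenz_step(*b)

The separation grows by orders of magnitude until the two trajectories bear no useful relationship to each other - which is exactly why observing effects while reversal is still cheap beats betting everything on a long-range forecast.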

There is no kernel.
The traditional systems engineering model starts with a "kernel" and builds manageable complexity on top of it.  Linux and the TCP/IP protocol stack are great examples: starting with a stable, solid foundation, we build specialized and advanced capabilities on top.  Governments, policies and organizations can be viewed the same way, with a stable, centralized core forming the basis for bureaucratically articulated extension points.  Complex adaptive systems have no kernels.  Their core dynamics are driven by networks of relationships that evolve over time.  That means that to effectively drive change in these systems, you need to focus not on top-down, centralized programs but on decentralized, connection-oriented interventions.  See this article for an interesting analysis of a practical example in education reform.  As a side note, check out this fascinating analysis of how TCP/IP has itself evolved (or resisted evolution).
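As a rough illustration of why connection-oriented interventions win (this is my own toy model, not the analysis in the article linked above): give a "central program" and a distributed intervention the same budget of five seed nodes on a random social network, then let a simple independent-cascade diffusion run for a few rounds.  Every parameter here is arbitrary.

    # Same budget, two strategies: 5 seeds packed around one hub versus
    # 5 seeds scattered across a random social network, each followed by a
    # few rounds of independent-cascade diffusion.  Parameters are arbitrary.

    import random

    random.seed(7)
    N, DEGREE, P_SPREAD, ROUNDS, TRIALS = 300, 4, 0.25, 4, 300

    # Build a random undirected graph: every node gets at least DEGREE neighbors.
    neighbors = {i: set() for i in range(N)}
    for i in range(N):
        while len(neighbors[i]) < DEGREE:
            j = random.randrange(N)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)

    def cascade(seeds):
        """Count adopters after ROUNDS rounds of probabilistic spread."""
        adopted, frontier = set(seeds), set(seeds)
        for _ in range(ROUNDS):
            frontier = {j for i in frontier for j in neighbors[i]
                        if j not in adopted and random.random() < P_SPREAD}
            adopted |= frontier
        return len(adopted)

    def clustered_seeds():   # a "central program": one hub and its circle
        hub = random.randrange(N)
        return [hub] + sorted(neighbors[hub])[:4]

    def scattered_seeds():   # a distributed, connection-oriented intervention
        return random.sample(range(N), 5)

    for name, strategy in [("clustered", clustered_seeds),
                           ("scattered", scattered_seeds)]:
        mean = sum(cascade(strategy()) for _ in range(TRIALS)) / TRIALS
        print(f"{name} seeding reaches {mean:.1f} nodes on average")

Packing the seeds around a single hub wastes most of the budget on overlapping reach; scattering them across the network's relationships spreads the same change much further for the same cost.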

Broad and concurrent search for solutions.
Engineering well-conditioned, deterministic systems always comes down to posing and solving optimization problems sequentially.  Public policy and strategic planning in the command-and-control model of corporate governance have traditionally mimicked this approach.  Form a task force or planning team to analyze a problem.  Develop a plan.  Mobilize the organization to execute.  If things go awry, revisit the plan and try something else.  In computer science terms, this could broadly be described as depth-first search.  Given the weak and only extremely local engineering control available in complex adaptive systems, this approach tends to fail miserably.  What is needed is massively concurrent, breadth-first search.  Instead of a centralized, top-down approach to engineering emergent behavior, a loosely federated approach distributed across interaction points gets to better solutions faster.  Google works this way both literally, as a search technology, and by some accounts as an organization as well.
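In code, the contrast looks something like this (a sketch of the analogy, not of anything Google actually does): give one greedy hill-climber a long run on a rugged objective, and give the same total budget to fifty short, independent climbs started at scattered points.  The landscape and every constant here are made up for illustration.

    # One long, sequential search versus many short, concurrent searches on
    # a deliberately rugged objective.  Same total budget either way; the
    # landscape and all the constants are made-up illustrations.

    import math
    import random

    random.seed(42)

    def fitness(x):
        """A rugged landscape: many local peaks of different heights."""
        return math.sin(x) + 0.6 * math.sin(3.1 * x) + 0.3 * math.sin(9.7 * x)

    def hill_climb(x, steps):
        """Greedy local search: accept a small random nudge only if it helps."""
        for _ in range(steps):
            candidate = x + random.uniform(-0.05, 0.05)
            if 0 <= candidate <= 10 and fitness(candidate) > fitness(x):
                x = candidate
        return x

    # "Depth-first": one planning effort, pursued at length from one start.
    deep = hill_climb(random.uniform(0, 10), steps=5000)

    # "Breadth-first": fifty short, independent efforts; keep the best.
    broad = max((hill_climb(random.uniform(0, 10), steps=100)
                 for _ in range(50)), key=fitness)

    print(f"one deep search     : fitness = {fitness(deep):.3f}")
    print(f"fifty broad searches: fitness = {fitness(broad):.3f}")

The single deep search polishes whatever local peak it happens to start near; the federated search almost always turns up a higher peak, because it samples many basins concurrently.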