Open Minds, Open Source

How can giving up on central control, pre-planning, and the vertical command organization of software development produce better results? The answer is implicit in the way that the cost nonlinearities of scaling change the tradeoffs of complex systems.

Ask any architect. Have you ever wondered what the practical limit on the height of skyscrapers is? It turns out it's not the strength of materials, nor our ability to design very tall structures that are stable under load. It's elevators!

For a skyscraper to be useful, people have to be able to get in and out of it at least twice a day (four times if they eat lunch). The number of people a building must move is roughly proportional to its floor space, which rises linearly with its height for a fixed footprint. But every elevator shaft consumes floor area on each story it passes through, and every car's round trip gets longer as the building gets taller, so the number of shafts required grows roughly as the square of the height. Thus, as buildings get taller, a larger and larger percentage of the building core has to become elevators. At some critical height, so much of the building has to be elevators that the activity on the remaining floor space can't pay for any more of them. The communications overhead implied by the system's single choke point (the ground floor) crowds out production. Instead of building a taller skyscraper, you need several shorter buildings connected by a subway.
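To see the squeeze in numbers, here is a back-of-the-envelope model in Python. Every constant in it is an assumption invented for illustration, not engineering data, and occupancy is charged to gross floor area for simplicity (letting the shafts shrink the rentable area would only make the squeeze worse):

    import math

    # Toy model of the elevator bottleneck. All constants are illustrative
    # assumptions, not engineering data.
    GROSS_AREA = 2000.0   # m^2 of gross area per floor (assumed)
    SHAFT_AREA = 5.0      # m^2 one shaft occupies on *every* floor (assumed)
    DENSITY = 0.1         # occupants per m^2 (assumed)
    PEAK_MINUTES = 30.0   # length of the morning rush to be cleared (assumed)
    CAR_CAPACITY = 20     # people per elevator trip (assumed)
    TRIP_FIXED = 1.0      # minutes of loading/unloading per round trip (assumed)
    TRIP_PER_FLOOR = 0.1  # extra travel minutes per floor of height (assumed)

    for floors in (10, 30, 60, 100, 150):
        occupants = DENSITY * GROSS_AREA * floors          # grows linearly
        round_trip = TRIP_FIXED + TRIP_PER_FLOOR * floors  # grows with height
        per_shaft = CAR_CAPACITY * PEAK_MINUTES / round_trip
        shafts = math.ceil(occupants / per_shaft)          # grows ~quadratically
        core = shafts * SHAFT_AREA / GROSS_AREA            # share of each floor
        print(f"{floors:4d} floors: {shafts:4d} shafts, "
              f"{core:6.1%} of every floor is elevator core")

With these made-up numbers the elevator core eats under two percent of each floor at ten stories, about a third at sixty, and more than the entire floor somewhere past a hundred: linear growth in occupants multiplied by linear growth in round-trip time makes the shaft count grow roughly as the square of the height.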

Or, ask any economist. Today's slow-motion collapse of closed-source software development mirrors the collapse of central economic planning two decades ago, and proceeds from the same underlying problems. Command systems are poor at dealing with complexity and ambiguity; as complexity rises, it inevitably outstrips the coping capacity of the planners. As planning deteriorates, accelerating malinvestment pulls down the whole system. In economics, this is the end-stage of collectivism that F. A. Hayek correctly predicted in the 1930s, fifty years before it was acted out in the Soviet Union. In software development, we observe a similar tendency of planned systems to complexify until they collapse under their own weight.

Ecologists, too, have learned to respect the kind of decentralized self-organization that occurs at every level of living systems. The tremendous interwoven complexity of an ecology isn't designed; it doesn't happen because any central organizer planned a preconceived set of interactions between the different species that make it up. We know this because those interactions aren't even stable over historical, let alone evolutionary, time: climate fluctuations, predator-prey cycles, and sporadic events such as major fires or disease epidemics can and do change the rules at any time. Nevertheless, ecologies develop and sustain extremely rich interactions from the unscripted behavior of the selfish adaptive machines that compose them.

Ecologies, market economies, and open-source development all have crucial patterns in common; they are all examples of what computer scientist John Holland has called a “Complex Adaptive System” (CAS). CASs are composed of selfish adaptive agents that have only limited, local information about the state of the system. Their complexity arises not from global planning but as an unintended result of each agent's search for better, more competitive adaptive strategies. Global equilibrium and order at each level of a CAS emerge as what systems theorists call an “epiphenomenon”: organization that is not predictable from knowing only the rules of the next lower level. The information that sustains that organization is distributed and largely implicit in the evolved structure of the CAS itself, not explicit and centralized in the knowledge of any one agent.
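To make the abstraction concrete, here is a minimal CAS simulation in Python, in the spirit of W. Brian Arthur's “El Farol bar” problem, a standard CAS illustration (the rules and numbers below are my own assumptions, not Holland's formulation). Each selfish agent sees only the public attendance history, keeps a few private forecasting rules, and goes out only when the rule that has erred least for it so far predicts an uncrowded evening:

    import random

    random.seed(42)  # reproducible toy run
    N_AGENTS, CAPACITY, WEEKS = 100, 60, 300
    history = [random.randint(0, N_AGENTS) for _ in range(5)]  # seed weeks

    def make_rule():
        """A randomized forecaster: a k-week moving average plus a fixed bias."""
        k, bias = random.randint(1, 5), random.uniform(-15, 15)
        return lambda h: sum(h[-k:]) / k + bias

    agents = [{"rules": [make_rule() for _ in range(4)],
               "errors": [0.0] * 4} for _ in range(N_AGENTS)]

    for _ in range(WEEKS):
        guesses = [[rule(history) for rule in a["rules"]] for a in agents]
        # Each agent trusts its historically best rule and goes only if the
        # bar looks uncrowded; the sum of those selfish choices IS the
        # week's attendance.
        attendance = sum(
            g[min(range(4), key=a["errors"].__getitem__)] < CAPACITY
            for a, g in zip(agents, guesses))
        for a, g in zip(agents, guesses):  # every rule is scored every week
            for i in range(4):
                a["errors"][i] += abs(g[i] - attendance)
        history.append(attendance)

    print(f"mean attendance over the last 50 weeks: "
          f"{sum(history[-50:]) / 50:.1f} (capacity {CAPACITY})")

No agent is told the capacity and nobody coordinates, yet mean attendance tends to settle near it: the equilibrium exists only at the level of the whole population, not in any individual's head, which is exactly the epiphenomenal order described above.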

The distributed intelligence of CASs is, in fact, precisely why they both exhibit higher complexity and cope with complexity far more capably than centrally planned systems can. Distribution means there is no critical node, no single point of failure to be overwhelmed as the system scales up. Because the agents in such systems are constantly varying their adaptive behaviors in search of an edge on the competition, unpredictable stresses are far less likely to disrupt the CAS as a whole than they are to blindside a planned system whose planners are looking in the wrong direction.

The subtler lesson is that the full use of a new technology may demand new narratives, new ways of seeing the world; the technology itself doesn't automatically generate the narrative to go with it. Without the right enabling theory or generative myth to organize people's perceptions of otherwise isolated facts, even the most powerful set of innovations may languish in the margins of the economy for a long time. The Mayans had the wheel, but used it only for children's toys; they did real cargo hauling with drag sledges.

Hackers did open-source development as a folk practice for fifteen years before RMS tried to create a new way of seeing the world around it. The wrong explanatory myth (as in, arguably, RMS's moral crusade against intellectual property) may actually retard acceptance.