Archive
Additive Vs Multiplicative
You may not believe this, but overall, I like “agile” management and coding practices, where they fit. The most glaring shortcoming that I, and perhaps only I, perceive in the “agile” body of knowledge is the dearth of guidance for handling both the social and technical dependencies present in every software development endeavor. The larger the project (or whatever you #noprojects community members want to call it), the more tangled the inter-dependencies become. There are, simultaneously: social couplings, technical couplings, and the nastiest type of coupling of all: socio-technical couplings.
The best analogy I can think of to express my thoughts on the agile dependencies black hole is the additive vs. multiplicative complexity conundrum as expressed so elegantly by Bertrand Meyer in his book, “Agile!”. But instead of the family-friendly linguini and lasagna images Mr. Meyer employs in his book, I, of course, choose to use imagery more in line with the blasphemous theme of this blog:
The complexity of the system is at least equal to the product of the problem and solution complexities. At worst, there are exponents associated with one or both multiplicands.
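In rough symbolic form (my notation, not Meyer’s), the claim looks like this:

```latex
% My notation, not Meyer's: system complexity is bounded below by the
% product of problem and solution complexity; in the worst case the
% multiplicands carry exponents greater than one.
C_{\mathrm{system}} \;\ge\; C_{\mathrm{problem}} \times C_{\mathrm{solution}},
\qquad \text{worst case:}\quad
C_{\mathrm{system}} \;\sim\; C_{\mathrm{problem}}^{\,a} \times C_{\mathrm{solution}}^{\,b},
\quad a, b > 1
```

The point of writing it this way is that complexities multiply rather than add, so taming either multiplicand pays off disproportionately.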
No More JAMB On My Toast
Amazon just sent me a recommendation for this book on the management of complexity:
Since four out of five reviewers gave it 5 stars, I scrolled down to peruse the reviews. As soon as I read the following JAMB review, I knew exactly what the reviewer was talking about. I can’t even begin to count how many boring, disappointing management books I’ve read over the years that fit the description. What I do know is that I don’t want to spend any more money or time on gobbledygook like this.
On Complexity And Goodness
While browsing around on Amazon.com for more books to read on simplicity/complexity, the pleasant memory of reading Dan Ward’s terrific little book, “The Simplicity Cycle”, somehow popped into my head. Since it has been 10 years since I read it, I decided to dig it up and re-read it.
In his little gem, Dan explores the relationships between complexity, goodness, and time. He starts out by showing this little graph, and then he spends the rest of the book eloquently explaining movements through the complexity-goodness space.
First things first. Let’s look at Mr. Ward’s parsimonious definitions of system complexity and goodness:
Complexity: Consisting of interconnected parts. Lots of interconnected parts equal a high degree of complexity; few interconnected parts equal a low degree of complexity.
Goodness: Operational functionality or utility or understandability or design maturity or beauty.
Granted, these definitions are just about as abstract as we can imagine, but (always) remember that context is everything:
The number 100 is intrinsically neither large nor small. 100 interconnected parts is a lot if we’re talking about a pencil sharpener, but few if we’re talking about a jet aircraft. – Dan Ward
When we start designing a system, we have no parts, no complexity (save for that in our heads), no goodness. Thus, we begin our effort close to the origin in the complexity-goodness space.
As we iteratively design/build our system, we conceive of parts and we connect them together, adding more parts as we continuously discover, learn, employ our knowledge of, and apply our design expertise to the problem at hand. Thus, we start moving out from the origin, increasing the complexity and (hopefully!) goodness of our baby as we go. The skills we apply at this stage of development are “learning and genesis”.
At a certain point in time during our effort, we hit a wall. The “increasing complexity increases goodness” relationship insidiously morphs into an “increasing complexity decreases goodness” relationship. We start veering off to the left in the complexity-goodness space:
Many designers, perhaps most, don’t realize they’ve rotated the vector to the left. They continue adding complexity without realizing they’re decreasing goodness.
We can often justify adding new parts independently, but each exists within the context of a larger system. We need to take a system level perspective when determining whether a component increases or decreases goodness. – Dan Ward
Once we hit the invisible but surely present wall, the only way to further increase goodness is to somehow start reducing complexity. We can do this by putting our “learning and genesis” skills on the shelf and switching over to our vastly underutilized “unlearning and synthesis” skills. Instead of creating and adding new parts, we need to reduce the part count by integrating some of the parts and discarding others that aren’t pulling their weight.
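Ward’s rise-then-fall dynamic can be sketched with a toy goodness function (the coefficients below are my invention, not anything from the book): utility grows linearly with the part count while the interconnection burden grows quadratically, so goodness climbs, hits the wall, and then declines.

```python
# Illustrative only: the coefficients below are arbitrary, not Ward's.
# Each part adds some utility, but the interconnections among parts
# impose a burden that grows roughly with the square of the part count.

def goodness(parts):
    utility = 5.0 * parts           # each part contributes utility
    burden = 0.1 * parts * parts    # interconnection burden compounds
    return utility - burden

curve = [goodness(p) for p in range(60)]
wall = curve.index(max(curve))      # where more parts stop helping

print(wall)                         # 25 -- past this, more parts means less goodness
print(curve[40] < curve[wall])      # True: complexity up, goodness down
```

Past the wall, the only move that raises goodness is the one Ward prescribes: reduce the part count.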
Perfection is achieved not when there is nothing more to add, but rather when there is nothing more to take away. – Antoine de Saint-Exupéry
Dan’s explanation of the complexity-goodness dynamic is consistent with Joseph Tainter’s account in “The Collapse Of Complex Societies”. Mr. Tainter’s thesis is that as societies grow, they prosper by investing in complexity, adding layer upon layer of it to the system. However, there is an often unseen downside at work during the process. Over time, the Return On Investment (ROI) in complexity starts to decrease in accordance with the law of diminishing returns. Eventually, further investment depletes the treasury while injecting more and more complexity into the system without adding commensurate “goodness”. The society becomes vulnerable to a “black swan” event, and when the swan paddles onto the scene, there are not enough resources left to recover from the calamity. It’s collapse city.
The only way out of the runaway increasing complexity dilemma is for the system’s stewards to conscientiously start reducing the tangled mess of complexity: integrating overlapping parts, fusing tightly coupled structures, and removing useless or no-longer-useful elements. However, since the biggest beneficiaries of increasing complexity are the stewards of the system themselves, the likelihood of an intervention taking place before a black swan’s arrival on the scene is low.
At the end of his book, Mr. Ward presents a few patterns of activity in the complexity-goodness space, two of which align with Mr. Tainter’s theory. Perhaps the one on the left should be renamed “Collapse”?
So, what does all this made up BD00 complexity-goodness-collapse crap mean to me in my little world (and perhaps to you)? In my work as a software developer, when my intuition starts whispering in my ear that my architecture/sub-designs/code are starting to exceed my capacity to understand the product, I fight the urge to ignore it. I listen to that voice and do my best to suppress the mighty, culturally inculcated urge to over-learn, over-create, and over-complexify. I grudgingly bench my “learning and genesis” skills and put my “unlearning and synthesis” skills in the game.
Encroach And Dominate
Systems tend to grow, and as they grow, they encroach – John Gall (The Systems Bible)
Complex societies, once established, tend to expand and dominate – Joseph Tainter (The Collapse Of Complex Societies)
Related articles
- The Gall Of That Man! (bulldozer00.com)
- It’s Systems As Such (bulldozer00.com)
Tainted Themes
In “The Collapse of Complex Societies”, Joseph Tainter defines “complexity” as:
Complexity is generally understood to refer to such things as the size of a society, the number and distinctiveness of its parts, the variety of specialized social roles that it incorporates, the number of distinct social personalities present, and the variety of mechanisms for organizing these into a coherent, functioning whole. – Joseph Tainter
Joseph then defines “collapse” as:
A society has collapsed when it displays a rapid, significant loss of an established level of sociopolitical complexity. Collapse then is not a fall to some primordial chaos, but a return to the normal human condition of lower complexity. – Joseph Tainter
Mr. Tainter then surveys the landscape of historic, archaeological, and anthropological literature explaining the collapse of societies like the western Roman empire, the Mayans, the Hittites, the Mycenaeans, etc. He groups the theories of collapse into 11 “tainted” themes:
Joseph then skillfully makes a case that all these specialized themes can be subsumed under his simple and universal theory of collapse:
His theory goes something like this: as a society grows, it necessarily becomes more complex. Ever more investment in complexity (infrastructure, basic services, defense, food production control, public tributes to the elite to maintain their legitimacy in the minds of the non-elites) is then required to hold the society together. However, as the graphic below shows, at some point in time the marginal return on investments in complexity reaches a tipping point, at which the society becomes vulnerable to collapse from one or more of the subsumed themes.
To illustrate further, BD00 presents the dorky growth-to-collapse scenario below:
As shown, societal growth begets a larger and more internally diverse production subsystem. That same growth requires investment in more and varied control (Ashby’s law of requisite variety) over production so that “the center can hold” and the society can “retain its identity as a whole”. In this runaway positive feedback system, the growing army of controller layers siphons off more and more of the production outputs for itself – starving the production subsystem in the process.
To prevent the production subsystem from dispersing (or revolting) and keep the whole system growing, more and more investment is poured into production control (compliance and efficiency) in an attempt to increase output and keep both the production and control subsystems viable. However, as the control subsystem growth outpaces production subsystem growth and a caste system emerges, the control subsystem requires a larger and larger share of the production subsystem outputs for itself – which further weakens, constrains, and alienates the production subsystem. Hence, the “declining marginal return on investment in complexity” machine is kicked into overdrive and the vulnerability to collapse appears on the horizon. D’oh!
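The feedback loop just described can be run as a toy simulation. All the rates below are invented for illustration; nothing here comes from Tainter’s book. Each step: producers generate output, the control layer siphons off upkeep proportional to its own size, the surplus is reinvested in production, and control grows faster than production.

```python
# Toy growth-to-collapse simulation (all coefficients are made up).
production = 100.0   # size of the production subsystem
control = 10.0       # size of the control/administration subsystem

net_history = []
for year in range(50):
    output = production            # gross output of the producers
    upkeep = 0.8 * control         # share siphoned off by the controllers
    net = output - upkeep          # surplus left to reinvest in production
    production = max(production + 0.05 * net, 0.0)
    control *= 1.10                # control layers compound faster
    net_history.append(net)

peak = net_history.index(max(net_history))
print(peak)                     # net surplus peaks partway through the run
print(net_history[-1] < 0)      # True: by the end, upkeep exceeds output
```

The surplus rises while production can still outrun its overhead, peaks, and then goes negative once the compounding control layer outgrows the producers – the “declining marginal return” in miniature.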
In your growing “society”, is the controller subsystem growing faster than the production subsystem? Are more specialized controller/administration layers being added faster than producers? Is the caste system becoming more stratified and prejudiced? Are more and more processes/rules/policies being imposed on the production subsystem for increased compliance and efficiency? Is the army of growing controllers siphoning off more and more of the production system outputs for themselves? If so, then maybe your society is vulnerable to sudden collapse. But then again, it may not be. Tainter’s thesis is simply a bland and drama-less, economically based theory. It might be tainted itself.
Say it taint so, Shoeless Joe! – unknown kid
The Gall Of That Man!
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. – John Gall (1975, p.71)
This law is essentially an argument in favour of underspecification: it can be used to explain the success of systems like the World Wide Web and Blogosphere, which grew from simple to complex systems incrementally, and the failure of systems like CORBA, which began with complex specifications. – Wikipedia
We can add the Strategic Defense Initiative (Star Wars), the FBI’s Virtual Case File (VCF) system, JTRS, FCS, and prolly a boatload of other high falutin’ defense projects to the list of wreckage triggered by violations of Gall’s law. Do you have any other majestic violations you’d like to share? Can you cite any counter-examples that attempt to refute the law?
One of the great tragedies of life is the murder of a beautiful theory by a gang of brutal facts – Benjamin Franklin
C++, which started out simply as “C With Classes”, is a successful complex “system”. Java, which started out as a simple and pure object-oriented system, has evolved into a successful complex system that now includes a mix of functional and generic programming features. Linux, which started out as a simple college operating system project, has evolved into a monstrously successful complex system. DDS, which started out as a convergence of two similar, field-tested, pub-sub messaging implementations from Thales and RTI, has evolved into a successful complex system (in spite of being backed by the OMG). Do you have any other law-abiding citizens you’d like to share?
Gall’s law sounds like a, or thee, platform for Fred Brooks’ “plan to throw one away” admonition and Grady Booch’s “evolution through a series of stable intermediate forms” advice.
Here are two questions to ponder: Is your org in the process of trying to define/develop a grand system design from scratch? Scanning your project portfolio, can you definitively know if you’re about to, or currently are, attempting a frontal assault on Gall’s galling law – and would it matter if you did know?
A Hoarrific Failure
Work started (on the 503 Mark II software system) with a team of fifteen programmers and the deadline for delivery was set some eighteen months ahead in March 1965.
Although I was still managerially responsible for the 503 Mark II software, I gave it less attention than the company’s new products and almost failed to notice when the deadline for its delivery passed without event.
The programmers revised their implementation schedules and a new delivery date was set some three months ahead in June 1965. Needless to say, that day also passed without event.
I asked the senior programmers once again to draw up revised schedules, which again showed that the software could be delivered within another three months. I desperately wanted to believe it but I just could not. I disregarded the schedules and began to dig more deeply into the project.
The entire Elliott 503 Mark II software project had to be abandoned, and with it, over thirty man-years of programming effort, equivalent to nearly one man’s active working life, and I was responsible, both as designer and as manager, for wasting it.
The above story synopsis was extracted from Tony “Quicksort” Hoare’s 1980 ACM Turing Award lecture.
Mr. Hoare’s classic speech is the source of a few great quotes that have transcended time:
I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult…. No committee will ever do this until it is too late.
A feature which is included before it is fully understood can never be removed later.
At first I hoped that such a technically unsound project would collapse but I soon realized it was doomed to success.
The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
The mistakes which have been made in the last twenty years are being repeated today on an even grander scale. (1980)
Dontcha think that last quote can be restated today as:
The mistakes which have been made in the last fifty years are being repeated today on an even grander scale.
Since man’s ability to cope with complexity is relentlessly being dwarfed by his propensity to create ever greater complexity, the same statement will probably still be true 50 years hence, no?
Three Degrees Of Distribution
Behold the un-credentialed and un-esteemed BD00’s taxonomy of software-intensive system complexity:
How many “M”s does the system you’re working on have? If the answer is three, should it really be two? If the answer is two, should it really be one? How do you know what number of “M”s your system design should have? When tacking on another “M” to your system design because you “have to”, what newly emergent property is the largest complexity magnifier?
Now, replace the inorganic legend at the top of the page with the following organic one and contemplate how the complexity and “success” curves are affected:
From Complexity To Simplicity
As the graphic below shows, when a system evolves, it tends to accrue more complexity – especially man-made systems. Thus, I was surprised to discover that the Scrum product development framework seems to have evolved in the opposite direction over time – from complexity toward simplicity.
The 1995 Ken Schwaber “Scrum Development Process” paper describes Scrum as:
Scrum is a management, enhancement, and maintenance methodology for an existing system or production prototype.
However, the 2011 Scrum Guide introduces Scrum as:
Scrum is a framework for developing and sustaining complex products.
Thus, according to its founding fathers, Scrum has transformed from a “methodology” into a “framework”.
Even though most people would probably agree that the term “framework” connotes more generality than the term “methodology”, it’s arguable whether a framework is simpler than a methodology. Nevertheless, as the figure below shows, I think that this is indeed the case for Scrum.
In 1995, Scrum was defined as having two bookend, waterfall-like events: PSA and Closure. As you can see, the 2011 definition does not include these bookends. For good or bad, Scrum has become simpler by shedding its junk in the trunk, no?
The most reliable part in a system is the one that is not there; because it isn’t needed. (Middle management?)
I think, but am not sure, that the PSA event was truncated from the Scrum definition in order to discourage inefficient BDUF (Big Design Up Front) from dominating a project. I also think, but am not sure, that the Closure event was jettisoned from Scrum to dispel the myth that there is a “100% done” time-point for the class of product developments Scrum targets. What do you think?
Related articles
- Scrum Cheat Sheet (bulldozer00.com)
- Ultimately And Unsurprisingly (bulldozer00.com)
- Introduction to scrum (slideshare.net)
- Scrum Master, a management position? (scrumguru.wordpress.com)
- Developing, and succeeding, with Scrum (techworld.com.au)