Dream, Mess, Catastrophe
To build high-quality, successful, long-lived, “Big” software, you must design it in terms of layers (that’s why the ISO OSI model for network architecture has 7 crisply defined layers). If you don’t leverage the tool of layering (and its close cousin – leveling) to manage complexity, then: your baby won’t have much conceptual integrity; you’ll go insane; and you’ll be the unproud owner of a big ball of mud that sucks down maintenance funds like a Dyson and may crumble to pieces at the slightest provocation. D’oh!
The figure below shows a reference model for a layered application. Note that even though we have a neat stack, we can’t tell if we have a winner on our hands.
By adding the inter-layer dependencies to the reference architecture, the true character of our software system will be revealed:
In the “Maintenance Dream”, the inter-layer APIs are crisply defined and empathetically exposed in the form of well-documented interfaces, abstractions, and code examples. The programmer(s) of a given layer only have to know what they must provide to the users above them and what the next layer down lovingly provides to them. Ah, life is good.
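To make the “Dream” concrete, here’s a minimal sketch in Java (all names are hypothetical, not taken from any particular system): each layer programs against the interface of the layer directly below it and exposes its own crisply defined interface to the layer above.

```java
// "Dream"-style layering: each layer knows only the layer directly below.

interface TransportLayer {                    // what the layer below provides
    void send(byte[] payload);
}

interface SessionLayer {                      // what this layer provides upward
    void sendMessage(String msg);
}

class SimpleSession implements SessionLayer {
    private final TransportLayer transport;   // the ONLY sub-layer knowledge needed

    SimpleSession(TransportLayer transport) {
        this.transport = transport;
    }

    @Override
    public void sendMessage(String msg) {
        transport.send(msg.getBytes());       // delegate downward through the API
    }
}
```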
Next, shuffle on over to the “Maintenance Mess”. Here, we have crisply defined layers, but the allocation of functionality to the layers has been hosed up (a violation of the principle of “leveling”) and there’s a beast in the making. Thus, in order for App Layer programmers to be productive, they have to stuff their heads with knowledge/understanding of all the sub-layer APIs to get their jobs done. Hopefully, their heads don’t explode and they don’t run for the exits.
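By contrast, here’s what the “Mess” tends to look like in code (again, hypothetical names, reusing the interfaces from the sketch above): misallocated functionality forces the app layer to know the API of every sub-layer, not just the one directly beneath it.

```java
// "Mess"-style layering: leveling violations drag every layer into view.

interface DriverLayer {                       // bottom layer the app should never see
    void resetNic();
}

class AppLayerMess {
    private final SessionLayer session;       // this should be enough...
    private final TransportLayer transport;   // ...but misplaced functionality
    private final DriverLayer driver;         //    drags in every layer below

    AppLayerMess(SessionLayer s, TransportLayer t, DriverLayer d) {
        this.session = s;
        this.transport = t;
        this.driver = d;
    }

    void sendOrder(String order) {
        driver.resetNic();                    // app code poking the bottom layer
        transport.send(order.getBytes());     // ...and bypassing the session layer
        session.sendMessage("audit: " + order);
    }
}
```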
Finally, skip on over to the (shhh!) “Maintenance Catastrophe”. Here, we have both a leveling mess and an incoherent set of inter-layer APIs that are incomprehensible to mere mortals. In the worst case: the layers aren’t discernible from one another; it takes “forever” to on-board new project members; it takes forever to fix bugs; it takes forever to add features; and it takes a heroic effort to keep the abomination alive and kicking. Double D’oh!
Forever == Lots Of Cash
Orgs that have only ever created “Maintenance Messes” and “Catastrophes” have never experienced a “Maintenance Dream”, so they think that high maintenance costs, busted schedules, and buggy releases are the norm. How do you explain the color green to someone who’s spent his/her whole life immersed in a world of red?
This is a difficult problem. Deterministic systems exist to fulfill a purpose. When that purpose is stable, layering and leveling work fine, largely because they are great tools for managing static complexity. Look at the ISO OSI 7-layer model: by the time it was formulated, network communication protocols were well understood, and the layers represented very stable abstractions.
When dynamic complexity enters into the picture, static methods like layering and leveling often aren’t enough. Dynamic complexity can occur in several ways:
1) significant changes in the purpose of the system or its environment over time
2) a product or system being used in different contexts of use (think of the SUV: city streets one day, off-road the next)
3) a combination of 1) and 2)
Whatever the cause, another perspective is needed – one that deals with dynamic complexity. Commonality/variability analysis is a good start. It adds another dimension to the separation-of-concerns strategy: isolate by layer, and seal off the volatility too. Service-oriented architectures are another technique.
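As a rough illustration (hypothetical names, not a prescription), here’s what sealing off a volatile concern behind a variability point can look like: the stable layer never changes when the volatile rules do.

```java
// A variability point: the volatile concern (pricing rules, say) is sealed
// off behind an interface, so the stable layer is untouched by new variants.

interface PricingPolicy {                     // the variability point
    double price(double base);
}

class FlatPricing implements PricingPolicy {  // one variant...
    @Override
    public double price(double base) { return base; }
}

class SurgePricing implements PricingPolicy { // ...another variant
    @Override
    public double price(double base) { return base * 1.5; }
}

class OrderService {                          // the common, stable part
    private final PricingPolicy policy;

    OrderService(PricingPolicy policy) {
        this.policy = policy;
    }

    double quote(double base) {
        return policy.price(base);            // delegates to whichever variant
    }
}
```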
Consider a high-rise in an earthquake zone. Layers and levels give you stable, well-balanced floors when the Richter scale reads 3 or less (static complexity). But when the tectonic plates get active (dynamic complexity), problems occur unless the building was designed to sway.
Thanks for the input, CA. I don’t know what you mean when you say that layering/leveling are “static methods”. Regardless, I still think that layering and leveling are key, canonical factors in the design and management of big systems. How one levels and layers a design should always be driven by both the static (structural) and dynamic (behavioral) requirements. Competent layering/leveling supports adapting to “significant changes in the purpose” by allowing *relatively* low-cost/low-risk replacement of existing layers and/or the addition of new “purposeful” layers. By “relative”, I mean relative to a mess or catastrophe as shown in the picture.
The desire/need for commonality/variability (product lines) should inform the specification and design of the lower layers so that (as Grady Booch has said) “there is no top”. Given a solid design/implementation of the lower, more general layers, with the right structural and behavioral characteristics for a niche domain, many different but niche-related “tops” can be plugged in quickly and coherently to create a family of products – and reap the benefit of their additional revenue streams. The best case study of a successful transition from a company of one-offs to an efficient product-line developer is that of Celsius Tech in the 90s. “Layering” was cited many times over as one of the key factors in the financial and technical success of their SS2000 product line.
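To illustrate the shape of the idea (this is not Celsius Tech’s actual SS2000 design – all names here are hypothetical): one stable set of lower layers, with several niche-related product “tops” plugged in on top of it.

```java
// "There is no top": several product-line members share the same base.

interface PlatformServices {                  // the lower, more general layers' API
    String track(String targetId);
}

class NavalCommandProduct {                   // product-line member #1
    private final PlatformServices platform;

    NavalCommandProduct(PlatformServices platform) {
        this.platform = platform;
    }

    String status() {
        return "naval: " + platform.track("hull-7");
    }
}

class CoastGuardProduct {                     // product-line member #2: a new
    private final PlatformServices platform;  // "top" reusing the same base

    CoastGuardProduct(PlatformServices platform) {
        this.platform = platform;
    }

    String status() {
        return "coast-guard: " + platform.track("cutter-3");
    }
}
```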
OK. Your reply did a better job of clarifying what you meant by layering and leveling. I had understood a more restricted view from the original post.
Designating abstraction mechanisms like variability points and interfaces to specific, predefined layers is a key strategy for adaptability and extensibility.