Archive
The Promotion Strategy
On the left side of the following figure, we have a typical mega-project structure with a minimal Overhead-to-Production personnel ratio (O/P). If all goes well, the O/P ratio stays the same throughout the execution phase. However, if the originally planned schedule starts slipping, there’s a tendency for some orgs to unconsciously exacerbate the slippage. In order to rein in the slippage by exercising tighter control, the org “promotes” one or more senior personnel out of the realm of production and into the ranks of overhead to help things along.
By executing the “promotion strategy“, the O/P ratio increases – which is never a good thing for profit margins. In addition, the remaining “unpromoted” senior + junior analysts and developers are left to pick up the work left behind by the newly minted overhead landlords.
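To make the arithmetic concrete, here’s a minimal C++ sketch with made-up headcounts (nothing here comes from any real project) showing how the promotion strategy pushes O/P in the wrong direction:

#include <iostream>

int main() {
    // Hypothetical headcounts - illustrative only.
    double overhead = 10.0;    // managers, coordinators, status-report wranglers
    double production = 90.0;  // analysts, designers, developers, testers

    std::cout << "O/P before promotions: " << overhead / production << '\n';  // ~0.11

    // "Promote" 5 senior producers into the overhead ranks to tighten control.
    overhead += 5.0;
    production -= 5.0;

    std::cout << "O/P after promotions:  " << overhead / production << '\n';  // ~0.18
    return 0;
}

Same total headcount, roughly 60% more overhead per unit of production – and the work those five were doing still has to land on somebody’s desk.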
So, if you’re in the uncomfortable position of being pressured to increase execution efficiency in order to pull in a slipping schedule, you might want to think twice about employing the “promotion strategy” to get things back on track.
The brilliant Fred Brooks famously stated something akin to “Adding people to a late project only makes it later“. In this infamous post, dimwit BD00 states that “Promoting personnel from production to overhead on a late project only makes it later“.
The Annihilation Of Conceptual Integrity
When a large group or committee is tasked with designing a complex system from scratch, or evolving an existing one, I always think of these timeless quotes from Fred Brooks:
“A design flows from a chief designer, supported by a design team, not partitioned among one.” – Fred Brooks
“The entire system also must have conceptual integrity, and that requires a system architect to design it all, from the top down.” – Fred Brooks
“Who advocates … for the product itself—its conceptual integrity, its efficiency, its economy, its robustness? Often, no one.” – Fred Brooks
“A little retrospection shows that although many fine, useful software systems have been designed by committees and built as part of multi-part projects, those software systems that have excited passionate fans are those that are the products of one or a few designing minds, great designers.” – Fred Brooks
Note the correlation Fred draws between “conceptual integrity” and the individual (or small group of) designing minds behind a successful system.
C++ is a large, sprawling, complex programming language. With the next language specification update due to be ratified by the ISO C++ committee in 2017, Bjarne Stroustrup (the original, one-man creator and curator of C++) felt the need to publish a passionate plea admonishing the committee to “stop the insanity“: Thoughts About C++17.
By reminding the committee members of the essence of what uniquely distinguishes C++ from its peers, Bjarne is warning against the danger of annihilating the language’s conceptual integrity. The center must hold!
It seems to be a popular pastime to condemn C++ for being a filthy mess caused by rampant design-by-committee. This has been suggested repeatedly since before the committee was founded, but I feel it is now far worse. Adding a lot of unrelated features and library components will do much to add complexity to the language, making it scarier to both novices and “mainline programmers”. What I do not want to try to do:
• Turn C++ into a radically different language
• Turn parts of C++ into a much higher-level language by providing a segregated sub-language
• Have C++ compete with every other language by adding as many as possible of their features
• Incrementally modify C++ to support a whole new “paradigm”
• Hamper C++’s use for the most demanding systems programming tasks
• Increase the complexity of C++ use for the 99% for the benefit of the 1% (us and our best friends).
Like all the other C++ committee members, Bjarne is a really, really smart guy. For the decades that I’ve followed his efforts to evolve and improve the language, Bjarne has always expressed empathy for “the little people“: the 99% (of which I am a card-carrying member).
In a world in which the top 1% doesn’t seem to give a shit about the remaining 99%, it’s always refreshing to encounter a 1 percenter who cares deeply about the other 99 percenters. And THAT, my dear reader, is what has always endeared Mr. Bjarne Stroustrup to me.
I am often asked – often by influential and/or famous people – if I am planning a D&E2 (The Design And Evolution Of C++ version 2). I’m not, I’m too busy using and improving C++. However, should I ever find it convenient to semi-retire, D&E2 would be a great project. However, I could do that if and only if I could honestly write it without condemning the language or my friends. We need to ship something we can be proud of and that we can articulate.
W00t! I didn’t know I was influential and/or famous: And In The Beginning.
The Biggest Cheerleader
Herb Sutter is by far the biggest cheerleader for the C++ programming language – even more so than the language’s soft-spoken creator, Bjarne Stroustrup. Herb speaks with such enthusiasm and optimism that it’s infectious.
In his talk at the recently concluded GoingNative2013 C++ conference, Herb presented this slide to convey the structure of the ISO C++ Working Group 21 (WG21):
On the left, we have the language core and language evolution working groups. On the right, we have the standard library working group.
But wait! That was the organizational structure as of 18 months ago. As of now, we have this decomposition:
As you can see, there’s a lot of volunteer effort being applied to the evolution of the language – especially in the domain of libraries. In fact, most of the core language features on the left side exist to support the development of the upcoming libraries on the right.
In addition to the forthcoming minor 2014 release of C++, which adds a handful of new features and fixes some bugs/omissions from C++11, the next major release is slated for 2017. Of course, we won’t get all of the features and libraries enumerated on the slide, but the future looks bright for C++.
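For flavor, here’s a tiny sketch of the kind of incremental conveniences the 2014 release brings – generic lambdas, return type deduction for ordinary functions, std::make_unique (plugging a C++11 omission), digit separators, and binary literals. It’s polish on C++11 rather than a new language:

#include <iostream>
#include <memory>

// Return type deduction for ordinary functions (new in C++14).
auto square(int x) { return x * x; }

int main() {
    // Generic lambdas: 'auto' parameters (new in C++14).
    auto add = [](auto a, auto b) { return a + b; };

    // std::make_unique plugs a C++11 omission.
    auto p = std::make_unique<int>(add(20, 22));

    // Digit separators and binary literals (new in C++14).
    long long population = 7'000'000'000;
    int mask = 0b1010'1010;

    std::cout << *p << ' ' << square(5) << ' ' << population << ' ' << mask << '\n';
    return 0;
}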
The biggest challenge for Herb et al will be to ensure the conceptual integrity of the language as a whole remains intact in spite of the ambitious growth plan. The faster the growth, the higher the chance of the wheels falling off the bus.
“The entire system also must have conceptual integrity, and that requires a system architect to design it all, from the top down.” – Fred Brooks
“Who advocates … for the product itself—its conceptual integrity, its efficiency, its economy, its robustness? Often, no one.” – Fred Brooks
I’m not a fan of committees in general, but in this specific case I’m confident that Herb, Bjarne, and their fellow WG21 friends can pull it off. I think they did a great job on C++11 and I think they’ll perform just as admirably in hatching future C++ releases.
The Gall Of That Man!
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. – John Gall (1975, p.71)
This law is essentially an argument in favour of underspecification: it can be used to explain the success of systems like the World Wide Web and Blogosphere, which grew from simple to complex systems incrementally, and the failure of systems like CORBA, which began with complex specifications. – Wikipedia
We can add the Strategic Defense Initiative (Star Wars), the FBI’s Virtual Case File system (VCF), JTRS, FCS, and prolly a boatload of other high falutin’ defense projects to the list of wreckage triggered by violations of Gall’s law. Do you have any other majestic violations you’d like to share? Can you cite any counter-examples that attempt to refute the law?
One of the great tragedies of life is the murder of a beautiful theory by a gang of brutal facts – Benjamin Franklin
C++, which started out simply as “C With Classes“, is a successful complex “system“. Java, which started out as a simple and pure object-oriented system, has evolved into a successful complex system that now includes a mix of functional and generic programming features. Linux, which started out as a simple college operating system project, has evolved into a monstrously successful complex system. DDS, which started out as a convergence of two similar, field-tested, pub-sub messaging implementations from Thales Inc. and RTI Inc., has evolved into a successful complex system (in spite of being backed by the OMG). Do you have any other law abiding citizens you’d like to share?
Gall’s law sounds like a, or thee, platform for Fred Brooks‘ “plan to throw one away” admonition and Grady Booch‘s “evolution through a series of stable intermediate forms” advice.
Here are two questions to ponder: Is your org in the process of trying to define/develop a grand system design from scratch? Scanning your project portfolio, can you definitively know if you’re about to, or currently are, attempting a frontal assault on Gall’s galling law – and would it matter if you did know?
Messmatiques And Empathic Creators
Assume that you have a wicked problem that needs fixing: a bona fide Ackoffian “mess” or Warfieldian “problematique” – a “messmatique“. Also assume (perhaps unrealistically) that a solution that is optimal in some sense is available:
The graphic below shows two different social structures for developing the solution; individual-based and group-based.
If the messmatique is small enough and well bounded, the individual-based “structure” can produce the ideal solution faster because intra-cranial communication occurs at the speed of thought and there is no need to get group members aligned and on the same page.
Group efforts to solve a messmatique often end up producing a half-assed, design-by-committee solution (at best) or an amplified messmatique (at worst). D’oh!
Alas, the individual, genius-based problem-solving structure is not scalable. At a certain level of complexity and size, a singular, intra-cranially created solution can produce the same bogus result as an unaligned group structure.
So, how to resolve the dilemma when a messmatique exceeds the capacity of an individual to cope with the beast? How about this hybrid individual+group structure:
Note: The Brooksian BD/UA is not merely a “facilitator” (catalyst) or a hot shot “manager” (decider). He/she is both of those and, most importantly, an empathic creator.
Composing a problem resolution social structure is necessary but not sufficient for “success“. A process for converging onto the solution is also required. So, with a hybrid individual+group social structure in place, what would the “ideal” solution development process look like? Is there one?
Invisible, Thus Irrelevant
A system that has a sound architecture is one that has conceptual integrity, and as (Fred) Brooks firmly states, “conceptual integrity is the most important consideration in system design”. In some ways, the architecture of a system is largely irrelevant to its end users. However, having a clean internal structure is essential to constructing a system that is understandable, can be extended and reorganized, and is maintainable and testable.
The above paragraph was taken from Booch et al’s delightful “Object-Oriented Analysis And Design With Applications“. BD00’s version of the second sentence is:
In some ways, the architecture of a system is largely irrelevant to its end users, its developers, and all levels of management in the development organization.
If the architecture is invisible because of the lack of a lightweight, widely circulated, communicated, and understood set of living artifacts, then it can’t be relevant to any type of stakeholder – even a developer. As the saying goes: “out of sight, out of mind“.
Despite the long-term business importance of understandability, extendability, reorganizability, maintainability, and testability, many revenue-generating product architectures are indeed invisible – unlike short-term schedulability and budgetability, which are always highly visible.
A Bunch Of STIFs
In “Object-Oriented Analysis and Design with Applications“, Grady Booch argues that a successful software architecture is layered, object-oriented, and evolves over time as a series of STIFs – STable Intermediate Forms. The smart and well-liked BD00 agrees; and he adds that unsuccessful architectures devolve over time as a series of unacknowledged UBOMs (pronounced “You Bombs“).
UBOMs are subtle creatures that, without caring stewardship, can sneak up on you unnoticed. Your first release or two may start out as a successful STIF or as a small and unobtrusive UBOM. But once you’ve stepped into UBOM-land, it can grow into an unusable, resource-sucking abomination. Be careful, very careful….
“Who advocates for the product itself—its conceptual integrity, its efficiency, its economy, its robustness? Often, no one.” – Fred Brooks
“Dear Fred, when the primary thought held steadfast in everybody’s mind is “schedule is king” instead of “product is king“, of course no one will often advocate for the product.” – BD00
Mostly True
“Adding people to a late project makes it later” – Frederick Brooks
If an underway project is perceived as a schedule buster (and it usually is if managers are thinking of hurling more bodies into the inferno), then this famous Brooksian quote is true. However, if the project’s tasks are reasonably well defined, loosely coupled, and parallelized to the extent they can be, then adding people to a late project may actually help it finish earlier – without severely degrading quality.
Alas, the bigger the project, the less likely it is to be structured well enough so that adding people to it will speed up its completion. Not only is it harder to plan and structure larger projects, but the people who get anointed to run bigger projects are more likely to be non-technical managers who only know how to plan and execute in terms of the generic, cookie-cutter, earned-value sequence of: requirements->design->code->test->deliver (yawn). Knowledge and understanding of any aspect of the project at the lower, more concrete, project-specific level of detail that’s crucial for success is most likely to be absent. Bummer.
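To put a toy number on that intuition, here’s a back-of-the-envelope sketch – my made-up model, not Brooks’s. Assume each person contributes one unit of work per week but pays a coordination tax spread over the n(n-1)/2 pairwise communication paths, with the ‘coupling’ factor as a hypothetical knob: small for well-partitioned, loosely coupled work; large for a tangled, ill-structured project.

#include <iostream>

// Toy model of the effective weekly output of an n-person team.
// Each person produces 1.0 unit of work, minus a coordination tax spread over
// the n(n-1)/2 pairwise communication paths. 'coupling' is a made-up knob:
// near zero for loosely coupled, well-partitioned work; larger for a tangled,
// ill-structured project.
double effectiveOutput(int n, double coupling) {
    double paths = n * (n - 1) / 2.0;
    double output = n - coupling * paths;
    return output > 0.0 ? output : 0.0;  // a team can't produce negative work
}

int main() {
    const int teamSizes[] = {5, 10, 20, 40};
    for (int n : teamSizes) {
        std::cout << n << " people: loosely coupled -> " << effectiveOutput(n, 0.01)
                  << ", tightly coupled -> " << effectiveOutput(n, 0.10) << '\n';
    }
    return 0;
}

With loose coupling, every new hire still adds net output; with tight coupling, the coordination tax overtakes the added production somewhere past ten people and net output heads back toward zero – which is Brooks’s point in a nutshell.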
Benevolent Dictators And Unapologetic Aristocrats
The Editor in Chief of Dr. Dobb’s Journal, Andrew Binstock, laments the “committee-ization” of programming languages like C, C++, and Java in “In Praise of Benevolent Language Dictators“:
Where the vision (of the language) is maintained by a single individual, quality thrives. Where committees determine features, quality declines inexorably: Each new release saps vitality from the language even as it appears to remedy past faults or provide new, awaited capabilities.
I think Andrew’s premise applies not only to languages, but also to software designs, architectures, and even organizations of people. These constructs are all “systems” of dynamically interacting elements wired together in order to realize some purpose – not just bags of independent parts. As an example, Fred Brooks, in his classic book, “The Mythical Man-Month“, states:
To achieve conceptual integrity, a design must proceed from one mind or a small group of agreeing minds.
If a system is to have conceptual integrity, someone must control the concepts. That is an aristocracy that needs no apology.
The greater the number of people involved in a concerted effort, the lower the coherency within, and the lower the consistency across, the results. That is, unless a benevolent dictator or unapologetic aristocrat is involved AND he/she is allowed to do what he/she decides must be done to ensure that conceptual integrity is preserved for the long haul.
Of course, because of the relentless increase in entropy guaranteed by the second law of thermodynamics, all conceptually integrated “closed systems” eventually morph into a disordered and random mess of unrelated parts. It’s just a matter of when. But if an unapologetic aristocrat who is keeping the conceptual integrity of a system intact leaves, or is handcuffed by clueless dolts who have power over him/her, the system’s ultimate demise is greatly accelerated.
No Man’s Land
Having just read my umpteenth Watts Humphrey book on his PSP/TSP (Personal Software Process/Team Software Process) methodology, I find myself struggling, yet again, to reconcile his right-wing thinking with the left-wing “Agile” software methodologies that have sprouted up over the last 10 years as a backlash against waterfall-like methodologies. This diagram models the situational tension in my hollow head:
It’s not hard to deduce that prescriptive, plan-centric processes are favored by managers (at least those who understand the business of software) and demonstrative, code-centric processes are favored by developers. Duh!
Advocates of both the right and the left have documented ample evidence that their approach is successful, but neither side (unsurprisingly) willingly publicizes its failures much. When the subject of failure is surfaced, both sides always attribute messes to bad “implementations” – which of course is true. IMHO, given a crack team of developers, project managers, testers, and financial sponsors, ANY disciplined methodology can be successful – those on the left, those on the right, or those toiling in NO MAN’S LAND.
I’ve been the anointed software “lead” on two < 10-person software teams in the past. Both as a lead and as an induhvidual contributor, the approach I’ve always intuitively taken toward software development can be classified as falling into “no man’s land“. It’s basically an informal, but well-known, Brooksian, architecture-centric strategy that I’d characterize as slightly right-leaning:
As far as I know, there’s no funky, consensus-backed label like “semi-agile” or “lean planning” to capture the essence of architecture-centric development. There certainly aren’t any “certification” training courses or famous promoters of the approach. This book, which I discovered and read after I’d been designing/developing in this mundane way for years, sort of covers the process that I repeatedly use.
In my tortured mind (and you definitely don’t want to go there!), architecture-centricity simply means “centered on high-level blueprints“. Early on, before the horses are let out of the barn and massive, fragmented project effort starts churning under continuous pressure from management for “status“, a frantic iterative/sketching/bounding/synthesis activity takes place. With a visible “rev X” architecture in hand (one that enumerates the structural elements, their connectivity, and the macro behavior that the system must manifest) for guidance, people can then be assigned to the sparsely defined, but bounded, system elements. The key is that only one, two, or three people record the lay of the land. Subsequently, the element assignees produce their own reasonable “Rev 0” estimates and plans – prior to igniting the frenetic project activity that is sure to follow.
In a nutshell, what I just described is the front end of the architecture-centric approach as I practice it, either overtly or covertly. The subsequent construction activities that take place after a reasonably solid, lightweight “rev X” architecture (or equivalent design artifact for smaller scale projects) has been recorded and disseminated are just details. Of course, I’m just joking in that last sentence, but unless the macro front end is secured and repeatedly used as the “go-to bible” shortly after T==start, all is lost – regardless of the micro-detailed practices (TDD, automated unit tests, continuous integration, continuous delivery, yada yada yada) that will follow. But hey, the content of this post is just Bulldozer00’s uncredentialed and non-expert opinion, so don’t believe a word of it.