Archive
A Professional Failure
I’m a professional failure. Why? Because I’m pretty sure that I’ve never satisfied any unreasonable schedule that I was ever “given” to meet. Since almost all schedules are unreasonable, then, by definition, I’m a professional failure. Hell, it doesn’t even matter if I was the one who created the unreasonable schedule in the first place; I’ve still failed. Bummer.

Looking back, I think that I’ve figured out why I underperformed (<– that’s management-speak for “failed”). It’s simply that the problem-solving projects that I’ve worked on have been grossly underestimated. Why is that? Because they all required acquiring new knowledge in the problem area of pecuniary interest.
So, how can you know if a given schedule is unreasonable, and does it matter if you conclude that meeting the schedule is a lost cause? You most likely can’t, and no, it doesn’t matter. Assume that, based on personal experience and a deep “knowing” of what’s involved in a project, you actually can determine that the schedule is a laughable, but innocent, lie. There’s nothing you can do about it. If you speak up, at best, you’ll be ignored. At worst, you’ll receive multiple peek-a-boo visits from one or more STSJs (Status Takers and Schedule Jockeys) who don’t have to do any of the project work themselves.
How about you, have you been a perpetual failure like me? Of course not. It says here on your resume that you have been 100% successful on every project you’ve worked on, which implies that you’ve met every schedule. But wait, every other resume in my stack says the same thing. Damn! How am I gonna decide which of these perfect people gets the job?
My “Status” As Of 09-27-09
I recently finished a 2-month effort discovering, developing, and recording a state machine algorithm that produces a stream of integrated output “target” reports from a continuous stream of discrete, raw input message fragments. It’s not rocket science, but because of the complexity of the algorithm (the devil’s always in the details), a decision was made to emulate this proprietary algorithm in multiple, simulated external environmental scenarios. The purpose of the emulation-plus-simulation project is to work out the (inevitable) kinks in the algorithm design prior to integrating the logic into an existing product and foisting it on unsuspecting customers :^) .
The “bent” SysML diagram below shows the major “blocks” in the simulator design. Since there are no custom hardware components in the system, every SysML block except the scenario configuration file represents a software “class”.

Upon launch, the simulator:
- Reads in a simple, flat, ASCII scenario configuration file that specifies the attributes of targets operating in the simulated external environment. Each attribute is defined in terms of a <name=value> token pair.
- Generates a simulated stream of multiplexed input messages emitted by the target constellation.
- Demultiplexes and processes the input stream in accordance with the state machine algorithm specification to formulate output target reports (a rough sketch of this step appears right after this list).
- Records the algorithm output target report stream for post-simulation analysis via Commercial Off The Shelf (COTS) tools like Excel and MATLAB.
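Since the demultiplex-and-process step above is where the state machine algorithm earns its keep, here’s the promised bare-bones C++ sketch of the general idea. The real algorithm is proprietary, so every name below is made up, and the “integration is complete after N fragments” rule is a stand-in for the actual (much hairier) state logic:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical raw input message fragment (the real field layout is proprietary).
struct MsgFragment {
    std::uint32_t targetId;   // which simulated target emitted the fragment
    std::string   payload;    // fragment contents
};

// Integrated output "target" report assembled from multiple fragments.
struct TargetReport {
    std::uint32_t targetId;
    std::string   integratedData;
};

// Toy per-target state machine: idle until fragments start arriving, then
// integrating until "enough" fragments have been collected, then emit a report.
class TargetTracker {
public:
    explicit TargetTracker(std::uint32_t id) : state_(IDLE), targetId_(id) {}

    // Feed one fragment; returns true and fills 'report' when a report is ready.
    bool consume(const MsgFragment& frag, TargetReport& report) {
        state_ = INTEGRATING;
        buffer_.push_back(frag.payload);
        if (buffer_.size() < kFragmentsPerReport) {
            return false;                           // still integrating
        }
        report.targetId = targetId_;
        report.integratedData.clear();
        for (std::size_t i = 0; i < buffer_.size(); ++i) {
            report.integratedData += buffer_[i];    // stitch the fragments together
        }
        buffer_.clear();                            // reset for the next report
        state_ = IDLE;
        return true;
    }

private:
    enum State { IDLE, INTEGRATING };
    static const std::size_t kFragmentsPerReport = 4;   // made-up threshold
    State state_;
    std::uint32_t targetId_;
    std::vector<std::string> buffer_;
};
```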
I’m currently in the process of writing the C++ code for all of the components except the COTS tools, of course. On Friday, I finished writing, unit testing, and integration testing the “Simulation Initialization” functionality (use case?) of the simulator. Yahoo!
The diagram below zooms in on the front end of the simulator that I’ve finished (100%, of course) developing: the “Scenario File Reader” class, and the portion of the in-memory “Scenario Database Manager” class that stores the scenario configuration data in the two sub-databases.

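For anyone who wants more than a diagram, here’s roughly what that front end might look like in code. The class names come straight from the design, but the member names and the split of the two sub-databases (per-target attributes vs. scenario-wide parameters) are simplified, notional guesses, not the real product code:

```cpp
#include <cstddef>
#include <fstream>
#include <map>
#include <string>

// Holds the parsed <name=value> pairs in two hypothetical sub-databases:
// per-target attributes and scenario-wide parameters.
class ScenarioDatabaseManager {
public:
    void storeTargetAttribute(const std::string& name, const std::string& value) {
        targetAttributes_[name] = value;
    }
    void storeScenarioParameter(const std::string& name, const std::string& value) {
        scenarioParameters_[name] = value;
    }
private:
    std::map<std::string, std::string> targetAttributes_;    // sub-database 1
    std::map<std::string, std::string> scenarioParameters_;  // sub-database 2
};

// Reads the flat ASCII scenario configuration file, one <name=value> pair
// per line, and hands each pair to the database manager.
class ScenarioFileReader {
public:
    explicit ScenarioFileReader(ScenarioDatabaseManager& db) : db_(db) {}

    bool load(const std::string& path) {
        std::ifstream in(path.c_str());
        if (!in) return false;
        std::string line;
        while (std::getline(in, line)) {
            const std::size_t eq = line.find('=');
            if (eq == std::string::npos) continue;    // skip malformed lines
            const std::string name  = line.substr(0, eq);
            const std::string value = line.substr(eq + 1);
            // Crude routing rule, purely for illustration: names prefixed
            // with "target." go to the target sub-database.
            if (name.compare(0, 7, "target.") == 0)
                db_.storeTargetAttribute(name, value);
            else
                db_.storeScenarioParameter(name, value);
        }
        return true;
    }
private:
    ScenarioDatabaseManager& db_;
};
```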
The next step in my evil plan (moo ha ha!) is to code up, test, and integrate the much-more-interesting “Data Stream Generator” class into the simulator without breaking any of the crappy code that has already been written. 🙂
If someone (anyone?) actually reads this boring blog and is interested in following my progress until the project gets finished or canceled, then give me a shoutout. I might post another status update when I get the “Data Stream Generator” class coded, tested, and integrated.
What’s your current status?
90 Percent Done
In order for those in charge (and those who are in charge of those who are in charge ad infinitum) to track and control a project, someone has to estimate when the project will be 100% complete. For any software development project of non-trivial complexity, it doesn’t matter who conjures up the estimate, or what drugs they were on when they verbalized it: the odds are huge that the project will be underestimated. That’s because in most corpo command-and-control hierarchies, there is always implicit pressure to underestimate the effort needed to “get it done”. After all, time is money and everyone wants to minimize the cost to “get it done”. Even though everybody smells the silent-but-deadly stank in the air and knows that’s how the game is played, everybody pretends otherwise.
The graph below shows a made-up example (like Jon Lovitz, I’m a pathological liar who makes everything up, so don’t believe a word I say) of a project timeline. On day zero, the obviously infallible project manager (if you browse linkedin.com, no manager has ever missed a due date) plots a nice and tidy straight (blue) line to the 100% done date. During the course of executing the project, regular status is taken and plotted as the “actual” progress (red) line so that everybody who is important in the company can know what’s going on.

For the example project modeled by the graph, the actual progress starts deviating from the planned progress on day one. Of course, since the vast majority of project (and product and program) managers are klueless and don’t have the expertise to fix the deficit, the gap widens over time. On really dorked up projects, the red line starts above the blue line and the project is ahead of schedule – whoopee!
At around the 90-95% scheduled-to-be-done time, something strange (well, not really strange) happens. Each successive status report gets stuck at 90% done. Those in charge (and those who are in charge of those who are in charge ad infinitum) say “WTF?” and then some sort of idiotic and ineffective action, like applying more pressure or requiring daily status meetings or throwing more DICs (Dweebs In the Cellar) on the project, is taken. In rare cases, the project (or product or program) manager is replaced. It’s rare because project (and product and program) managers and those who appoint them are infallible, remember?
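Just for grins, here’s a throwaway C++ toy that generates the two curves: a straight planned line and a “reported” line that runs optimistically ahead of reality and then parks itself at 90% until the work is genuinely finished. All of the rates below are invented; it’s a caricature, not a model:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const int    plannedDays = 100;   // invented schedule length
    const double trueRate    = 0.7;   // the team really finishes 0.7% per day

    for (int day = 0; day <= 150; day += 10) {
        const double planned = std::min(100.0, day * (100.0 / plannedDays));
        const double actual  = std::min(100.0, day * trueRate);
        // Status reports run ~25% optimistic, then get pinned at "90% done"
        // until the work is actually complete.
        const double reported = (actual >= 100.0)
                                    ? 100.0
                                    : std::min(actual * 1.25, 90.0);
        std::printf("day %3d: planned %5.1f%%  reported %5.1f%%\n",
                    day, planned, reported);
    }
    return 0;
}
```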
So, is “continuous replanning”, where new scheduled-to-be-done dates are estimated as the project progresses, the answer? It can certainly help by reducing the chance of a major “WTF” discontinuity at the 90% done point. However, it’s not a cure-all. As long as the vast majority of project (and product and program) managers maintain their attitude of infallibility and eschew maintaining some minimum level of technical competence in order to sniff out the real problems, help the team, and make a difference, it’ll remain the same-old same-old forever. Actually, it will get worse because as the inherent complexity of the projects that a company undertakes skyrockets, this lack of leadership excellence will trigger larger performance shortfalls. Bummer.
Architectural, Mechanistic, And Detailed
Bruce Powel Douglass is one of my favorite embedded systems development mentors. One of his ideas is to categorize the activity of design into three levels of increasingly detailed abstraction:
- Architectural (5 views)
- Mechanistic
- Detailed
The SysML figure below tries to depict the conceptual differences between these levels. (Even if you don’t know SysML, can you at least viscerally understand the essence of what the drawings are attempting to communicate?)

Since the size, algorithmic density, and safety-critical nature of the software-intensive systems that I’ve helped to develop require what the agile community mocks as BDUF (Big Design Up Front), I’ve always communicated my “BDUF” designs in terms of the first and third abstractions. Thus, the mechanistic design category is sort of new to me. I like this category because it narrows the gulf of understanding between the architectural and the detailed design levels of abstraction. According to Mr. Douglass, “mechanistic design” is the act of optimizing a system at the level of an individual collaboration (a set of UML classes or SysML blocks working closely together to realize a single use case). From now on, I’m gonna follow his three-tier taxonomy in communicating future designs, but only when it’s warranted, of course (I’m not a religious zealot for or against any method).
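To make “mechanistic” a little more concrete, here’s a hypothetical (and intentionally tiny) collaboration: three classes that exist only to realize one use case, “acquire, smooth, and record a sensor sample”. The names are mine, not Mr. Douglass’s; the point is that mechanistic design lives at this in-between level, above the individual class internals and below the system-wide architecture:

```cpp
#include <cstddef>
#include <cstdio>
#include <deque>
#include <numeric>

// One collaboration: three small classes working together to realize a single
// use case, "acquire, smooth, and record a sensor sample".
class SensorDriver {                 // wraps the (here, faked) hardware read
public:
    double read() { return 42.0; }   // stand-in for a real device access
};

class MovingAverageFilter {          // smooths the raw samples
public:
    explicit MovingAverageFilter(std::size_t window) : window_(window) {}
    double apply(double sample) {
        history_.push_back(sample);
        if (history_.size() > window_) history_.pop_front();
        const double sum = std::accumulate(history_.begin(), history_.end(), 0.0);
        return sum / history_.size();
    }
private:
    std::size_t window_;
    std::deque<double> history_;
};

class SampleRecorder {               // persists the smoothed result
public:
    void record(double value) { std::printf("sample: %f\n", value); }
};

// The "mechanistic" view is the optimization of how these three interact,
// not the system-wide architecture and not the line-by-line detail.
class AcquireSampleUseCase {
public:
    void execute() { recorder_.record(filter_.apply(driver_.read())); }
private:
    SensorDriver        driver_;
    MovingAverageFilter filter_{8};
    SampleRecorder      recorder_;
};
```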
BTW, if you don’t do BDUF, you might get CUDO (Crappy and Unmaintainable Design Out back). Notice that I said “might” and not “will”.
Wide But Shallow, Narrow But Deep
I just “finished” (yeah, that’s right –> 100% done (LOL!)) exploring, discovering, defining, and specifying the functional changes required to add a new feature to one of our pre-existing, software-intensive products. I’m currently deep in the trenches exploring and discovering how to specify a new set of changes required to add a second related feature to the same product. Unlike glamorous “Greenfield” projects where one can start with a blank sheet of paper, I’m constrained and shackled by having to wrestle with a large and poorly documented legacy system. Sound familiar?
The extreme contrast between the demands of the two project types is illuminating. The first one required a “wide but shallow” (WBS) analysis and synthesis effort while the current one requires a “narrow but deep” (NBD) effort. Both types of projects require long periods of sustained immersion in the problem domain, so most (all?) managers won’t understand this post. They’re too busy running around in ADHD mode acting important, goin’ to endless agenda-less meetings, and puttin’ out fires (that they ignited in the first place via their own neglect, ignorance, and lack of listening skills). Gawd, I’m such a self-righteous and bad person obsessed with trashing the guild of management 🙂 .
The figure below highlights the difference between WBS and NBD efforts for a “hypothetical” product enhancement project.

In WBS projects, the main challenge is hunting down all the well-hidden spots that need to be changed within the behemoth. Missing any one of these change-spots can (and usually does) eat up lots of time and money down the road when the thing doesn’t work and the product team has to find out why. In NBD projects, the main obstacle to overcome is the acquisition of the specialized application domain knowledge and expertise required to perform localized surgery on the beast. Since the “search” for the change/insertion spots of an NBD effort is bounded and localized, an NBD effort is much lower risk and less frustrating than a WBS effort. This is doubly true for an undocumented system, where studying massive quantities of source code is the only way to discover the change points scattered throughout it. It’s also more difficult to guesstimate “time to completion” for a WBS project than it is for an NBD project. On the other hand, much more learning takes place in a WBS project because of the breadth of exposure to large swaths of the code base.
Assuming that you’re given a choice (I know that this assumption is a sh*tty one), which type of project would you choose to work on for your next assignment: a WBS project, or an NBD project? No cheatin’ allowed by choosing “neither” 😉 .
Functional Allocation VIII
Typically, the first type of allocation work performed on a large and complex product is the shall-to-function (STF) allocation task. The figure below shows the inputs and outputs of the STF allocation process. Note that it is not enough to simply identify, enumerate, and define the product functions in isolation. An integral sub-activity of the process is to conjure up and define the internal and external functional interfaces. Since the dynamic interactions between the entities in an operational system (human or inanimate) give the system its power, I assert that interface definition is the most important part of any allocation process.

The figure below illustrates two alternate STF allocation outputs produced by different people. On the left, a bland list of unconnected product functions has been identified, but the functional structure has not been defined. On the right, the abstract functional structure of the product, i.e. which functions are required to interact with which other functions, is explicitly defined.

If the detailed design of each product function will require specialized domain expertise, then releasing a raw function list like the one on the left to the downstream process can result in all kinds of counterproductive behavior between the specialists whose functions need to communicate with each other in order to contribute to the product’s operation. Each function “owner” will try to dictate the interface details to the “others” based on the local optimization of his/her own functional piece(s) of the product. Disrespect between team members and/or groups may ensue and bad blood may be spilled. In addition, even when the time-consuming and contentious interface decision process is completed, the finished product will most likely suffer from a lack of holistic “conceptual integrity” because of the multitude of disparate interface specifications.
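For the record, here’s one hedged, toy-sized way to capture the difference between the two artifacts in something machine-readable: the “left side” is just a list of names, while the “right side” also records which function must talk to which, and about what. The content below is invented; the shape is the point:

```cpp
#include <string>
#include <vector>

// A product function identified during shall-to-function allocation.
struct ProductFunction {
    std::string name;
};

// A functional interface: two functions that must interact, plus a short
// description of what flows between them.
struct FunctionalInterface {
    std::string from;
    std::string to;
    std::string whatFlows;
};

// The "left side" artifact: a bland, unconnected list of functions.
using FunctionList = std::vector<ProductFunction>;

// The "right side" artifact: the same functions plus the explicit
// functional structure that binds them together.
struct FunctionalStructure {
    FunctionList                     functions;
    std::vector<FunctionalInterface> interfaces;
};

// Example (invented content): the structured artifact tells the downstream
// specialists not just what to build, but who must agree with whom.
FunctionalStructure exampleStructure() {
    FunctionalStructure fs;
    fs.functions  = { {"Track Targets"}, {"Generate Reports"} };
    fs.interfaces = { {"Track Targets", "Generate Reports", "integrated track data"} };
    return fs;
}
```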
It is the lead system engineer’s or architect’s duty to define the functions and the interfaces that bind them together at the right level of detail to preserve the conceptual integrity of the product. The danger is that if the system design owner goes too far, then the interfaces may end up being over-constrained and stifling to the function designers. Given a choice between leaving the interface design up to the team or doing it yourself, which approach would you choose?
Functional Allocation VII
Here we are at blarticle number 7 on the unglamorous and boring topic of “Functional Allocation”. Once again, for a reference point of discussion, I present the hypothetical allocation tree below (your company does have a guidepost like this, doesn’t it?). In summary, product “shalls” are allocated to features, which are allocated to functions, which are allocated to subsystems, which are allocated to software and hardware modules. Depending on the size and complexity of the product to be built, one or more levels of abstraction can be skipped because the value added may not be worth the effort expended. For a simple software-only system that will run on Commercial-Off-The-Shelf (COTS) hardware, the only “allocation” work required is a shall-to-software-module mapping.

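As a purely illustrative aside (your reference tree, if you have one, will differ), the levels and the “skip a level when it adds no value” idea can be captured in a few lines of toy C++. The simple software-only product below keeps exactly one hop, shall-to-software-module; all of the names and requirements are invented:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// The hypothetical allocation levels, from most to least abstract.
enum class Level {
    Shall, Feature, Function, Subsystem, SoftwareModule, HardwareModule
};

// One allocation hop: items at one level mapped to items at the next.
using AllocationMap = std::multimap<std::string, std::string>;

// A full allocation chain is just an ordered list of hops. A simple,
// software-only, COTS-hosted product might keep only one hop:
// shall -> software module.
struct AllocationChain {
    std::vector<std::pair<Level, Level>> hops;   // which levels are mapped
    std::vector<AllocationMap>           maps;   // the mappings themselves
};

AllocationChain simpleSoftwareOnlyProduct() {
    AllocationChain chain;
    chain.hops.push_back({Level::Shall, Level::SoftwareModule});
    AllocationMap m;
    m.insert({"The product shall log every command", "CommandLogger"});
    m.insert({"The product shall reject malformed commands", "CommandValidator"});
    chain.maps.push_back(m);
    return chain;
}
```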
During the performance of any intellectually challenging human endeavor, mistakes will be made and learning will take place in real-time as the task is performed. That’s how fallible humans work, period. Thus, for the output of a task like “allocation” to be of high quality, an iterative and low-latency feedback loop approach should be executed. When one qualified person is involved, and there is only one “allocation” phase to be performed (e.g. shall-to-module), there isn’t a problem. All the mistake-making, learning, and looping activity takes place within a single mind at the speed of thought. For (hopefully) long periods of time, there are no distractions or external roadblocks to interrupt the performance of the task.
For a big and complex multi-technology product where multiple levels of “allocation” need to be performed and multiple people and/or specialized groups need to be involved, all kinds of socio-technical obstacles and roadblocks to downstream success will naturally emerge. The figure below shows an effective product development process where iteration and loop-based learning is unobstructed. Communication flows freely between the development groups and organizations to correct mistakes and converge on an effective solution. Everything turns out hunky-dory and the customer gets a 5-star product that he/she/they want and that meets all expectations.

The figure below shows a dysfunctional product development process. For one reason or another, communication feedback from the developer org’s “allocation” groups is cut off from the customer organization. Since questions of understanding don’t get answered and mistakes/errors/ambiguities in the customer requirements go uncorrected, the end product delivered back to the customer underperforms and nobody ends up very happy. Bummer.

The figure below illustrates the worst possible case for everybody involved – a real mess. Not only do the customer and developer orgs not communicate; the “allocation” groups within the developer org don’t communicate effectively with each other either, or are prohibited from doing so. The product that emerges from such a sequential, linear-think process is a real stinker, oink oink. The money’s gone, the time’s gone, and the damn thang may not even work, let alone perform marginally.
Obviously, this situation is a massive failure of corpo leadership and sadly, I assert that it is the norm across the land. It is the norm because almost all big customer and developer orgs are structured as hierarchies of rank and stature with “standard” processes in place that require all kinds and numbers of unqualified people to “be in the loop” and approve (disapprove?) of every little step forward – lest their egos be hurt. Can a systemic, pervasive, baked-in problem like this be solved? If so, who, if anybody, has the ability to solve it? Can a single person overcome the massive forces of nature that keep a hierarchical ecosystem like this viable?

“The single biggest problem in communication is the illusion that it has taken place.” – George Bernard Shaw
Functional Allocation VI
Every big-system, multi-level “allocation” process (like the one shown below) assumes that the process is initialized and kicked off with a complete, consistent, and unambiguous set of customer-supplied “shalls”. These “shalls” need to be “shallocated” by a person or persons to an associated aggregate set of future product functions and/or features that will solve, or at least ameliorate, the customer’s problem. In my experience, a documented set of “shalls” is always provided with a contract, but the organization, consistency, completeness, and understandability of these customer-level requirements often leave much to be desired.

The figure below represents a hypothetical requirements mess. The mess might have been caused by “specification by committee”, where a bunch of people just haphazardly tossed “shalls” into the bucket according to different personal agendas and disparate perceptions of the problem to be solved.

Given a fragmented and incoherent “mess”, what should be done next? Should one proceed directly to the Shall-To-Function (STF) process step? One alternative strategy, the performance of an intermediate step called Classify And Group (CAG), is shown below. CAG is also known by the vaguer phrase “requirements scrubbing”. As shown below, the intent is to remove as much ambiguity and inconsistency as possible by: 1) intelligently grouping the “shalls” into classification categories; and 2) restructuring the result into a more usable artifact for the next downstream STF allocation step in the process.

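A crude, hypothetical sketch of the mechanical half of CAG (the grouping; the “intelligent” part stays in the analyst’s head): tag each shall with a category and bucket them, so the downstream STF step at least starts from an organized artifact. All of the categories and requirements below are invented:

```cpp
#include <map>
#include <string>
#include <vector>

struct Shall {
    std::string text;      // the raw "shall" statement from the customer
    std::string category;  // assigned by the analyst during CAG, e.g. "Safety"
};

// Classify And Group: bucket the shalls by category so the downstream
// shall-to-function step starts from a structured artifact instead of a mess.
std::map<std::string, std::vector<std::string>>
classifyAndGroup(const std::vector<Shall>& shalls) {
    std::map<std::string, std::vector<std::string>> grouped;
    for (const Shall& s : shalls) {
        grouped[s.category].push_back(s.text);
    }
    return grouped;
}

// Usage (invented content):
//   classifyAndGroup({
//       {"The system shall encrypt all stored data", "Security"},
//       {"The system shall boot in under 10 seconds", "Performance"},
//       {"The system shall lock out after 3 failed logins", "Security"}});
```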
The figure below shows the position of the (usually “hidden” and unaccounted for) CAG process within the allocation tree. Notice the connection between the CAG and the customer. That interface exists so that the customer can clarify meaning and intent to the person or persons performing the CAG work. If the people performing the CAG work aren’t allowed access, or can’t obtain access, to the customer group that produced the initial set of “shalls”, then all may be lost right out of the gate. Misunderstandings and ambiguities will be propagated downstream and end up embedded in the fabric of the product. Bummer city.

Once the CAG effort is completed (after several iterations involving the customer(s) of course), the first allocation activity, Shall-To-Function (STF), can then be effectively performed. The figure below shows the initial state of two different approaches prior to commencement of the STF activity. In the top portion of the figure, CAG was performed prior to starting the STF. In the bottom portion, CAG was not performed. Which approach has a better chance of downstream success? Does your company’s formal product development process explicitly call out and describe a CAG step? Should it?


Functional Allocation V
Holy cow! We’re up to the fifth boring blarticle that delves into the mysterious nature of “Functional Allocation”. Let’s start here with the hypothetical 6-level allocation reference tree that was presented earlier.

Assume that our company is smart enough to define and standardize a reference tree like this one in its formal process documentation. Now, let’s assume that our company has been contracted to develop a Large And Complex (LAC) software-intensive system. My fuzzy and un-rigorous definition of large and complex is:
“The product has (or will have after it’s built) lots of parts, many different kinds of parts, lots of internal and external interfaces, and lots of different types of interfaces.”
The figure below shows a partial result of step one in the multi-level process: the Shall-To-Feature (STF) allocation process. Given a set of 5 customer-supplied abstract “shalls”, someone has made the design decisions that led to the identification and definition of 3 less-abstract features that the product must provide in order to satisfy the customer shalls. We’ve started the movement from the abstract to the less abstract.
Just imagine what the model below would look like in the case where we had 100s of shalls to wrestle with. How could anyone possibly conclude up front that the set of shalls has been completely covered by the feature set? At this stage of the game, I assert that you can’t. You have to make a commitment and move on. In all likelihood, the initial STF allocation result won’t work. Thus, if your process doesn’t explicitly include the concept of “iterating on mistakes made and on new knowledge gained” as the product development process lurches forward, you’ll get what you deserve.

Note that in the simple example above, there is no clean and proper one-to-one STF mapping and there are 2 cross-cutting “shalls”. Also, note that there is no logical rule or mathematical formula grounded in physics that enables a shallocator (robot or human) to mechanically compute an “optimum” feature set and perform the corresponding STF allocation. It’s abstract stuff, and different qualified people will come up with different designs. Management, take heed of that fact.
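One small thing that can be mechanized, even if the allocation itself can’t, is the bookkeeping: given an STF map, list the shalls that nobody has covered yet. A hedged sketch (the data layout is my own invention, and passing this check proves nothing about whether the allocation is any good):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Shall-to-feature allocation: one shall may map to several features
// (cross-cutting), and several shalls may map to the same feature.
using StfAllocation = std::multimap<std::string, std::string>;

// Report the shalls that no feature covers yet. This doesn't prove the
// allocation is *right*; it only flags the obviously unfinished bits.
std::vector<std::string>
uncoveredShalls(const std::vector<std::string>& shalls, const StfAllocation& stf) {
    std::set<std::string> covered;
    for (const auto& entry : stf) {
        covered.insert(entry.first);
    }
    std::vector<std::string> missing;
    for (const std::string& shall : shalls) {
        if (covered.count(shall) == 0) {
            missing.push_back(shall);
        }
    }
    return missing;
}
```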
So, given the initial finished STF allocation output (recorded and made accessible and visible for others to evaluate, of course), how was it arrived at? Could the effort be codified in a step-by-step Standard Operating Procedure (SOP) so that it can be classified as “repeatable and predictable”? I say no, regardless of what bureaucrats and process managers who’ve never done it themselves think. What about you, what do you think?
Functional Allocation IV
Part IV is a continuation of our discussion regarding the often misunderstood and ill-defined process of “functional allocation”. In this part, we’ll explore the nature of the “shallocation” task, some daunting organizational obstacles to its successful completion, and the dependency of the quality of the output on the specific persons assigned (or allocated?) to the task.
If you can find one, the process description of functional allocation starts out assuming that a human allocator is given a linear text list, which may be quite large, of “shalls” supplied by an external customer or internal customer advocate. Of course, these “shalls” are also assumed to be unambiguous, consistent, non-contradictory, complete, and intelligently organized (yeah, right).

My personal experience has been that the “shalls” are usually strewn all over the place and the artifact that holds them is severely lacking in what Fred Brooks called “conceptual integrity”. Sometimes, the “shalls” seem to randomly jump back and forth from high level abstractions down to physical properties – a mixed mess hacked together by a group of individuals with different agendas. In addition, some customers (especially government bureaucracies) often impose some overconstraining “shalls” on the structure of the development team and the processes that the team is required to use during the development of the solution to their problem (control freaks). Even worse, in order to project a false image of “we know what we’re doing” infallibility, and because they don’t have to do the hard value-creation work themselves, helpful managers of developer orgs often discourage, or downright prevent, clarification questions from being asked of the customer by development team members. All communications must be filtered through the “proper” chain of command, regardless of how long it takes or whether technical questions get filtered and distorted to incomprehensibility by non-technical wonks with a fancy title. Bummer.
Because we want to move forward with this discussion, assume the unassume-able: the customer “shall” list is perfectly complete and understood by the developer org’s system engineers. What’s next? The figure below shows the “initialization” state. The perfect list of “shalls” must be shallocated to a set of non-existent product functions. Someone, somehow, has got to conceive of and define the set of functions and the logical inter-function connectivity that will satisfy the perfectly clear and complete “shalls” list.

Piece of cake, right? The task of shallocation is so easy and so well described by many others (yeah, right) that I won’t even waste any e-space giving the step-by-step recipe for it. The figure below shows the logical functional structure of the product after the trivial shallocation process is completed by a robot or an expensive automated software tool.

Realistically, in today’s world the shallocation process can’t be automated away, and it’s highly person-specific. As the example in the figure below shows, given the same set of “shalls”, two different people will, “after a miracle occurs”, likely conjure up different sets of functions in an attempt to meet the customer’s product requirements. Not only can the number of functions and the internal nature of each function differ; the allocation of shalls-to-functions may differ as well. Expecting a pristine, one-to-one shall-to-function mapping is unrealistically utopian.
What the example below doesn’t show is the person-specific creation of the inter-function logical connectivity (see the right portion of the previous figure) that is required for the product system to “work”. After all, a set of unconnected functions, much like a heap of car parts, doesn’t do anything but sit there looking sophisticated. It’s the interactions between the functions during operation that give a product its power to maybe, just maybe, solve a customer’s problem.

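One last hedged sketch for the series: given the inter-function connectivity, flag the “heap of car parts” functions that don’t interact with anything. Their existence usually means either the connectivity work isn’t done yet or the function shouldn’t exist. The data shapes are invented for illustration:

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

// A logical connection between two product functions.
using Connection = std::pair<std::string, std::string>;

// Return the functions that participate in no connection at all: a set of
// unconnected functions, like a heap of car parts, doesn't do anything.
std::vector<std::string>
orphanFunctions(const std::vector<std::string>& functions,
                const std::vector<Connection>& connections) {
    std::set<std::string> connected;
    for (const Connection& c : connections) {
        connected.insert(c.first);
        connected.insert(c.second);
    }
    std::vector<std::string> orphans;
    for (const std::string& f : functions) {
        if (connected.count(f) == 0) {
            orphans.push_back(f);
        }
    }
    return orphans;
}
```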
The purpose of part IV in this seemingly endless series of blarticles on “functional allocation” was basically to point out the person-specific nature of the first step in a multi-level nested allocation process. It also hinted at some obstacles that conspire to thwart the effective performance of the task of shallocation.
