Gift Wrap
In dysfunctional corpocracies, it’s not only acceptable but expected that STSJs (Status Takers and Schedule Jockeys) will routinely drop turd-bombs on DICs (Dweebs In the Cellar) when schedules, no matter how far off the mark they are, are not met.

However, it’s socially unacceptable for a DIC to hurl a turd-coil skyward toward an STSJ. Nevertheless, if a DIC has been trained to “communicate effectively” and is clever and skillful enough, a gift-wrapped turd-ball may be accepted “temporarily” by an STSJ – until he/she opens the box. Thus, the best course of action for DICs “privileged” enough to work in a one-way command-and-control hierarchy is to flush turd-bombs down the toilet when they are discovered. Whoosh!

Metrics Dilemma
“Not everything that counts can be counted. Not everything that can be counted counts.” – Albert Einstein.
When I started my current programming project, I decided to track and publicly record (via the internal company Wiki) the program’s source code growth. The list below shows the historical rise in SLOC (Source Lines Of Code) up until 10/08/09.
10/08/09: Files= 33, Classes=12, funcs=96, Lines Code=1890
10/07/09: Files= 31, Classes=12, funcs=89, Lines Code=1774
10/06/09: Files= 33, Classes=14, funcs=86, Lines Code=1683
10/01/09: Files= 33, Classes=14, funcs=83, Lines Code=1627
09/30/09: Files= 31, Classes=14, funcs=74, Lines Code=1240
09/29/09: Files= 28, Classes=13, funcs=67, Lines Code=1112
09/28/09: Files= 28, Classes=13, funcs=66, Lines Code=1004
09/25/09: Files= 28, Classes=14, funcs=57, Lines Code=847
09/24/09: Files= 28, Classes=14, funcs=53, Lines Code=780
09/23/09: Files= 28, Classes=14, funcs=50, Lines Code=728
09/22/09: Files= 28, Classes=14, funcs=48, Lines Code=652
09/21/09: Files= 26, Classes=10, funcs=35, Lines Code=536
09/15/09: Files= 26, Classes=10, funcs=29, Lines Code=398
The fact that I know that I’m tracking and publicizing SLOC growth is having a strange and negative effect on the way that I’m writing the code. As I iteratively add code, test it, and reflect on its low-level physical design, I’m finding that I’m resisting the desire to remove code that I discover (after the fact) is not needed. I’m also tending to suppress the desire to replace unnecessarily bloated code blocks with more efficient segments composed of fewer lines of code.
Hmm, so what’s happening here? I think that my subconscious mind is telling me two things:
- A drop in SLOC size from one day to the next is a bad thing – it could be perceived by corpo STSJs (Status Takers and Schedule Jockeys) as negative progress.
- If I spend time refactoring the existing code to enhance future maintainability and reduce size, it’ll put me behind schedule because that time could be better spent adding new code.
The moral of this story is that the “best practice” of tracking metrics, like everything in life, has two sides.
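For the curious, daily counts like the ones in the list above can be collected with a short tool run over the source tree. Here’s a minimal sketch in C++ of the core counting routine; note that the counting rules are an assumption on my part (every SLOC counter defines them a little differently), and this one ignores block comments entirely.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>

// One of many possible SLOC definitions: count lines that are
// neither blank nor pure "//" comment lines. Block comments
// (/* ... */) are deliberately not handled in this sketch.
int count_sloc(std::istream& in) {
    int sloc = 0;
    std::string line;
    while (std::getline(in, line)) {
        // Skip leading whitespace to classify the line.
        std::size_t first = line.find_first_not_of(" \t\r");
        if (first == std::string::npos) continue;        // blank line
        if (line.compare(first, 2, "//") == 0) continue; // comment-only line
        ++sloc;
    }
    return sloc;
}
```

Running something like this over each file and summing the results would reproduce the “Lines Code” column; the file, class, and function counts need a cleverer (parsing) tool.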
Endless Loop
Are you a player in an endless loop like this:

If so, are you the pontiff? A cardinal? A bishop? A “mass”? If you ARE a player in this endless loop, do you aspire to play a different role, and why? Do you want to be the selector and promulgator of the next “j” proclamation? Do you fancy the bling that the cardinals, bishops, and pontiff adorn themselves in? Do you want to change the content of one or more of the loop steps, especially number 9?

No BS, From BS
You certainly know what the first occurrence of “BS” in the title of this blarticle means, but the second occurrence stands for “Bjarne Stroustrup”. BS, the second one of course, is the original creator of the C++ programming language and one of my “mentors from afar” (it’s a good idea to latch on to mentors from afar because unless you’re extremely lucky, there’s a paucity of mentors “a-near”).
I just finished reading “The Design And Evolution of C++” by BS. If you do a lot of C++ programming, then this book is a must-read. BS gives a deeply personal account of the development of the C++ language from the very first time he realized that he needed a new programming tool in 1979, to the start of the formal standardization process in 1994. BS recounts the BS (the first one, of course) that he slogged through, and the thinking processes that he used, while deciding upon which features to include in C++ and which ones to exclude. The technical details and chronology of development of C++ are interesting, but the book is also filled with insightful and sage advice. Here’s a sampling of passages that rang my bell:
“Language design is not just design from first principles, but an art that requires experience, experiments, and sound engineering trade-offs.”
“Many C++ design decisions have their roots in my dislike for forcing people to do things in some particular way. In history, some of the worst disasters have been caused by idealists trying to force people into ‘doing what is good for them.’”
“Had it not been for the insights of members of Bell Labs and the insulation from political nonsense, the design of C++ would have been compromised by fashions and special interest groups, and its implementation bogged down in a bureaucratic quagmire.”
“You don’t get a useful language by accepting every feature that makes life better for someone.”
“Theory itself is never sufficient justification for adding or removing a feature.”
“Standardization before genuine experience has been gained is abhorrent.”
“I find it more painful to listen to complaints from users than to listen to complaints from language lawyers.”
“The C++ Programming Language (book) was written with the fierce determination not to preach any particular technique.”
“No programming language can be understood by considering principles and generalizations only; concrete examples are essential. However, looking at the details without an overall picture to fit them into is a way of getting seriously lost.”
For an ivory tower trained Ph.D., BS is pretty down to earth and empathic toward his customers/users, no? Hopefully, you can now understand why the title of this blarticle is what it is.

Motivility
In one of the Vital Smarts crew’s books (I forget which one, and I’m too lazy-ass to look it up) they mention motivation and ability as two important metrics that leaders can leverage to help people improve performance. To make things simple, but hopefully not simplistic, I’ve constructed a “Leader’s Action Table” (LAT) below using a binary “Present” or “Absent” value for each of the motivility attributes.

Since, by definition, a leader is pro-active and he/she cares about people and performance (both), he/she will take the time and effort to get to know his/her people well. The leader can then use the simple, two-attribute, four-action LAT to help his/her people grow and develop.
With bozo managers, the story is much different. Even if they stopped thinking about themselves and their careers long enough to consider the individual needs of their people in terms of the two motivility attributes, those bozeltines would get it back-asswards and hose everything up – of course. Instead of a LAT, they’d wield the BAT shown below. BATter up!

Do you think the LAT could be useful? What would your LAT look like? Are there any important attributes that you think are missing from the table? Should one or both of the motivility attributes be multi-valued instead of binary? Meh?
“Half of the harm that is done in this world is due to people who want to feel important. They do not mean to do harm… They are absorbed in the endless struggle to think well of themselves.” – T. S. Eliot
Don’t Be Late!
The software-intensive products that I get paid to specify, design, build, and test involve the real-time processing of continuous streams of raw input data samples. The sample streams are “scrubbed and crunched” in order to generate higher-level, human-interpretable, value-added output information and event streams. As the external stream flows into the product, some samples are discarded because of noise and others are manipulated with a combination of standard and proprietary mathematical algorithms. Important events are detected by monitoring various characteristics of the intermediate and final output streams. All this processing must take place fast enough so that the input stream rate doesn’t overwhelm the rate at which outputs can be produced by the product; the product must operate in “real-time”.
The human users on the output side of our products need to be able to evaluate the output information quickly in order to make important and timely command and control decisions that affect the physical well-being of hundreds of people. Thus, latency is one of the most important performance metrics used by our customers to evaluate the acceptability of our products. Forget about bells and whistles, exotic features, and entertaining graphical interfaces, we’re talking serious stuff here. Accuracy and timeliness of output are king.
Latency (or, equivalently, response time) is the time it takes for an input sample or group of related samples to traverse the transformational processing path from the point of entry to the point of egress through the software-dominated product “box” or set of interconnected boxes. Low latency is good and high latency is bad. If product latency exceeds a time threshold that makes the output effectively unusable to our customers, the product is unacceptable. In some applications, failure of the absolute worst-case latency to stay below the threshold can be the deal breaker (hard real time); in other applications, the average latency must not exceed the threshold xx percent of the time, where xx is often greater than 95% (soft real time).
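As a back-of-the-envelope illustration of the soft real-time criterion, here’s a sketch that checks whether a batch of measured latencies stays at or under a threshold the required fraction of the time. The function name, the threshold, and the 95% figure in the usage below are illustrative, not from any real product spec.

```cpp
#include <cassert>
#include <vector>

// Soft real-time check: true if at least 'required_fraction' of the
// measured latencies fall at or below 'threshold_ms'. For a hard
// real-time requirement, you would instead demand that the single
// worst-case (maximum) latency stay below the threshold.
bool meets_soft_deadline(const std::vector<double>& latencies_ms,
                         double threshold_ms,
                         double required_fraction) {
    if (latencies_ms.empty()) return true; // no evidence of a miss
    std::size_t on_time = 0;
    for (double latency : latencies_ms) {
        if (latency <= threshold_ms) ++on_time;
    }
    return static_cast<double>(on_time) / latencies_ms.size() >= required_fraction;
}
```

With, say, 19 of 20 samples under a 100 ms threshold, `meets_soft_deadline(samples, 100.0, 0.95)` passes; one more miss and it fails. That cliff is exactly why latency is so unforgiving to bolt on after the design is done.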

Latency is one of those funky, hard-to-measure-until-the-product-is-done, “non-functional” requirements. If you don’t respect its power to make or break your wonderful product from the start of design throughout the entire development effort, you’ll get what you deserve after all the time and money has been spent – lots of rework, stress, and angst. So, if you work on real-time systems, don’t be late!
Complexity Explosion
I’m in the process of writing a C++ program that will synthesize and inject simulated inputs into an algorithm that we need to test for viability before including it in one of our products. I’ve got over 1000 lines of code written so far, and about another 1000 to go before I can actually use it to run tests on the algorithm. Currently, the program requires 16 multi-valued control inputs to be specified by the user prior to running the simulation. The inputs define the characteristics of the simulated input stream that will be synthesized.

Even though most of the control parameters are multi-valued, assume that they are all binary-valued and can be set to either “ON” or “OFF”. It then follows that there are 2**16 = 65536 possible starting program states. When (not if) I need to add another control parameter to increase functionality, the number of states will soar to 131,072 – and that’s only if the new parameter is not multi-valued. D’oh! OMG! Holy shee-ite!
Is it possible to set up the program, run the program, and evaluate its generated outputs against its inputs for each of these initial input scenarios? Never say never, but it’s not economically viable. Even if the setup and run activities can be “automated”, manually calculating the expected outputs and comparing the actual outputs against the expected outputs for each starting state is impractical. I’d need to write another program to test the program, then another program to test that test program, and so on. This recursion would go on indefinitely, and errors can be made at any step of the way. Bummer.

No matter what the “experts” who don’t have to do the work themselves have to say, in programming situations like this, you can’t “automate” away the need for human thought and decision making. Based on knowledge, experience, and more importantly, fallible intuition, I have to judiciously select a handful of the 65536 starting states to run. I then have to manually calculate the outputs for each of these scenarios, which is impractical and error-prone because the state machine algorithm that processes the inputs is relatively dense and complicated itself. What I’m planning to do is visually and qualitatively scan the recorded outputs of each program run for a select few of the 65536 states that I “feel” are important. I’ll intuitively analyze the results for anomalies in relation to the 16 chosen control input values.
Got a better way that’s “practical”? I’m all ears, except for a big nose and a bald head.
“Nothing is impossible for the man who doesn’t have to do it himself.” – A. H. Weiler
Problems And Challenges
It’s easy to view a situation that requires action as a “challenge” instead of a “problem” if you don’t personally have to effect the change yourself. That’s why managers talk about challenges and workers talk about problems. Since hierarchical command and control corpocracies are inherently stratified caste systems, managers and workers don’t have a chance of seeing the same thing – a prallenge.

Four Or Two?
Assume that the figure below represents the software architecture within one of your flagship products. Also assume that each of the 6 software components is composed of N non-trivial SLOC (Source Lines Of Code), so that the total size of the system is 6N SLOC. For the third assumption, pretend that a new, long-term, adjacent market opens up for a “channel 1” subset of your product.

To address this new market’s need and increase your revenue without a ton of investment, you can choose to instantiate the new product from your flagship. As shown in the figure below, if you do that, you’ve reduced the complexity of your new product addition by 1/3 (6N to 4N SLOC) and hence, decreased the ongoing maintenance cost by at least that much (since maintainability is a non-linear function of software size).

Nevertheless, your new product offering has two unneeded software components in its design: the “Sample Distributor” and the “Multi-Channel Integrator”. Thus, if as the diagram below shows, you decide to cut out the remaining fat (the most reliable part in a system is the one that’s not there – because it’s not needed), you’ll deflate your long term software maintenance costs even further. Your new product portfolio addition would be 1/3 the original size (6N to 2N SLOC) of your flagship product.
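To make the “non-linear” maintenance claim concrete, suppose – purely for illustration, the exponent is a COCOMO-style assumption and not a measured value – that maintenance effort grows like size^1.2. Then the 2N system costs well under a third of what the 6N system costs to maintain:

```cpp
#include <cassert>
#include <cmath>

// Illustrative superlinear maintenance-cost model: effort ~ size^1.2.
// The 1.2 exponent is an assumed, COCOMO-style scaling factor used
// only to show why halving SLOC more than halves maintenance cost.
double maintenance_effort(double sloc) {
    return std::pow(sloc, 1.2);
}
```

Under this toy model, shrinking a product from 6N to 2N SLOC cuts maintenance effort to roughly 27% of the original, not the 33% a linear model would predict – and the gap widens as the exponent grows.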

If you had the authority, which approach would you allocate your resources to? Would you leave it to chance? Is the answer a no brainer, or not? What factors would prevent you from choosing the simpler two component solution over the four component solution? What architecture would your new customers prefer? What would your competitors do?
Best Of The Best
The variety of companies, markets, customers, industries, products, and services in the world is so wide and diverse that it can be daunting to develop objectively measurable criteria for “best in class” that cut across all of the variability.

Being a simpleton, my pseudo-measurable criteria for a “best in class” company is:
- Everybody (except for the inevitable handful of malcontents (like me?) found in all organizations) who works in the company sincerely feels good about themselves, their co-workers, the products they build, their customers, and the company leadership.
That’s it. That’s my sole criterion (I told you I was a simpleton). Of course, the classical financial measures like year-over-year revenue growth, profitability, yada, yada, yada, matter too, but in my uncredentialed and unscholarly mind, those metrics are secondary. They’re secondary because good numbers are unsustainable unless the touchy-feely criterion is continuously satisfied.
The dilemma with any kind of “feel good” criteria is that there aren’t many good ways of measuring them. Nevertheless, one of my favorite companies, zappos.com, has conjured up a great way of doing it. Every year, CEO Tony Hsieh sends an e-mail out to all of his employees and solicits their thoughts on the Zappos culture. All the responses are then integrated and published, unedited, in a hard copy “Zappos Culture Book”.
The Zappos culture book is available free of charge to anyone who emails Tony (tony@zappos.com). Earlier this year, I e-mailed Tony and asked for a copy of the book. Lo and behold, I received the 400+ page tome, free of charge, four days later. I pored through the hundreds of employee, executive, and partner testimonials regarding Zappos’s actual performance against their espoused cultural values. I found no negative entries in the entire book. There were two, just two, lukewarm assessments of the company’s cultural performance. Of course, skeptics will say that the book entries were censored, and maybe they were, but I doubt it.
How would your company fare if it compiled a yearly culture book similar to Zappos’s? Would your company even entertain the idea? Would anyone feel comfortable proposing the idea? Is the concept of a culture book only applicable to consumer products companies like Zappos.com, or could its value be industry-independent?
Note: Zappos.com was recently bought out by Amazon.com. It should be interesting to see if the yearly Zappos culture book gets squashed by Jeff Bezos et al.
