Archive

Posts Tagged ‘software development’

Hopping On The Anti-Fragile Bandwagon

February 13, 2014 3 comments

Since Martin Fowler works there, I thought ThoughtWorks Inc. must be great. However, after watching two of his fellow ThoughtWorkers give a talk titled “From Agility To Anti-Fragility”, I’m having second thoughts. The video was a relatively lame attempt to jam-fit Nassim Taleb’s authentic ideas on anti-fragility into the software development process. As expected, near the end of the talk the presenters introduced their “new” process for making your borg anti-fragile: “Continuous Delivery/Discovery/Design”. Lookie here, it even has a superscript in its title:

CD³

Having read Mr. Taleb’s four fascinating books, I found the one-hour-and-twenty-six-minute talk to be essentially a synopsis of his latest book, “Anti-Fragile”. That was the good part. The ThoughtWorkers’ attempts to concoct techniques that supposedly add anti-fragility to the software development process introduced nothing new. They simply interlaced a few crummy slides listing well-known agile practices (small teams, no specialists, short increments, co-located teams, etc.) with the good slides explaining optionality, black/grey swans, convexity vs. concavity, hormesis, and levels of randomness.

smug consultant

Layering, Balancing, And The Number Seven

January 27, 2014 Leave a comment

Having recently watched a newer incarnation of Barbara Liskov’s terrific Turing award acceptance speech on InfoQ.com, “The Power Of Abstraction”, I started doodling on my Visio canvas to see where it would take me. Somehow, I wanted to explore how the use of abstraction confers power on its wielders.

The figure below attempts to represent 3 different software designs that can result from the analysis of a given set of requirements (how the requirements came to be “given” in the first place is a whole ‘nother issue).

On the left, we have a seven-class solution candidate (C1…C7) organized as three layers of abstraction. On the right, we have a three-class, flat solution (FC1, FC2, FC3) that implements the same functionality (e.g. FC1 encapsulates the functionality of C1 + C4 + C7). For dramatic contrast, we have a fugly, single-class monolith in the middle, with all the solution functionality entombed within the MC1 class sarcophagus.

Flat Vs Tiered
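
To make the contrast concrete, here’s a minimal, hypothetical C++ sketch of the two extremes. The responsibilities I’ve given C1, C4, C7, and FC1 below are invented purely for illustration; the figure doesn’t prescribe them.

```cpp
// Layered candidate: each class lives in one abstraction layer and
// only talks to the layer directly below it.
class C7 {                    // layer 3: low-level detail
public:
    double readSensor() const { return 42.0; }   // stand-in behavior
};

class C4 {                    // layer 2: mid-level policy
public:
    explicit C4(C7& dev) : dev_(dev) {}
    double filteredValue() const { return 0.5 * dev_.readSensor(); }
private:
    C7& dev_;
};

class C1 {                    // layer 1: the abstraction clients see
public:
    explicit C1(C4& src) : src_(src) {}
    double publish() const { return src_.filteredValue(); }
private:
    C4& src_;
};

// Flat candidate: FC1 collapses the C1 + C4 + C7 responsibilities into
// a single class with no intermediate abstractions to read or maintain.
class FC1 {
public:
    double publish() const { return 0.5 * 42.0; }  // read, filter, publish in one place
};
```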

So, what advantage, if any, does the three-tier, abstract design give stakeholders over the two flat, down-to-earth designs? Depending on the requirements specifics, it may offer up no advantage and might actually be the worst candidate in terms of code-ability, understandability, and maintainability. There are more “parts” and more inter-part interfaces. It may be overkill to transform the requirements into three layers of abstraction before (or during?) coding.

However, as a system to be coded gets larger and more complex, the intelligent use of abstract vertical layering and horizontal balancing can speed up system development and decrease maintenance costs via increased readability and understandability from multiple viewing angles. For large systems, conceptual “chunking”, both vertically in the form of layering and horizontally in the form of balancing, is a winning strategy; especially when coupled with Miller’s magic number 7 (no more than 7 +/- 2 abstract elements within a given layer and no more than 7 +/- 2 abstract layers in the stack). Relatively speaking, the smaller, bounded parts can be doled out to team members more easily and integration will be less painful.

Note that doing some just-enough “pre-planning” in terms of layering/balancing the system’s structure/behavior seems to fly in the face of TDD – where you sprinkle a bunch of user stories from the backlog onto a group of programmers and have them start writing tests so that the design can miraculously emerge. But, as the saying goes: “whatever floats your boat”.

Sprinkle

Warts And Barnacles

December 31, 2013 4 comments

When I started this blog four years ago, I had to decide whether to publish as an anonymous coward or to use my real name. I struggled with the decision for a bit because I knew I was going to write frequently, real frequently, about dysfunctional management and institutional behaviors that I’ve both experienced and (even more so) read about over the years. In addition, since I’m a high energy, passionate animal who doesn’t hold much back and at times finds it hard to compromise, I knew that much of my content was going to be highly caustic and offensive.

caustic

Out of fear of repercussions, I decided to start writing incognito… until a dear friend brought up the perplexing issue again. After rethinking the situation, I resolved to let it all hang out. I gingerly hoisted my name up on my “About me” page.

Never say never, but I didn’t (and still don’t) care about climbing any corpo ladder or presenting the squeaky clean image that all mainstream “leadership” books tout as necessary to “get ahead”. I have some hairy warts and barnacles growing on my brain and, hell, I choose to expose them.

BD00 Brain

So, if I don’t want to get ahead by movin’ on up, then WTF does BD00 want? I want to keep ruining drill bits while I blast away at the impenetrable bedrock that entombs the holy grail of effective software development. I like going deep, deep, deep down into the unexplored corners of programming (in C++, of course), design, architecture, requirements, and the squishy realm of team-based software development processes. These closely-coupled topics excite me because there seems to be no bottom, no final “truths”, no end to life-long learning in any of them. It’s what I was meant to do.

Driller

What were you meant to do?


Big Design, But Not All Upfront

December 8, 2013 Leave a comment

When not ranting and raving on this blawg about “great injustices” (LOL) that I perceive are keeping the world from becoming a better place, I design, write, and test real-time radar system software for a living. I use the UML before, during, and after coding to capture, expose, and reason about my software designs. The UML artifacts I concoct serve as a high level coding road map for me; and a communication tool for subject matter experts (in my case, radar system engineers) who don’t know how to (or care to) read C++ code but are keenly interested in how I map their domain-specific requirements/designs into an implementable software design.

I’m not a UML language lawyer and I never intend to be one. Luckily, I’m not forced to use a formal UML-centric tool to generate/evolve my “bent” UML designs (see what I mean by “bent” UML here: Bend It Like Fowler). I simply use MSFT Visio to freely splat symbols and connections on an e-canvas in any way I see fit. Thus, I’m unencumbered by a nanny tool telling me I’m syntactically/semantically “wrong!” and rudely interrupting my thought flow every five minutes.

The second graphic below illustrates an example of one of my typical class diagrams. It models a small, logically cohesive cluster of cooperating classes that represent the “transmit timeline” functionality embedded within a larger “scheduler” component. The scheduler component itself is embedded within yet another, larger scale component composed of a complex amalgam of cooperating hardware and software components: the radar itself.

Hostile Environment

When fully developed and tested, the radar will be fielded within a hostile environment where it will (hopefully) perform its noble mission of detecting and tracking aircraft in the midst of random noise, unwanted clutter reflections, cleverly uncooperative “enemy” pilots, and atmospheric attenuation/distortion. But I digress, so let me get back to the original intent of this post, which I think has something to do with how and why I use the UML.

The radar transmit timeline is where other, necessarily closely coupled, scheduler sub-components add/insert commands that tell the radar hardware what to do and when to do it; sometime in the future relative to “now”. As the radar rotates and fires its sophisticated, radio frequency pulse trains out into the ether looking for targets, the scheduler is always “thinking” a few steps ahead of where the antenna beam is currently pointing. The scheduler relentlessly fills the TxTimeline in real time with beam-specific commands. It issues those commands to the hardware early enough for the hardware to be able to queue, set up, and execute the minute transmit details when the antenna arrives at the desired command point. Geeze! I’m digressing yet again off the UML path, so lemme try once more to get back to what I originally wanted to ramble about.

TxTimeline UML
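
For readers who’d rather see code than UML, here’s a deliberately generic, hypothetical C++ sketch of the kind of interface a transmit timeline like this might expose. The names, types, and lookahead mechanics below are invented for illustration only; they are not lifted from the actual design.

```cpp
#include <cstdint>
#include <map>

// Hypothetical stand-ins for the real, domain-specific types.
struct HwCommand { /* beam-specific transmit parameters */ };
using TimeTag = std::uint64_t;   // execution time, e.g. microseconds from "now"

// A generic transmit timeline sketch: commands are keyed by the time the
// hardware must execute them, so the scheduler can always pull the
// earliest pending command as it "thinks" ahead of the antenna beam.
class TxTimeline {
public:
    void insert(TimeTag when, const HwCommand& cmd) {
        pending_.emplace(when, cmd);             // stays sorted by execution time
    }

    // Issue every command whose execution time falls inside the hardware's
    // setup lookahead window, then drop it from the timeline.
    template <typename Issuer>
    void issueDue(TimeTag now, TimeTag lookahead, Issuer issue) {
        const auto last = pending_.upper_bound(now + lookahead);
        for (auto it = pending_.begin(); it != last; ++it) {
            issue(it->first, it->second);
        }
        pending_.erase(pending_.begin(), last);
    }

private:
    std::multimap<TimeTag, HwCommand> pending_; // time-ordered pending commands
};
```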

Being an unapologetic UML bender, and not a fan of analysis-paralysis, I never attempt to meticulously show every class attribute, operation, or association on a design diagram. I weave in non-UML symbology as I see fit and I show only those elements I deem important for creating a shared understanding between myself and other interested parties. After all, some low level attributes/operations/classes/associations will “go away” as my learning unfolds and others will “emerge” during coding anyway, so why waste the time?

Notice the “revision number” in the lower right hand corner of the above class diagram. It hints that I continuously keep the diagram in sync with the code as I write it. In fact, I keep the applicable diagram(s) open right next to my code editor as I hack away. As a PAYGO practitioner, I bounce back and forth between code & UML artifacts whenever I want to.

The UML sequence diagram below depicts a visualization of the participatory role of the TxTimeline object in a larger system context composed of other peer objects within the scheduler. For fear of unethically disclosing intellectual property, I’m not gonna walk through a textual explanation of the operational behavior of the scheduler component as “a whole”. The purpose of presenting the sequence diagram is simply to show you a real case example that “one diagram is not enough” for me to capture the design of any software component containing a substantial amount of “essential complexity”. As a matter of fact, at this current moment in time, I have generated a set of 7+ leveled and balanced class/sequence/activity diagrams to steer my coding effort. I always start coding/testing with class skeletons and I iteratively add muscles/tendons/ligaments/organs to the Frankensteinian beast over time.

Scheduler UML SD
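
To give the “class skeletons first” habit a concrete (and heavily sanitized) face, here is roughly what a first compilable cut might look like. Every name below is hypothetical; the real peer objects stay behind the intellectual-property curtain.

```cpp
// First-cut skeletons: just enough structure to compile, link, and hang
// early unit tests on. The muscles, tendons, and organs get grafted on
// iteratively as the design and the coding unfold together.
class BeamPlanner {              // hypothetical scheduler peer object
public:
    void planNextBeam() { /* TODO: emerges during coding */ }
};

class Scheduler {                // hypothetical owner of the transmit timeline
public:
    void tick() {
        planner_.planNextBeam(); // sequencing mirrors the sequence diagram
        // TODO: fill the transmit timeline, issue due commands, etc.
    }

private:
    BeamPlanner planner_;
};
```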

In this post, I opened up my trench coat and showed you my… attempted to share with you an intimate glimpse into the way I personally design & develop software. In my process, the design is not done “all upfront”, but a purely subjective mix of mostly high and low level details is indeed created upfront. I think of it as “Big Design, But Not All Upfront”.

Despite what some code-centric, design-agnostic, software development processes advocate, in my mind, it’s not just about the code. The code is simply the lowest level, most concrete, model of the solution. The practices of design generation/capture and code slinging/testing in my world are intimately and inextricably coupled. I’m not smart enough to go directly to code from a user story, a one-liner work backlog entry, a whiteboard doodle, or a set of casual, undocumented, face-to-face conversations. In my domain, real-time surveillance radar systems, expressing and capturing a fair amount of formal detail is (rightly) required up front. So, screw you to any and all NoUML, no-documentation jihadists who happen to stumble upon this post. 🙂

The Next Big Four!

November 28, 2013 1 comment

All Forked Up

October 23, 2013 2 comments

I dunno who said it first (it’s usually attributed to Max Planck), but paraphrasing whoever did:

Science progresses as a succession of funerals.

Even though more accurate and realistic models that characterize the behavior of mass and energy are continuously being discovered, the only way the older physics models die out is when their adherents kick the bucket.

The same dictum holds true for software development methodologies. In the beginning, there was the Traditional (a.k.a. waterfall) methodology and its formally codified variations (RUP, MIL-STD-498, CMMI-DEV, your org’s process, etc). Next came the Agile fork as a revolutionary backlash against the inhumanity inherent to the traditional way of doing things.

Forked Up

The most recent fork in the methodology march is the cerebral SEMAT (Software Engineering Method And Theory) movement. SEMAT can be interpreted (perhaps wrongly) as a counter-revolution against the success of Agile by scorned, closet traditionalists looking to regain power from the agilistas.

Semat Over Agile

On the other hand, perhaps the Agile and SEMAT camps will form an alliance and put the final nail in the coffin of the old traditional way of doing things before its adherents kick the bucket.

Agile plus SEMAT

SEMAT co-creator Ivar Jacobson seems to think that hitching SEMAT to the Agile gravy train holds promise for better and faster software development techniques.

Agile-SEMAT

Who knows what the future holds? Is another, or should I say, “the next”, fork in the offing?

The Drooping Progress Syndrome

September 21, 2013 Leave a comment

When a new product development project kicks off, nobody knows squat and there’s a lot of fumbling going on before real progress starts to accrue. As the hardware and software environment is stitched into place and initial requirements/designs get fleshed out, productivity slowly but surely rises. At some point, productivity (“velocity” in agile-ese) hits a maximum and then flattens into a zero-slope, team-specific cadence for the duration. Thus, one could be led to believe that a generic team productivity/progress curve would look something like this:

steady increase

In “The Year Without Pants”, Scott Berkun destroys this illusion by articulating an astute, experiential observation:

This means that at the end of any project, you’re left with a pile of things no one wants to do and are the hardest to do (or, worse, no one is quite sure how to do them). It should never be a surprise that progress seems to slow as the finish line approaches, even if everyone is working just as hard as they were before. – Scott Berkun

Scott may have forgotten one class of thing that BD00 has experienced over his long and un-illustrious career – things that need to get done but aren’t even in the work backlog when deployment time rolls in. You know, those tasks that suddenly “pop up” out of nowhere (BD00 inappropriately calls them “WTF!” tasks).

pop up task

Thus, a more realistic productivity curve most likely looks like this:

decreasing productivity

If you’re continuously flummoxed by delayed deployments, then you may have just discovered why.

productivity cycle

A Concrete Agile Practices List

September 19, 2013 2 comments

Finally, I found out what someone actually thinks “agile practices” are. In “What are the Most Important and Adoption-Ready Agile Practices?”, Shane Hastie presents his list:

Agile Practices

Kudos to Shane for putting his list out there.

Ya gotta love all the “explicit definition of done” entries (“Aren’t you freakin’ done yet?”). And WTF is “Up front architecture” doing on the list? Isn’t that a no-no in agile-land? Shouldn’t it be “emergent architecture”? And no kanban board entry? What about burn down charts?

Alas, I can’t bulldozify Shane’s list too much. After all, I haven’t exposed my own agile practices list for scrutiny. If I get the itch, maybe I’ll do so. What’s on your list?

Agile List

Pragmatically Feasible?

September 17, 2013 6 comments

From the MISRA web site:

The Motor Industry Software Reliability Association (MISRA) is a collaboration between vehicle manufacturers, component suppliers and engineering consultancies which seeks to promote best practice in developing safety-related electronic systems in road vehicles and other embedded systems.

While browsing through the MISRA C++:2008 standard, I came across this not-unexpected requirement:

No Heap

I don’t know enough about the standard to know if it’s true, but I interpret this requirement as banning not only the use of “new/delete”, but also the use of the dynamically managed STL container abstractions (vectors, lists, sets, maps, queues) and, hence, the many standard library algorithms that operate on them. I wonder what the MISRA Java specification, if there is one, says about dynamic memory allocation.

If my interpretation of 18-4-1 is correct, then the requirement can severely jack up the cost, schedule, and technical risks of any software component that is required to be compliant with the specification. For non-trivial applications requiring more than low-level, statically allocated arrays…

Complexity is pushed out of the language and into the application code. The semantics of language features are far better specified than the typical application code. – Bjarne Stroustrup & Kevin Carroll
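
That “pushed into the application code” complexity looks something like the following minimal, hypothetical sketch. It’s an illustration of the trade-off as I read it, not MISRA-vetted guidance, and the fixed-capacity container below is a toy, not a production-grade replacement.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Typical C++: capacity grows on demand, but each growth allocates from
// the heap, which is exactly what a strict reading of 18-4-1 forbids.
std::vector<int> heapBacked;   // calls operator new as elements are added

// One common workaround: a fixed-capacity "vector" whose storage is a
// statically sized array. Worst-case capacity must be chosen at design
// time, and overflow becomes an application-level error to handle.
template <typename T, std::size_t MaxN>
class StaticVector {
public:
    bool push_back(const T& value) {
        if (count_ == MaxN) {
            return false;                     // no growth possible: report failure
        }
        storage_[count_++] = value;
        return true;
    }
    std::size_t size() const { return count_; }
    T* begin() { return storage_.data(); }
    T* end()   { return storage_.data() + count_; }

private:
    std::array<T, MaxN> storage_{};           // lives wherever the object lives; no heap
    std::size_t count_ = 0;
};

StaticVector<int, 64> noHeapBacked;           // worst-case size decided up front
```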

Because of the safety-critical nature of embedded automotive software, I can understand the reasoning behind the no-dynamic-memory-allocation requirement. But is it pragmatically feasible in today’s world, especially since software components keep getting larger and commensurately more complex over time? In other words, is it one of those requirements that doesn’t scale? Is it too Draconian?

For those C++ programmers who work in the automotive industry and happen to stumble upon this blog (which will probably be none), what has been your experience with this MISRA requirement and some of the other similarly unsettling requirements in the specification? Are “waivers” often asked for and granted? Is it an unspoken truth that people/companies pay public lip service to the requirement but privately don’t comply?

misrable

The Same Old Wine

September 3, 2013 2 comments

Goldman McKinsey

Investment is to Goldman Sachs as management is to McKinsey & Co. These two prestigious institutions can do no wrong in the eyes of the rich and powerful. Elite investors and executives bow down and pay homage to Goldman McKinsey like indoctrinated North Koreans do to Kimbo Jongo Numero Uno.

As the following snippet from Art Kleiner’s “Who Really Matters” illustrates, McKinsey & Co, being chock full of MBAs from the most expensive and exclusionary business schools in the USA, is all about top-down management control systems:

…says McKinsey partner Richard Foster, author of Creative Destruction. “If you ask companies how many control systems they have, they don’t know. If you ask them how much they’re spending on control, they say, ‘We don’t add it up like that.’ If you ask them to rank their control systems from most to least cost-effective, then cut out the twenty percent at the bottom, they can’t.” (And this from a partner at McKinsey, the firm whose advice has launched a thousand measurement and control systems.)

A dear reader recently clued BD00 into this papal release from a trio of McKinsey principals: “Enhancing the efficiency and effectiveness of application development”. BD00 doesn’t know fer sure (when does he ever?), but he’ll speculate (when doesn’t he?) that none of the authors has ever been within binocular distance of a software development project.

Kim Jong Un Approval

Yet, they laughingly introduce a…

…viable means of measuring the output of application-development projects.

Their highly recommended application development control system is based on, drum roll please… “Use Cases” (UC) and “Use Case Points” (UCP).

Knowing that their elite, money-hoarding, efficiency-obsessed readers most probably have no freakin’ idea what a UC is, they painstakingly spend two paragraphs explaining the twenty-year-old concept (easily looked up on the web), concluding that…

…both business leaders and application developers find UCs easy to understand.

Well, yeah. Done “right”, UCs can be a boon to development – just like doing “agile” right. But how often have you actually seen these formal atrocities done right? Oh, I forgot. All that’s needed is “training” in how to write high quality UCs. Bingo, problem solved – except that training costs money.

Next up, the authors introduce their crown jewel output measurement metric, the “UCP”:

UCP calculations represent a count of the number of transactions performed by an application and the number of actors that interact with the application in question. UCPs, because they are simple to calculate, can also be easily rolled out across an organization.
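
Taken literally, that “calculation” amounts to a trivial tally. Here’s a hedged C++ sketch of my reading of it; the weights are invented placeholders, since the article doesn’t publish an actual formula:

```cpp
#include <cstddef>

// A deliberately naive reading of the quoted definition: UCP is just a
// weighted count of use-case transactions plus a weighted count of actors.
struct UseCaseModel {
    std::size_t transactionCount;  // total transactions across all use cases
    std::size_t actorCount;        // distinct actors interacting with the app
};

double useCasePoints(const UseCaseModel& model,
                     double transactionWeight = 1.0,   // placeholder weight
                     double actorWeight = 1.0) {       // placeholder weight
    return transactionWeight * static_cast<double>(model.transactionCount) +
           actorWeight * static_cast<double>(model.actorCount);
}
```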

So, how is an easily rolled out UCP substantively different from that other well-known metric, the “Function Point” (FP)?

Another approach that’s often talked about for measuring output is Function Points. I have a little more sympathy for them, but am still unconvinced. This hasn’t been helped by stories I’ve heard of that talk about a single system getting counts that varied by a factor of three from different function point counters using the same system. – Martin Fowler

I guess that UCPs are superior to FPs because it is implied that given X human UCP calculators, they’ll all tally the same result. Uh, OK.

Not content to simply define and describe how to employ the winning UC + UCP metrics pair to increase productivity, the McKinseyians go on to provide one source of confirmation that their earth-shattering, dual-metric control system works. Via an impressive-looking chart with 12 project data points from that one single source (perhaps a good ole boy McKinsey alum?), they confidently proclaim:

Analysis therefore supports the conclusion that UCPs have predictive power.

Ooh, the words “analysis” and “predictive” and “power” all in one sentence. Simply brilliant; spoken directly in the language that their elite target audience drools over.

The article gets even more laughable (cry-able?) as the authors go on to describe the linear, step-by-step “transformation” process required to put the winning UC + UCP system in place and how to overcome the resistance “from below” that will inevitably arise from such a large-scale change effort. Easy as pie, no problemo. Just follow their instructions and call them for a $$$$$$ consultation when obstacles emerge.

So, can someone tell BD00 how the McKinsey UC + UCP dynamic duo is any different than the “shall” + Function Point duo? Does it sound like the same old wine in a less old bottle to you too?

Same Wine