Archive
There “Shall” Be A Niche
Someone (famous?) once said that a good strategy for making sure you get something done is to publicize what you’re going to do for all to see:
As you can see, my newfound friend, multi-book author Jon M. Quigley (check out his books at Value Transformation LLC), proposed, and I accepted, a collaborative effort to write a book on the topic of product requirements. D’oh!
Why the “D’oh!”? As you might guess, there are a bazillion “requirements” books already out there in the wild. Here is just a sampling of some of those that I have access to via my safaribooksonline.com account:

Of course, I haven’t read them all, but I have read both of Mr. Wiegers’s books and the Hatley/Hruschka book – all very well done. I’ve also read two great requirements books (not on the above list) by my favorite software author of all time, Mr. Gerry Weinberg: “Exploring Requirements” and “Are Your Lights On?”.
Jon and I would love to differentiate our book from the current crop – some of which are timeless classics. It’s not that we expect to eclipse the excellence of Mr. Weinberg or Mr. Wiegers; we’re simply looking for a niche. Perhaps a “Head First” or “Dummies” approach may satisfy our niche “requirement” :). Got any ideas?
The biggest obstacle in front of me, and it is indeed huge, is simply this:
“My ambition is handicapped by laziness” – Charles Bukowski
Uncomfortable With Ambiguity
Uncertain: Not able to be relied on; not known or definite.
Ambiguous: Open to more than one interpretation; having a double meaning.
Every spiritual book I’ve ever read advises its readers to learn to become comfortable with “uncertainty” because “uncertainty is all there is”. BD00 agrees with this sage advice because fighting with reality is not good for the psyche.
Ambiguity, on the other hand, is a related but different animal. It’s not open-ended like its uncertainty parent. Ambiguity is the next step down from uncertainty: it consists of a finite number of choices, one of which is usually the best choice in a given context. The perplexing dilemma is that the best choice in one context may be the worst choice in a different context. D’oh!
Many smart people I admire say things like “learn to be comfortable with ambiguity”, but BD00 is not onboard with that approach. He’s not comfortable with ambiguity. Whenever he encounters it in his own work or (especially) in the work of others that his work depends on to move forward, he doggedly tries to resolve it – with whatever means are available.
The main reason behind BD00’s antagonistic stance toward ambiguity is that ambiguity doesn’t compile – but the code must compile for progress to occur. Thus, if an ambiguity isn’t consciously resolved between the ambiguity creator (ambiguator) and the ambiguity implementer (ambiguatee) before slinging code, it will be unconsciously resolved somehow. To avoid friction and perhaps confrontation between ambiguator and ambiguatee (because of differences in stature on the totem pole?), the implementer will make an arbitrary and “undisclosed” decision just to get the code to compile and progress to occur – most likely leading to functionally incorrect code and painful downstream debugging.
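To make the “doesn’t compile” point concrete, here’s a hypothetical C++ fragment. The temperature requirement, the function names, and the chosen unit are all invented for illustration; the point is how an unresolved ambiguity gets silently “resolved” by the ambiguatee:

```cpp
// Hypothetical illustration: the "shall" below is ambiguous about units,
// so the implementer silently resolves it just to get the code to compile.
//
//   "The system shall report the measured temperature."
//
#include <iostream>

double readSensorCelsius();  // assume the sensor hardware reports Celsius

double reportedTemperature() {
    // Undisclosed decision by the ambiguatee: report Fahrenheit, because
    // that's what *this* implementer assumed the customer meant. The code
    // compiles and "progress occurs", but the stakeholders never agreed.
    return readSensorCelsius() * 9.0 / 5.0 + 32.0;
}

double readSensorCelsius() { return 21.0; }  // stubbed for the sketch

int main() {
    std::cout << reportedTemperature() << '\n';  // prints 69.8 -- in which unit?
    return 0;
}
```

The compiler is perfectly happy either way; only the downstream debugger (or the customer) discovers which interpretation got baked in.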
So, BD00’s stance is to be comfortable with uncertainty but uncomfortable with ambiguity. Whenever you encounter the ambiguity beast, consciously attack it with a vengeance and publicly resolve it ASAP with all applicable stakeholders.
SysML Support For Requirements Modeling
“To communicate requirements, someone has to write them down.” – Scott Berkun
Prolific author Gerald Weinberg once said something like: “don’t write about what you know, write about what you want to know“. With that in mind, this post is an introduction to the requirements modeling support that’s built into the OMG’s System Modeling Language (SysML). Well, it’s sort of an intro. You see, I know a little about the requirements modeling features of SysML, but not a lot. Thus, since I “want to know” more, I’m going to write about them, damn it! 🙂
SysML Requirements Support Overview
Unlike the UML, which was designed as a complexity-conquering job performance aid for software developers, the SysML profile of UML was created to aid systems engineers during the definition and design of multi-technology systems that may or may not include software components (but which interesting systems don’t include software?). Thus, besides the well-known Use Case diagram (which was snatched “as is” from the UML) employed for capturing and communicating functional requirements, the SysML defines the following features for capturing both functional and non-functional requirements:
- a stereotyped classifier for a requirement
- a requirements diagram
- six types of relationships that involve a requirement on at least one end of the association.
The Requirement Classifier
The figure below shows the SysML stereotyped classifier model element for a requirement. In SysML, a requirement has two properties: a unique “id” and a free form “text” field. Note that the example on the right models a “non-functional” requirement – something a use case diagram wasn’t intended to capture easily.
One purpose for capturing requirements in a graphic “box” symbol is so that inter-box relationships can be viewed in various logically “chunked“, 2-dimensional views – a capability that most linear, text-based requirements management tools are not at all good at.
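As a concrete (if toy) way to think about that two-property classifier, here’s a minimal C++ sketch. The struct and both example requirements are my own invention for illustration – nothing here is defined by the SysML spec itself:

```cpp
// Toy sketch only -- not an official SysML API. A SysML requirement
// carries just the two properties mentioned above: a unique id and
// free-form text.
#include <iostream>
#include <string>

struct Requirement {
    std::string id;    // unique identifier, e.g. "REQ-042"
    std::string text;  // free-form "shall" statement
};

int main() {
    // A functional requirement and a non-functional one, side by side;
    // the ids and wording are invented for illustration.
    Requirement functional{"F-001",
        "The system shall log every operator command."};
    Requirement nonFunctional{"NF-001",
        "The system shall process 95% of track updates within 50 ms."};

    for (const auto& r : {functional, nonFunctional}) {
        std::cout << r.id << ": " << r.text << '\n';
    }
    return 0;
}
```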
Requirement Relationships
In addition to the requirement classifier, the SysML enumerates six different types of requirement relationships:
A SysML requirement modeling element must appear on at least one side of these relationships, with the exception of <<deriveReqt>> and <<copy>>, which both need a requirement on both sides of the connection.
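Continuing the toy sketch, here’s one hedged way to jot those relationships down in C++, assuming the six stereotypes are the ones described in the Friedenthal et al. reference below (deriveReqt, satisfy, verify, refine, trace, and copy); the link record and the ids are invented for illustration:

```cpp
// Toy sketch of the six requirement relationship stereotypes as named in
// Friedenthal et al.; the link record and ids are invented for illustration.
#include <iostream>
#include <string>
#include <vector>

enum class ReqRelationship { DeriveReqt, Satisfy, Verify, Refine, Trace, Copy };

struct ReqLink {
    std::string     clientId;    // model element (or requirement) on one end
    ReqRelationship kind;
    std::string     supplierId;  // requirement on the other end
};

int main() {
    // <<deriveReqt>> and <<copy>> relate two requirements; the others may
    // relate a requirement to some other model element (block, test case, ...).
    std::vector<ReqLink> links{
        {"REQ-SUB-7",   ReqRelationship::DeriveReqt, "REQ-SYS-1"},
        {"BLOCK-Radar", ReqRelationship::Satisfy,    "REQ-SUB-7"},
        {"TEST-Range",  ReqRelationship::Verify,     "REQ-SUB-7"},
    };
    std::cout << links.size() << " relationships modeled\n";
    return 0;
}
```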
Rather than try to write down semi-formal definitions for each relationship in isolation, I’m gonna punt and just show them in an example requirement diagram in the next section.
The Requirement Diagram
The figure below shows all six requirement relationships in action on one requirement diagram. Since I’ve spent too much time on this post already (a.k.a. I’m lazy) and one of the goals of SysML (and other graphical modeling languages) is to replace lots of linear words with 2D figures that convey more meaning than a rambling 1D text description, I’m not going to walk through the details. So, as Linda Richman says, “tawk amongst yawselves“.
References
1) A Practical Guide to SysML: The Systems Modeling Language – Sanford Friedenthal, Alan Moore, Rick Steiner
2) Systems Engineering with SysML/UML: Modeling, Analysis, Design – Tim Weilkiens
Increased Cost And Increased Time
Before the invention of the formal “Use Case“, and the less formal “User Story“, the classic way of integrating, structuring, and recording requirements was via the super-formal Software Requirements Specification (SRS). Just as “agile” was a backlash against “waterfall“, the lightweight “Use Case” was a major diss against the heavyweight “SRS“.
However, instead of replacing SRSs with Use Cases, I surmise that many companies have shot themselves in the foot by requiring the expensive and time-consuming generation and maintenance of both types of artifacts. Instead of decreasing the cost/time and increasing the quality of the requirements engineering process, they have most likely done the opposite – losing ground to smarter competitors who do one or the other effectively. D’oh! Is your company one of them?
Quantification Of The Qualitative
Because he bucked the waterfall herd and advocated “agile” software development processes before the agile movement got started, I really like Tom Gilb. Via a recent Gilb tweet, I downloaded and read the notes from his “What’s Wrong With Requirements” keynote speech at the 2nd International Workshop on Requirements Analysis. My interpretation of his major point is that the lack of quantification of software qualities (you know, the “ilities”) is the major cause of requirements screwups, cost overruns, and schedule failures.
Here are some snippets from his notes that resonated with me (and hopefully you too):
- Far too much attention is paid to what the system must do (function) and far too little attention to how well it should do it (qualities) – in spite of the fact that quality improvements tend to be the major drivers for new projects.
- There is far too little systematic work and specification about the related levels of requirements. If you look at some methods and processes, all requirements are ‘at the same level’. We need to clearly document the level and the relationships between requirements.
- The problem is not that managers and software people cannot and do not quantify. They do. It is the lack of ‘quantification of the qualitative’ that is the problem.
- Most software professionals when they say ‘quality’ are only thinking of bugs (logical defects) and little else.
- There is a persistent bad habit in requirements methods and practices. We seem to specify the ‘requirement itself’, and we are finished with that specification. I think our requirement specification job might be less than 10% done with the ‘requirement itself’.
I can really relate to items 2 and 5. Expensive and revered domain specialists often do little more than linearly list requirements in the form of text “shalls”; with little supporting background information to help builders and testers clearly understand the “what” and “why” of the requirements. My cynical take on this pervasive, dysfunctional practice is that the analysts themselves often don’t understand the requirements and hence, they pursue the path of least resistance – which is to mechanically list the requirements in disconnected and incomprehensible fragments. D’oh!
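To make the “quantification of the qualitative” point (item 3 above) concrete, here’s a small C++ sketch loosely in the spirit of Gilb’s Planguage (tag / scale / meter / past / goal). Every field name and number below is invented for illustration:

```cpp
// A hedged sketch of "quantification of the qualitative", loosely in the
// spirit of Gilb's Planguage. The names, scale, and numbers are invented --
// the point is that an "ility" gets a scale, a way to measure it, and
// target levels instead of a vague adjective.
#include <iostream>
#include <string>

struct QuantifiedQuality {
    std::string tag;    // short name for the quality
    std::string scale;  // the unit of measure for the quality
    std::string meter;  // how the measurement will actually be taken
    double      past;   // previously achieved level
    double      goal;   // level required for success
};

int main() {
    QuantifiedQuality learnability{
        "Operator.Learnability",
        "Minutes for a new operator to complete the standard task set",
        "Timed trial with 10 operators on the training simulator",
        45.0,   // past: measured on the previous release
        20.0};  // goal: required for the next release

    std::cout << learnability.tag << ": " << learnability.past
              << " -> " << learnability.goal << " (" << learnability.scale << ")\n";
    return 0;
}
```

Compare that with the usual “the system shall be easy to learn” and you can see what Gilb is driving at.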
The Best Defense
In “The Design Of Design“, Fred Brooks states:
The best defense against requirements creep is schedule urgency.
Unfortunately, “schedule urgency” is also the best defense against building a high quality and enduring system. Corners get cut, algorithm vetting is skipped, in-situ documentation is eschewed, alternative designs aren’t investigated, and mistakes get conveniently overlooked.
Yes, “schedule urgency” is indeed a powerful weapon. Wield it carefully, lest you impale yourself.
My Velocity
The figure below shows some source-code-level metrics that I collected on my last C++ programming project. I only collected them because the process was low ceremony, simple, and unobtrusive: I ran the source code tree through an easy-to-use metrics tool on a daily basis (a minimal sketch of that kind of daily collection appears after the list). The plots in the figure show the sequential growth in:
- The number of Source Lines Of Code (SLOC)
- The number of classes
- The number of class methods (functions)
- The number of source code files
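For the record, here’s the flavor of that low-ceremony daily collection as a minimal C++ sketch. This is not the actual metrics tool I used – class and method counts would require a real parser and are omitted; only files and a crude SLOC count are shown:

```cpp
// Hypothetical daily file/SLOC counter -- a stand-in for the "easy-to-use
// metrics tool"; run it once a day against the source tree and append the
// output to a log to get a growth curve like the one in the figure.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

int main(int argc, char* argv[]) {
    const fs::path root = (argc > 1) ? argv[1] : ".";
    std::size_t files = 0;
    std::size_t sloc = 0;

    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;
        const auto ext = entry.path().extension();
        if (ext != ".cpp" && ext != ".h" && ext != ".hpp") continue;

        ++files;
        std::ifstream in(entry.path());
        for (std::string line; std::getline(in, line); ) {
            // Crude SLOC: skip blank lines and pure "//" comment lines.
            const auto first = line.find_first_not_of(" \t");
            if (first == std::string::npos) continue;
            if (line.compare(first, 2, "//") == 0) continue;
            ++sloc;
        }
    }

    std::cout << files << " files, " << sloc << " SLOC\n";
    return 0;
}
```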
So Whoopee. I kept track of metrics during the 60 day construction phase of this project. The question is: “How can a graph like this help me improve my personal software development process?”.
The slope of the SLOC curve, which measured my velocity throughout the duration, doesn’t tell me anything my intuition can’t deduce. For the first 30 days, my velocity was relatively constant as I coded, unit tested, and integrated my way toward the finished program. Whoopee. During the last 30 days, my velocity essentially went to zero as I ran end-to-end system tests (which were designed and documented before the construction phase, BTW) and refactored my way to the end game. Whoopee. Did I need a plot to tell me this?
I’ll assert that the pattern in the plot will be unspectacularly similar for each project I undertake in the future. Depending on the nature/complexity/size of the application functionality that needs to be implemented, only the “tilt” and the time length will be different. Nevertheless, I can foresee a historical collection of these graphs being used to produce better cost estimates for future projects, but not being used much to help me improve my personal “process”.
What’s not represented in the graph is a metric that captures the first 60 days of problem analysis and high-level design effort that I did on the front end. OMG! Did I use the dreaded waterfall methodology? Shame on me.
Requirements Before, Design After
The figure below depicts a UML sequence diagram of the behavior of a simulator during the execution of a user defined scenario. Before the code has been written and tested, one can interpret this diagram as a set of interrelated behavioral requirements imposed on the software. After the code has been written, it can be considered a design artifact that reflects what the code does at a higher level of abstraction than the code itself.
Interpretations like this give credence to Alan Davis’s brilliant quote:
One man’s requirement is another man’s design
Here’s a question: do you think the behavioral requirements specified in the diagram would have been better conveyed via a user story or a use case description?
The Requirements Landscape
Kurt Bittner, of Ivar Jacobson International, has written a terrific white paper on the various approaches to capturing requirements. The mind map below was copied and pasted from Kurt’s white paper.
In his paper, Bittner discusses the pluses and minuses of each of his defined approaches. For the text-based “declarative” approaches, he states the pluses as: “they are familiar” and “little specialized training” is needed to write them. Bittner states the minuses as:
- They are “poor at specifying flow behavior”
- It’s “hard to connect related requirements”
IMHO, as systems get more and more complex, these shortcomings lead to bigger and bigger schedule, cost, and quality shortfalls. Yet, despite the advances in requirements specification methodologies nicely depicted in Bittner’s mind map, defense/aerospace contractors and their bureaucratic government customers seem to be forever married to the text-based “shall” declarative approach of yesteryear. Dinosaur mindsets, the lack of will to invest in corpo-wide training, and expensive past investments in obsolete and entrenched text-based requirements tools have prevented the newer techniques from gaining much traction. Do you think this encrusted way of specifying requirements will change anytime soon?