Convex, Not Linear
For a large, complex software system that can be represented as an instantiation of the above template, three levels of testing must be performed before fielding the system: unit, integration, and system. Unit testing shakes out some (hopefully most) of the defects in the local, concretely visible micro-behavior of each of possibly thousands of units. Integration testing unearths the emergent behaviors, both intended and unintended, of each individual subsystem assembly. System-level, end-to-end testing is designed to do the same for the intended/unintended behaviors of the whole shebang.
Now that the context has been set for this post, let’s put our myopic glasses on and zero in on the activity of unit testing. Unit testing is unarguably a “best practice”. However, just because it’s a best practice, does that mean we should, as one famous software character has hinted, strive to “turn the dials up to 10” on unit testing (or any other best practice)?
Check out these utterly subjective, unbacked-by-data, convex ROI graphs:
If you believe these graphs hold true, then you would think that at some point during unit testing, you’d get more bang for the buck by investing your time in some other value-added task – such as writing another software unit or defining and running “higher level” integration and/or system tests on your existing bag of units.
Now, check out these utterly subjective, unbacked-by-data, linear, “turn the dials up to 10” ROI graphs:
People who have drunk the whole pitcher of unit-testing Kool-Aid tend to believe that the linear model holds true. These miss-the-forest-for-the-trees people have no qualms about requiring their developers to achieve arbitrarily high unit test coverage percentages (80%, 85%, 90%, 100%) whilst simultaneously espousing the conflicting need to reduce cost and delivery time.
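To make the contrast between the two models concrete, here’s a toy sketch in Python. In the same unbacked-by-data spirit as the graphs above, the curve shapes, constants, and the assumed value of an hour spent elsewhere are pure illustrative assumptions, not measurements. The convex model treats the cumulative value of unit testing as a saturating curve, so the payoff of each additional hour shrinks; the linear model pays out the same amount per hour forever.

```python
import math

# Toy models of cumulative value returned per hour spent unit testing.
# All shapes and constants below are illustrative assumptions, not data.

def convex_value(hours, scale=100.0, rate=0.15):
    """Diminishing returns: value saturates toward `scale`."""
    return scale * (1.0 - math.exp(-rate * hours))

def linear_value(hours, value_per_hour=3.0):
    """'Turn the dials up to 10': every hour pays the same."""
    return value_per_hour * hours

# Assumed payoff of spending that hour on integration/system testing instead.
ALTERNATIVE_VALUE_PER_HOUR = 3.0

for h in range(0, 41, 5):
    # Marginal value of the *next* hour under each model.
    convex_marginal = convex_value(h + 1) - convex_value(h)
    linear_marginal = linear_value(h + 1) - linear_value(h)
    flag = "<- switch tasks" if convex_marginal < ALTERNATIVE_VALUE_PER_HOUR else ""
    print(f"hour {h:2d}: convex marginal {convex_marginal:5.2f}, "
          f"linear marginal {linear_marginal:5.2f} {flag}")
```

With these made-up numbers the convex model’s “switch tasks” flag flips somewhere past the tenth hour; under the linear model it never flips, which is exactly the “more coverage is always worth it” mindset.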
Given a fixed amount of time for unit + integration + system testing in a finite-resource environment, how should that time be allocated amongst the three levels of testing? I don’t have any definitive answer, but when schedules get tight, as so often happens in the real world, something has gotta give. If you believe the convex unit testing model, then lowering unit test coverage mandates should be higher on your list of potential sacrificial lambs than cutting integration or system test time – where the emergent intended and, more importantly, unintended behaviors are made visible.
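If you take the convex model seriously, one crude way to think about splitting a fixed testing budget is greedily: give each hour to whichever level of testing currently offers the biggest marginal payoff. The sketch below does exactly that with three invented saturating curves; the scales and rates are assumptions for illustration only, not a recommendation for any real project.

```python
import math

# Purely illustrative saturating value curves for each test level.
# The (scale, rate) pairs are invented numbers, not measurements.
CURVES = {
    "unit":        (60.0, 0.20),   # pays off fast, saturates early
    "integration": (80.0, 0.08),
    "system":      (90.0, 0.05),   # pays off slowly, keeps paying longer
}

def value(level, hours):
    scale, rate = CURVES[level]
    return scale * (1.0 - math.exp(-rate * hours))

def allocate(total_hours):
    """Greedy allocation: each hour goes to the level with the best marginal value."""
    spent = {level: 0 for level in CURVES}
    for _ in range(total_hours):
        best = max(CURVES, key=lambda lvl: value(lvl, spent[lvl] + 1) - value(lvl, spent[lvl]))
        spent[best] += 1
    return spent

print(allocate(40))   # a roomy schedule
print(allocate(20))   # what gives when the schedule gets cut in half
```

The specific numbers don’t matter; the point is that once you accept convexity, the hours naturally drift away from ever-higher unit coverage and toward the levels where the emergent behaviors show up.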
Like Big Requirements Up Front (BRUF) and its dear sibling, Big Design Up Front (BDUF), perhaps we should add “Big Unit Testing Tragedy” (BUTT) to our bag of dysfunctional practices. But hell, it won’t be done. The linear thinkers seem to be in charge.
Note: Jim Coplien has been eloquently articulating the wastefulness of too much unit testing for years. This post is simply a similar take on his position.