
Asstimation!

Here’s one way of incrementally learning how to generate better estimates:

Like most skills, learning to estimate well is simple in theory but difficult in practice. For each project, you measure and record the actuals (time, cost, number/experience/skill-types of people) you’ve invested in it. You then use your historical metrics database to estimate what it should take to execute your next project. You can even supplement/compare your empirical, company-specific metrics with industry-wide metrics available from reputable research firms.
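
To make that loop concrete, here is a minimal sketch of the idea in Python. The schema, the sample projects, and the estimation rule (median duration of similar past projects) are illustrative assumptions, not a prescribed method:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class ProjectActuals:
    """Actuals recorded after a project finishes (hypothetical schema)."""
    name: str
    category: str          # e.g. "embedded", "web", "infrastructure"
    team_size: int
    duration_weeks: float  # measured, not guessed
    cost: float

# The "historical metrics database" -- here just an in-memory list of made-up projects.
history = [
    ProjectActuals("telemetry-gateway", "embedded", 4, 26.0, 310_000),
    ProjectActuals("ops-dashboard", "web", 3, 14.0, 150_000),
    ProjectActuals("sensor-fusion", "embedded", 5, 32.0, 420_000),
]

def estimate_duration_weeks(category: str) -> float:
    """Estimate a new project's duration from the actuals of similar past projects."""
    similar = [p.duration_weeks for p in history if p.category == category]
    if not similar:
        raise ValueError(f"no actuals recorded for category {category!r}")
    return median(similar)

print(f"Estimated duration: {estimate_duration_weeks('embedded'):.1f} weeks")
```

The particular statistic doesn’t matter much; the point is that the estimate is derived from recorded actuals (and can be sanity-checked against industry benchmarks) rather than conjured from thin air.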

It should be obvious, but good estimates are useful for org-wide executive budgeting and allocation of limited capital and human resources. They’re also useful for aiding a slew of other org-wide decisions (do we need to hire/fire, take out a loan, restrict expenses in the short term, etc.). Nevertheless, just like any other tool/technique used to develop non-trivial software systems, the process of estimation is fraught with the risk of “doing it badly”. It requires discipline and perseverance to continuously track and record project “actuals”. Perhaps hardest of all is the ongoing development and maintenance of a system where historical actuals can be easily categorized and painlessly accessed to compose estimates for important impending projects.

In the worst cases of estimation dysfunction, the actuals aren’t tracked/recorded at all, or they’re hopelessly inaccessible, or both. Forgoing the thoughtful design, installation, and maintenance of a historical database of actuals (rightfully) fuels the radical #noestimates twitter community and leads to the well-known, well-trodden practice of:

asstimation

Categories: technical
  1. November 20, 2014 at 1:06 am

    Here in defense and space, it’s called the rectal database. Three reasons programs overrun:
    1. We couldn’t know – it’s a science project
    2. We didn’t know – we didn’t do our homework
    3. We don’t want to know – if the customer knew the cost, they wouldn’t fund the program

    The James Webb telescope started out at under a billion; now it’s at 7 billion and counting.

    • November 20, 2014 at 3:55 am

      Hah! Over my long and un-illustrious career I’ve participated in several “science” projects that were either assumed or “hoped” to be turn-the-crank, no-problemo projects. I think your number three is pathologically systemic. Competitors consistently underbid big-bux development programs in the hope of making money when the product is placed into production several years after the big “win”.

  2. November 20, 2014 at 9:45 am

    Schedules generally bother developers, and for good reasons:

    A). They start out barely correct and lacking many details.
    B). They are seldom updated.
    C). They are treated as if they are written in stone and cannot change.
    D). They are linear models of a non-linear process.

    I believe management should be more worried about the non-linear bullocks that will inevitably creep in, and how to deal with those droppings. This would make for better schedules. Also, whenever a new “oh shit” shows up, the schedule should be adjusted for it.

    In general, the linear parts of the process are the easy parts to model, and these are the only parts that people can wrap their heads around. Alas, dealing with non-linearities has plagued science and engineering since the dawn of their existence, hence the voluminous body of mathematical techniques and transforms used to linearize problems. If schedules were built around the “oh shit” nuggets and a prediction of how many turds would be present, they would be much more accurate. Now go flush your browser cookies.

    • November 20, 2014 at 9:58 am

      Phillipe, you look like a serious Drew Carey (before he lost the weight) in your new pic 🙂

      Hey, I just noticed you re-entered the blogosphere. Good for you.

    • November 20, 2014 at 10:17 am

      Phillipe,

      The non-linearities are the outcome of all complex systems. Complex systems are the outcome of all complex problems. Avoiding those unnecessary complexities is the role of the systems engineering and architectural frameworks – from Zachmann, to ToGAF, to DODAF.
      For the most part the easy problems have been solved, and what remains are the hard problems. Several I’m familiar with directly: http://goo.gl/72EOY4 and Class I of http://goo.gl/vtvRQl

      The planning and scheduling of these systems has moved from linear models to stochastic network processes. But most important is the movement to the Integrated Master Planning paradigm, where incremental increases in product maturity are used to assess Measures of Effectiveness and Measures of Performance.

      The recent work of Boehm, Lane, Kollmanjwong, and Turner in The Incremental Commitment Spiral Model provides an approach to managing these complexities.

    • November 20, 2014 at 11:22 am

      Don’t forget that since you’re (supposedly) using “actuals” from past experience, some coverage of the “oh shit” nuggets and non-linearity of the process will be accounted for in the data. That’s the main reason for recording the actuals. The dysfunction you’re talking about is yet another side effect of not recording actuals and not “reusing” them in the future – Asstimation 🙂

      • November 20, 2014 at 11:26 am

        Past performance is a statistical process as well, whose behaviour must be accounted for: http://goo.gl/o1m0Wl
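
A rough illustration of the “stochastic network processes” idea discussed in the thread above: instead of summing single-point task durations, each task’s duration is drawn from a distribution and the spread of total schedule outcomes is examined. The tasks, the triangular ranges, and the serial chain below are purely hypothetical:

```python
import random

# Hypothetical tasks with (optimistic, most-likely, pessimistic) durations in weeks.
tasks = {
    "requirements":       (2, 3, 6),
    "design":             (3, 4, 8),
    "implementation":     (6, 9, 18),
    "integration & test": (4, 6, 14),
}

def simulate_totals(trials: int = 10_000) -> list:
    """Monte Carlo over the task network (simplified here to a serial chain)."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks.values()))
    return totals

totals = sorted(simulate_totals())
p50 = totals[len(totals) // 2]          # median outcome
p80 = totals[int(len(totals) * 0.8)]    # 80th-percentile outcome
print(f"50% confidence: {p50:.1f} weeks, 80% confidence: {p80:.1f} weeks")
```

Fed with recorded actuals rather than guessed ranges, the same machinery produces a distribution of likely outcomes instead of a single point estimate.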

