The Art and War of Estimating and Scheduling Software

August 29, 2007 11:18 am · 4 comments

In my experience, the majority of estimates and schedules for software development projects are derived from hunches, guesswork, and gut instincts.  We say things like “that should take about 4 hours of work” without backing it up with data.  And we set timelines based on those estimates without adjusting for past accuracy in estimating.  And what about the tendency of the customer / stakeholder to change scope midstream?  That should drive how big the “buffer” is in the schedule for changing requirements.

I’m as guilty as the next person in many cases.  I fight an internal battle on this issue.  Part of me wants to just be a carefree developer with little to no regard for estimates.  I dare say that my time as a computer science student was when I got stuck in that trap.  Nobody taught me to estimate my work, let alone track the accuracy of my estimates over time.  I usually had ample time to do assignments and the code flowed easily for me so I never really worried about how long I spent working on things.  [In recent years, I know that people like Rick Wightman at the University of New Brunswick have been working on teaching students how to think more like professional developers.]

The professional developer in me knows that good estimates are essential to everyone in the development chain.

The businessperson in me cherishes accurate estimates.

ACM Queue has an interview with Joel Spolsky in which he talks a bit about evidence-based scheduling (which apparently is somehow supported in FogBugz):

In evidence-based scheduling, you come up with a schedule, and a bunch of people create estimates. And then, instead of adding up their estimates – instead of taking them on faith – you do a little Monte Carlo simulation where you look at what speeds developers had in the past, vis-à-vis their estimates. You use that same distribution of probabilities, as we call them, that you had in the past and run a simulation of all of your futures. What you get, instead of a date, is a probability distribution curve that shows the probability that the product will ship on such-and-such a date.

Nice!  I need to try that.  Sounds so much better than just using spreadsheets to track history.
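Joel's description boils down to a short simulation. Here is a minimal sketch of the idea (not FogBugz's actual algorithm; the developers, velocity histories, and task estimates are all invented for illustration): divide each new estimate by a velocity drawn at random from that developer's past estimate-to-actual ratios, repeat many times, and read off a distribution instead of a single date.

```python
import random

# Hypothetical historical velocities: estimated hours / actual hours.
# A velocity of 0.5 means a task took twice as long as estimated.
history = {
    "alice": [1.0, 0.8, 1.2, 0.5, 0.9],
    "bob":   [0.6, 0.5, 0.7, 0.4, 0.8],
}

# Current estimates (hours) for the remaining tasks, per developer.
estimates = {
    "alice": [4, 8, 16],
    "bob":   [8, 8, 24],
}

def simulate_ship_hours(rounds=10_000):
    """Monte Carlo simulation of total remaining work.

    Each round, divide every estimate by a velocity drawn at random
    from that developer's history, then take the slowest developer's
    total (assuming the developers work in parallel).
    """
    outcomes = []
    for _ in range(rounds):
        totals = {}
        for dev, tasks in estimates.items():
            totals[dev] = sum(t / random.choice(history[dev]) for t in tasks)
        outcomes.append(max(totals.values()))
    return sorted(outcomes)

outcomes = simulate_ship_hours()
# Instead of one date, report a distribution: e.g. the 50th and 90th
# percentiles of the simulated completion times.
p50 = outcomes[len(outcomes) // 2]
p90 = outcomes[int(len(outcomes) * 0.9)]
print(f"50% confidence: {p50:.0f} hours, 90% confidence: {p90:.0f} hours")
```

The payoff is exactly what Joel describes: you can tell a stakeholder "90% chance we ship within X hours" rather than handing over one optimistic number.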

What’s most interesting and useful about tracking estimate-to-actual history on a per-developer basis is that you get data about how accurate the hunch / gut instinct of each person really is.  This is so much more powerful than just collecting team or project based accuracy.  Of course there will be outliers in the data, like when a developer who does a lot of similar tasks has to estimate something new or unrelated.

I asked a friend who is a project manager that I respect (and trust) to comment on this.  (I was wondering if there was a more popular term than “evidence-based scheduling” or other good tools to support it.) At his old job he used to teach an estimating course for developers.  Check out the part I highlighted:

Well when you run a project you have a bunch of planned dates (when are you planning to be done), and you are really supposed to keep track of actual dates (when did the work actually finish).  If you are really keen (and perhaps have a database to track this by resource, and perhaps by technology and task type)… you should be able to form some projections on developers… i.e. Billy always takes twice as long to do a design but he does a detailed job so the coding goes twice as fast.  You can use this information to do a gut check on teams too… but Joel is right … these estimates are as complex as the people who are on the teams and the types of tasks they are working on.  There is also the issue of doing something for the first time… very hard to estimate.  It is only when you do something multiple times that you get good at estimating… and only if you are looking back on your estimates looking for trends.  What Joel is describing is putting some trends to the old estimates-vs.-actuals data.

Not sure what to call it.  But PMs are always trying to log the history of missing or hitting deadlines, and you very quickly get a sense of which developers have good estimates, and which developers you need to double, triple, etc… On my last project I had a developer tell me he would be able to get something done by Friday of that week, and it actually took about 3 months of 8 people full time… ;-)  I’ve heard the statistic of 8:1 (fastest developer to slowest developer).

To my recollection, I have never had a developer highlight his or her estimating accuracy on a resume or during an interview.  That would be an interesting thing to do.  It would blow me away if a candidate told me they kept a spreadsheet of their personal estimates-to-actuals history.  That would be an impressive artifact!
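That kind of spreadsheet is trivial to maintain. As a sketch (the task names and hours below are invented), a personal log of estimates vs. actuals reduces to one ratio per task and a running average:

```python
# A hypothetical personal estimation log: (task, estimated hrs, actual hrs).
log = [
    ("login form",    4.0,  6.0),
    ("report export", 8.0, 14.0),
    ("schema change", 2.0,  2.5),
    ("search tweak",  3.0,  3.0),
]

# Accuracy ratio per task: actual / estimate.  1.0 is a perfect estimate;
# 2.0 means the task took twice as long as promised.
ratios = [actual / est for _, est, actual in log]
average = sum(ratios) / len(ratios)

print(f"average overrun factor: {average:.2f}")
# A PM could multiply this developer's raw estimates by `average`
# to get a history-adjusted figure.
```

The single "overrun factor" is exactly the per-developer multiplier the PM above describes when he talks about knowing "which developers you need to double, triple, etc."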

4 Comments

  • One thing that I learned at my last gig was to factor Uncertainty and Complexity into each item, which helps in determining estimates for a particular feature. So for example:

    Developer – Fred
    Knows VB.NET and SQL Server 2005, Intermediate Windows Developer

    Work Item – Insert object data from an ASP.NET app written in C# into an Oracle database table. All components must be written from scratch.

    Complexity: Writing some data to a table isn’t that complex…we’re not throwing anything crazy at him. Out of 10, he rates this as a 2.

    Uncertainty: Fred has never actually touched any of the required technologies, although he has worked with a .NET language and understands interacting with an RDBMS and its tools. Since he hasn’t actually used either, his Uncertainty is very high. Out of 10, he rates it an 8.

    So now we know that this task, for this resource, isn’t complex but isn’t a “can do it in my sleep” sort of thing either. This information can help project managers drive out real estimates from the hours quoted by the resources.

    In fact, one spreadsheet I’ve seen used in a previous gig relied ENTIRELY on those two values to drive all other estimates. Features were distilled enough that you never had to worry about the feature size: all would be relative. If one was too big, that was a sign it needed to be refactored into smaller features.

    Anyway, just adding to the conversation. Great post!

    D

  • Just curious, how did you arrive at 2/10 for Complexity and 8/10 for Uncertainty? An educated guess?

  • Yeah…I don’t think estimates will ever truly be calculable as long as humans are the machines doing the work (way too many variables in that), so it comes down to educated guesses. There’s definitely a human part of this too though:

    – Is the developer someone with years of experience and a track record of good estimating?
    – Is the developer someone who wants to please their boss and always understates certainty and complexity?
    – Does the technology itself dictate the numbers without any human input?

    For instance, I need a developer to execute a LINQ call to a collection of objects to return a subset for binding to a grid. Now that in itself doesn’t sound terribly complex: we’re not saying that we need a generic base-class to be created through a factory and execute a dynamically loaded LINQ statement against a collection of objects created via NHibernate…we just want a simple call executed against a pre-existing collection.

    The complexity of this is probably relatively low…but the ability of the developer comes into play: How quickly does the dev pick up new technologies, have they written data-tier operations before, etc.

    With LINQ being so new, there’s great uncertainty as well: it doesn’t *seem* complex, but then again we’ve never touched it before so we don’t really know what we’re getting into.

    I suppose that asking directly for a developer’s thoughts on Complexity and Uncertainty is just more discrete than asking “How long do you think it’ll take you?” With that question alone, a developer never works through the more telling trains of thought that “How complex do you feel it is?” and “How certain are you that you can do it?” can prompt.

    D

  • phloid domino

    Can’t dispute the value of accurate estimates, nor the merit, on the surface, of trying to be ‘scientific’ about tracking data and improving, etc, BUT…

    Almost every discussion, blog, book, whatever on this topic ignores the elephant in the room: the lack of good faith on the part of most managements of most software development organizations.

    As long as the obsession with schedules prevails over all else, developers will feel pressure to make ‘ambitious’ estimates. Factor in that scope and requirements are never stable, so any correlations between original estimates and ‘completion’ will always be moot.

    The above describes more than 90% of all corporate software environments, and I speak from experience, having worked as either a contractor or employee at dozens of companies over 20 years, including well recognized big brand name software publishers. They are all the same.

    It’s the managers, stupid.