I used to be fond of quoting a statistic that says you can only find around 40% of your own mistakes.
Michael Stahl emailed me to ask where this number came from. Interesting question - my first thought was: I don't remember! I'm sure I must have read it somewhere at some time, but where, by whom, and was it based on a study?
I checked with Mark Fewster, one of my former colleagues, and he thinks it might have come from a study done by the Open University in the UK.
I checked with Tom Gilb, as he uses an estimate of around a third (33%) for the effectiveness of an initial inspection - which is probably more effective than an individual working alone anyway! Tom has demonstrated an effectiveness of 33% repeatedly in experiments with early Inspections; he said it also agrees with Capers Jones' data.
I think we used the figure of 40% only because people found it more believable than 33%.
The frightening consequence is that if you don't have anyone else review your work, you are likely to leave in around two thirds of your own mistakes!
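Just to make the arithmetic concrete, here is a tiny sketch (assuming 100 defects and the ~33% figure quoted above; the numbers are illustrative, not measured data):

    # Illustrative only: assumes 100 defects made and the ~33% self-review
    # effectiveness figure quoted above.
    own_defects = 100
    self_review_effectiveness = 0.33

    found = own_defects * self_review_effectiveness
    remaining = own_defects - found

    print(f"Found by self-review: {found:.0f}")      # 33
    print(f"Still left in:        {remaining:.0f}")  # 67 - about two thirds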
Thursday, 14 January 2010
Saturday, 2 January 2010
DDP Discussions and challenges
Several people have asked about benchmarks for DDP. I have actually covered this, but my answers are "buried" in the comments on the post "Starting with DDP". Please have a look at the comments on that post, which include:
- benchmarking DDP with other organisations (raised by Bernhard Burger)
- Paul Herzlich's challenges about the seriousness of defects, getting data, DDP being hard to use, and code-based metrics (all of which I have replied to in my following comment)
- using DDP to improve development (raised by Ann-Charlotte)
- Michael Bolton's challenges:
  - 5 examples to show when it doesn't work (including some Dilbert Elbonian testers ;-) ), which I reply to in my first comment following his
  - 7 "problems" - some of which I agree with, some I don't understand, and some I think illustrate the benefit of DDP rather than being problems with it (replied to in my second comment after his)
Thanks to Michael B's comments, I also formulated 3 Rules for when to use DDP:
DDP is appropriate to use when:
1) you keep track of defects found during testing
2) you keep track of defects found afterwards
3) there are a reasonable number of defects – for both 1) and 2)
These are not the whole story (as illustrated by Michael's examples), but I think they are a prerequisite to sensible use of DDP; a small sketch of the calculation follows below.
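As a rough sketch of the calculation behind these rules (using the usual definition of DDP - defects found by testing as a percentage of all defects known so far - and made-up numbers):

    def ddp(found_in_testing: int, found_afterwards: int) -> float:
        """Defect Detection Percentage: defects found by a test stage as a
        percentage of all defects known so far (in testing + afterwards)."""
        total_known = found_in_testing + found_afterwards
        if total_known == 0:
            # Rule 3: without a reasonable number of defects, DDP is not meaningful.
            raise ValueError("no defects recorded - DDP is not meaningful")
        return 100.0 * found_in_testing / total_known

    # Made-up example: 80 defects found in testing, 20 reported afterwards.
    print(f"DDP = {ddp(80, 20):.0f}%")  # DDP = 80%

Rules 1) and 2) supply the two inputs; rule 3) is why the calculation should refuse to produce a number when there is too little data.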