I used to be fond of quoting a statistic that says you can only find around 40% of your own mistakes.
Michael Stahl emailed me to ask where this number came from. Interesting question - my first thought was: I don't remember! I'm sure I must have read it somewhere at some time, but where, by whom, and was it based on a study?
I checked with Mark Fewster, one of my former colleagues, and he thinks it might have come from a study done by the Open University in the UK.
I checked with Tom Gilb, as he uses an estimate of around a third (33%) for the effectiveness of an initial inspection - which is probably more effective than an individual working alone anyway! Tom has repeatedly demonstrated an effectiveness of 33% in experiments with early Inspections; he said it also agrees with Capers Jones' data.
I think we used the figure of 40% only because people found it more believable than 33%.
The frightening consequence is that if you don't have anyone else review your work, you are likely to leave in around two thirds of your own mistakes!
Thursday, 14 January 2010
8 comments:
They also say that 90% of all statistics are made up.
Clearly there is a compelling argument to have others review your work. Otherwise you risk not finding many mistakes.
But creating a number (and adjusting that number to make it more believable), doesn't really strengthen the argument, IMHO.
Thanks Joe.
I see your point about weakening the argument if you adjust the number, and I agree with you - in theory.
The reason we did it was that a more "believable" number would lead managers to take action (and then they would find out that the number was worse than they had thought). But a number that wasn't believable would actually be ignored, so it was less effective.
And I had always believed it was only 43% of statistics that were made up on the spot - things are even worse than I thought!
;-)
This is very troubling to me.
In this business, my reputation is all I've got. People rely on me, as a tester, to tell them the truth about the product and the project when everyone else is outright dissembling (which is only occasionally the case) or overly optimistic (more often).
If I'm going to be considered credible, I need to provide some evidence that warrants my assertions. What I see here is "we made up a number, but that number sounded unpleasant, so we made up a different number".
Why not avoid damage to your reputation by saying something that can be warranted, without having to resort to bogus figures? Why not report compelling stories from your own experience? Why not relay other people's stories with attribution, if that works for you? Why not give people an exercise to show how easily they can be fooled?
Why not enhance your reputation by doing sufficient research, or a study of the existing literature, or even enough research to cite plausible data from even one plausible study?
How should your clients who place trust in you to tell the truth about the state of their products and their projects interpret that? How should they apply knowledge of the fact that you're pulling numbers out of the air sometimes?
---Michael B.
Hi,
The difficulty in software testing stems from the growing complexity of software on the net.
http://www.softwaretestingnet.com/
Hi hs,
Yes, things to test are getting ever more complex.
So maybe we find an even lower percentage of defects now?
Hi Michael,
Thanks for your comment.
It wasn't actually "we made up a number", it was "we have been using a number that we are sure came from a reputable source, but when asked, couldn't remember where it came from". (Hence my checking with Tom etc. in response to this email.)
However, I do take your point about putting forward an exaggerated/altered number to make it more acceptable, and I accept that this was not a good idea, even in a situation where there wasn't time to substantiate the evidence-based number. We should have stuck with the actual number and challenged people to prove it wrong. Mea culpa!
I'm often in the position of justifying the cost of Quality Assurance. My experience shows that it's a great value, but when people are trying to cut costs, QA is often the first thing cut.
People often tell me that since we have excellent developers, we shouldn't need QA. The information in this post will be one more way I can show the value of QA. Thanks!
When I started this thread, I tried to find the source for the idea that you can only find around 30% of errors. It was frustrating to me that I couldn't chase down the source, but now I think I have found it (if anyone else is still interested!)
In clearing out some files, I came across a paper by Capers Jones, dated 28th July 1995, for the IEEE Computer "Software Challenges" column. The paper is called "Software Estimating Rules of Thumb".
Capers states that although they are not a substitute for a formal estimate, these "rules of thumb" are capable of providing rough early estimates or sanity checks of formal estimates.
Here is what he says:
"Rule 7: Each software review, inspection or test step will find and remove 30% of the bugs that are present."
He then points out that this means a series of between 6 and 12 consecutive defect removal operations is needed to achieve very high quality levels.
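For anyone curious why 6 to 12 steps follow from that rule, here is a small Python sketch - my own illustration, not from Capers' paper - assuming each step independently removes 30% of whatever bugs are still present:

```python
# Illustrative sketch only: assumes each review/inspection/test step
# independently removes 30% of the bugs still present at that point.

REMOVAL_RATE = 0.30  # fraction of remaining bugs found and removed per step

def remaining_fraction(steps: int, removal_rate: float = REMOVAL_RATE) -> float:
    """Fraction of the original bugs still left after `steps` operations."""
    return (1.0 - removal_rate) ** steps

for steps in (1, 3, 6, 12):
    left = remaining_fraction(steps)
    print(f"{steps:2d} steps: {left:5.1%} of original bugs remain")
```

Under those assumptions, roughly 12% of the original bugs are still there after 6 steps, and under 2% after 12 - which is broadly the range he describes for very high quality levels.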