Thursday, 27 May 2010

Automated tests should find bugs? No!

I have recently been having what seems like the same discussion with a number of different people.

"Automated tests should find bugs" or "find more bugs" is a very common misconception. Basically this says that finding bugs is a valid objective for automation. I don't agree - I think this is generally a very poor objective for test automation. The reasons are to do with the nature of testing and of automation.

Testing is an indirect activity, not a direct one. We don't just "do testing", we "test something". (Testing is like a transitive verb which requires an object to be grammatically correct.) This is why the quality of the software we test has a large impact on testing: if a project is delayed because testing finds lots of bugs, we shouldn't blame the testing! (I hope that most people realize this by now, but do have my doubts at times!) Testing is not responsible for the bugs inserted into software any more than the sun is responsible for creating dust in the air. Testing is a way of assessing the software, whatever the quality of that software is.

Test automation is doubly indirect. We don't "do automation", we "automate tests that test something".

Automation is a mechanism for executing tests, whatever the quality of those tests is (tests that assess the software, whatever the quality of that software is).

Bugs are found by tests, not by automation.

It is just as unfair to hold automation responsible for the quality of the tests as it is to hold testing responsible for the quality of the software.

This is why "finding bugs" is not a good objective for test automation. But there are a couple more points to make.

Most people automate regression tests. Regression tests by their nature are tests that have been run before and are run many times. The most likely time a test will find a bug is the first time it is run, so regression tests are less likely to find bugs than say exploratory tests. In addition the same test run for a second time (and more) is even less likely to find a bug. Hence the main purpose of regression tests (whether automated or not) is to give confidence that what worked before is still working (to the extent that the tests cover the application).
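To make the distinction concrete, here is a minimal sketch of what an automated regression test typically looks like. The `checkout_total` function and its expected values are invented for illustration; the point is that the assertions encode behaviour that worked before, and the automation merely re-runs them.

```python
# Minimal sketch of an automated regression test.
# The function under test and its expected values are hypothetical.

def checkout_total(prices, discount=0.0):
    """Toy 'application' function: total of a basket after a discount."""
    return round(sum(prices) * (1.0 - discount), 2)

def test_checkout_total_regression():
    # These assertions capture behaviour that worked before. The automation
    # only re-runs them; it reports a bug only if that captured behaviour
    # changes - the test, not the automation, decides what counts as a failure.
    assert checkout_total([10.0, 5.0]) == 15.0
    assert checkout_total([10.0, 5.0], discount=0.1) == 13.5
    assert checkout_total([]) == 0.0

if __name__ == "__main__":
    test_checkout_total_regression()
    print("regression checks passed")
```

Notice that nothing in the harness itself knows anything about bugs: if the assertions are weak, the automation will happily pass a broken build.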

Of course, this is complicated by the fact that because automated tests can be run more often, they do sometimes find bugs that wouldn't have been found otherwise. But even this is not because those tests are automated, it is because they were run. If the tests that are automated had been run manually, then those manual tests would have found the bugs. So even this bug-finding is a characteristic of the tests, not of the automation.

So should your goal for automation be to find bugs? No! At least not if you are planning to automate your existing regression tests.

I have been wondering if there may be two exceptions: Model-Based Testing (where tests are generated from a model), and mature Keyword-driven automation, i.e. using a Domain Specific Test Language. In both cases, the first time a test is run is in its automated form.
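As a rough illustration of the model-based case, tests can be generated by walking a state model, so each generated test is run for the first time in automated form. The login model below is entirely hypothetical, and real MBT tools are far more sophisticated; this only sketches the idea.

```python
# Sketch of model-based test generation (hypothetical model and system).
# A tiny state model of a login dialog; tests are action sequences
# enumerated by walking the model.

MODEL = {
    # state: {action: next_state}
    "logged_out": {"login_ok": "logged_in", "login_bad": "logged_out"},
    "logged_in": {"logout": "logged_out"},
}

def generate_tests(model, start, depth):
    """Enumerate all action sequences of exactly `depth` steps - each one
    is a test case that has never been run manually."""
    paths = [([], start)]
    for _ in range(depth):
        new_paths = []
        for actions, state in paths:
            for action, next_state in model[state].items():
                new_paths.append((actions + [action], next_state))
        paths = new_paths
    return [actions for actions, _ in paths]

tests = generate_tests(MODEL, "logged_out", 2)
print(len(tests), "generated tests, e.g.", tests[0])
# → 3 generated tests, e.g. ['login_ok', 'logout']
```

Each generated sequence would then be executed against the real system, with the model predicting the expected state after every step.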

But hang on, this means that again it is the tests that are finding the bugs, not the fact that those tests are automated!

"Finding bugs" is a great objective for testing - but it is not a good objective for automation.

10 comments:

Calkelpdiver said...

Dorothy,

I'm in 100% agreement with you. Automation is a tool for execution of the tests themselves, nothing more. A test is only as good as the person who wrote it originally. And 99% of the time the automation is being used for regression-style testing. Regression testing is run after the first pass on the code to make sure something that was working before (or didn't, if you test for that via negative tests) is still the same. A regression test (automated or not) will only catch a difference from some baseline, i.e. that the code 'regressed'.
Automation allows you to do this in a repeatable fashion and gain efficiencies in execution by spreading the workload, with some semblance of speed. Which is something of an illusion in itself.
After all "It's automation, Not automagic".
Anyway... this one gets forwarded on to some colleagues. Thanks.

Jim Hazen

Bryan said...

Nice title Dorothy. And I agree. Automation is just a mechanism to automatically execute a repetitive activity. The automation can perform the activities very precisely, but also very dumbly.
The same holds for test automation. Automate a regression test, as this is very repetitive work, and the test engineer has time to perform other valuable tests, e.g. exploratory testing. The regression test is executed very precisely, each time the same way. A deviation with respect to the previous run will be detected immediately, but the test can also miss very obvious defects (the complete UI has the wrong colour... and nobody notices it...). So an automated test can be very valuable, but one should not rely on it alone.
In previous projects I used test automation for reliability tests. These are short tests (several seconds to several minutes) which are repeated over and over again. This cannot be done by a human. It can take days, constantly performing (almost) the same tests. Nobody likes to do this, and most of the time the test passes. And when the test finally fails (after maybe thousands of runs), the test engineer is not paying attention anymore, and will not notice the failure. Test automation will detect this defect, and report it. Of course, assuming that the test is strong enough.
But also in this case... it is just automating a repetitive task.
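[The reliability testing Bryan describes can be sketched as a simple harness loop. The scenario below is a toy stand-in that fails with a small random probability, purely to illustrate that the harness, unlike a tired human, checks every single run and reports the first failure.]

```python
# Sketch of a reliability test loop (hypothetical scenario under test).
import random

def short_scenario(rng):
    """Toy stand-in for a seconds-long test scenario; returns True on pass.
    It fails with small probability only to illustrate the reporting."""
    return rng.random() > 0.001

def reliability_run(runs=10_000, seed=42):
    rng = random.Random(seed)
    for i in range(1, runs + 1):
        # The harness checks every iteration - run number i of possibly
        # thousands - and stops at the first failure it observes.
        if not short_scenario(rng):
            return f"FAIL on run {i}"
    return f"PASS after {runs} runs"

print(reliability_run())
```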

Bryan Bakker

Kashif Ali Habib said...

You are right Dorothy - the main purpose of automation is to execute tests in a speedy way, such as in regression testing.

Nice title and nice post.

hanshartmann said...

Hi Dorothy,

cordial greetings from CONQUEST-2010 :)

I should say, of course, the statement is true. Very often I have had to answer this question myself when asked.
There are, however, some situations when automation really does find bugs:
1. During the time when the automation is implemented. Very often, just by trying to address a widget or a unit, one will find a lack of robustness or a violation of the MVC pattern - something that would not be found in manual testing and does not even harm the product at the moment, but will make the product very volatile with respect to future changes.
2. Having certain test cases automated (left-overs from functional regression testing), you can program highly repetitive workflows, where the same flow will crash the program when it runs the sequence for the umpteenth time.
Mostly due to memory leaks, sometimes due to deadlock situations, these types of automated tests form a kind of stress test that could not be simulated by manual tests.

Dot Graham said...

Thanks very much for your comments, all of you!

I was thinking mainly of automated regression tests, but as Bryan and Hans point out, automation can enable some forms of stress/load/reliability testing that can find bugs because many more tests are run - either sheer volume of tests, or tests done in a different order, or tests repeated many times.

It is a fine line between whether these bugs are found by tests (that would only be run automated) or by the automation itself. In the end, it probably doesn't matter a great deal - automation helps [testing] find bugs!

Hans, your first point does seem a valid counter-example to my heading, as the bugs are found through the act of automating! Thanks!

Dot

JulianHarty said...

Dot,
with 'model-based testing' (MBT) my recent experience of working with Mika Katara and his team in Tampere is that roughly 2/3 of the bugs were discovered during the modelling process. This ratio seems to be relatively consistent across other MBT models they constructed and executed.

The model helps the person designing it to ask detailed questions of the implemented software and of the intentions of the developer (comparing what was intended with what has been found). Potentially the same person could discover these bugs without building the model (though they'd probably miss the remaining 1/3 of the bugs found by executing it); however, few people apply the time and effort that the modeller does, so they don't find the bugs...

Other concepts and approaches can also help find bugs e.g. 'Tours', Test Charters, etc. It might be interesting to compare the effectiveness of each approach (including designing automated tests) to see which identify the most important/relevant bugs for a given piece of software being tested.

Wade Wachs said...

I have put a lot of thought into this after attending your tutorial at StarWest. Ultimately, I think it comes down to knowing what your goals are for testing within a specific organization. Whatever those goals are, automation should help you meet them.

If automation is being created simply to serve automation, i.e. it is being created as an end in and of itself or being produced as the product, then sure, maybe it shouldn't actually find any bugs. I believe however that automation is just a piece of an overall testing department. If the automation doesn't help the department meet its goals, just as the department helps the company meet its goals, then it is useless.

There could be a caveat to that in the case of automation vendors, in which case automation is the product or goal. In that case there may be no value added to the company implementing the vendor's tool, but the vendor stands to benefit from implementing automation.

This is not an argument against automation, I think automation has a great place within the industry. Some of my friends are automators. Automation is just a piece of testing, and should only be considered as such. Talking about the goals of automation apart from the goals of testing is dangerous.

To use your own logic, if the outcomes of tests are the same whether automated or not, should not the goals of testing be the same whether automated or not? (regardless of whether the goal of testing is to find bugs, save money, increase confidence, etc.)

Susan said...

Bravo! Too many folks get wrapped up in and dependent on automated tests as the be-all and end-all of testing. They are great for regression testing something stable, but there's nothing like human eyes and a human brain to make sure something is correct. It's a tool made of code that runs against more code. Code must be maintained and must be correct. If it makes sense, do it; if not - don't. A team shouldn't feel obligated to do it just because so many people say it MUST be done. Bravo again!

Dot Graham said...

Thanks for your comments.

Julian, thanks for that very interesting statistic about model-based test design finding 2/3 of the bugs. I have long found that the act of thinking of what to test and how to test it is one of the most effective bug-finding strategies, and building models is a very thorough way of doing this kind of thinking!

Wade: Yes, automation should not serve itself, it serves testing, and both serve the organisation (except for tool vendors for whom the tool is their product and profit-maker).

Execution automation is a part of testing, but only a small part. Testing is much more than execution - it includes test analysis, test design, exploring, planning, and improving processes. Test execution automation is one way of implementing one of the test activities, that of running (executing) tests.

I feel it is dangerous not to separate out the goals of automation from the goals of testing. Automation is not capable of meeting many of the goals of testing, only those that are related to a narrow aspect, i.e. efficient execution. Holding the automation effort responsible for finding bugs or increasing confidence is misplaced - it is the quality of the tests that determine whether bugs are found or confidence is justified, not the manner in which they are run.

(Saving money is a good objective both for testing and for automation.)

So yes, the goals of testing should be the same whether the tests are automated or not, but the goals for automation are to do with the support of only one aspect of testing and therefore should not be the same.

Susan - I agree - automation is not an end, but a means to an end and certainly does not replace a good manual tester with an instinct for what could go wrong.

Andrea said...

Hi Dorothy,

I completely agree with you!! I have been working in a Test Factory for six years and I always have this problem: clients think "test automation" means "magic testing". Now I can show them this article!! Great!!