At a couple of recent conferences, I became
aware that people are asking the wrong question with regard to automation.
There was an ISTQB survey that asked “How many (what percentage of) test cases
do you automate?”. A delegate I spoke to after my talk on automation at
another conference said that her manager wanted to know what percentage of
tests were automated; she wasn’t sure how to answer, and she is not alone. It
is quite common for managers to ask this question; the reason it is difficult
to answer is because it is the wrong question.
Why do people ask this? Probably to get
some information about the progress of an automation effort, usually when
automation is getting started. This is not unreasonable, but this question is
not the right one to ask, because it is based on a number of erroneous
assumptions:
Wrong assumption 1) All manual tests should be automated. “What percentage of tests” implies that all existing tests are
candidates for automation, and the percentage will measure progress towards the
“ideal” goal of 100%.
It assumes that there is a
single set of tests, and that some of them are manual and some are automated.
Usually this question actually means “What percentage of our existing manual tests
are automated?”
But your existing manual
tests are not all good candidates for automation – certainly some manual tests
can and should be automated, but not all of them!
Examples: if you could
automate a “captcha” test, then the captcha isn’t working, as its whole purpose
is to tell the difference between a human and a computer. Judgement calls such as
“Do these colours look nice?” or “Is this exactly what a real user would do?”
need a human. And some tests take too long to automate to be worthwhile, such as
tests that are not run very often or that are complex to automate.
Wrong assumption 2) Manual tests are the only candidates for automation. “What percentage of tests” also implies that the only tests worth
automating are existing manual tests, but this is also incorrect. There are
many things that can be done using tools that are impossible or infeasible to
do when testing manually.
Examples: additional
verification or validation of screen objects – are they in the correct state?
When testing manually, you can see what is on the screen, but you may not know
its state or whether the state is displaying correctly.
Tests using random inputs
and heuristic oracles, which can be generated in large volume and checked
automatically.
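As an illustrative sketch of this idea, a large volume of random inputs can be generated and checked against property-style heuristic oracles. The function under test and its name are hypothetical stand-ins here:

```python
import random
from collections import Counter

def sort_under_test(xs):
    # Stand-in for the real function under test (hypothetical example).
    return sorted(xs)

def oracle_ok(inp, out):
    # Heuristic oracle: check properties any correct result must satisfy,
    # without knowing the exact expected output for each input in advance.
    in_order = all(a <= b for a, b in zip(out, out[1:]))
    same_items = Counter(inp) == Counter(out)
    return in_order and same_items

random.seed(0)
failures = 0
for _ in range(10_000):  # a volume no manual tester could check
    data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
    if not oracle_ok(data, sort_under_test(data)):
        failures += 1
print(failures)  # 0 for a correct implementation
```

No human could run ten thousand such checks per build, which is why these tests were never manual tests in the first place.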
Wrong assumption 3) A manual test is the same as an automated test. “What percentage of tests” also assumes that a manual test and an
automated test are the same – but they are not. A manual test consists of a set
of directions for a human being to follow; it may be rather detailed (use
customer R Jones), or it could be quite vague (use an existing customer). A manual
test is optimised for a human tester. When tests are executed manually,
they may vary slightly each time, and this can be both an advantage (may find
new bugs) and a disadvantage (inconsistent tests, not exactly repeated each
time).
An automated test should be
optimized for a computer to run. It should be
structured according to good programming principles, with modular scripts that
call other scripts. It shouldn’t be one script per test, but each test should
use many scripts (most of them shared) and most scripts should be used in many
tests. An automated test is executed in exactly the same way each time, and
this can be an advantage (repeatability, consistency) and a disadvantage (won’t
find new bugs).
One manual test may be
converted into 3, 5, 10 or more automated scripts. Take for example a manual
test that starts at the main menu, navigates to a particular screen and does
some tests there, then returns to the main menu. And suppose you have a number
of similar tests for the same screen, say 10. If you have one script per test,
each will do 3 things: navigate to the target area, do tests, navigate back. If
the location of the screen changes, all of those tests will need to be changed
– a maintenance nightmare (especially if there are a lot more than 10 tests)!
Rather, each test should consist of at least 3 scripts: one to navigate to the relevant
screen, one (or perhaps many) scripts to perform specific tests, and one script
to navigate back to the main menu. Note that the same “go to screen” and “return
to main menu” script is used by all of these tests. Then if the screen is
re-located, only 2 scripts need to be changed and all the automated tests will
still work.
But now the question is:
how many tests have you automated? Is it the 10 manual tests you started with?
Or should you count automated scripts? Then we have at least 12 (ten test-specific
scripts plus the two shared navigation scripts), and perhaps 20 or more if tests
use several scripts each. Suppose
you now find that you can very easily add another 5 tests to your original set,
sharing the navigation scripts and 4 of the other scripts. Now you have 15
tests using 13 scripts – how many have you automated? Your new tests never were
manual tests, so have you automated 10 tests (of the original set) or 15?
Wrong assumption 4) Progress in automation is linear (like testing). A “what percent completed” measure is fine for an activity that is
stable and “monotonic”, for example running sets of tests manually. But when
you automate a test, especially at first, you need to put a lot of effort in
initially to get the structure right, and the early automated tests can’t reuse
anything because nothing has been built yet. Later automated tests can be written
/ constructed much more quickly than the earlier ones, because there will
(should) be a lot of reusable scripts that can just be incorporated into a new
automated test. So if your goal is to have say 20 tests automated in 2 weeks,
after one week you may have automated only 5 of those tests, but the other 15
can easily be automated in week 2. So after week 1 you have automated 25% of
the tests, but you have done 50% of the work.
Eventually it should be
easier and quicker to add a new automated test than to run that test manually,
but it does take a lot of effort to get to that point.
Good progress measures. So if these are all reasons NOT to measure
the percentage of manual tests automated, what would be a good automation
progress measure instead? Here are three suggestions:
1) Percentage of automatable tests that have been
automated. Decide first which tests are suitable for automation and which you
want to have as automated tests, then measure the percentage automated against
that number, having taken out the tests that should remain manual and the tests
you don’t want to automate now. This can be done for a sprint, or for a longer
time frame (or both). As Alan Page says, "Automate 100% of the tests that should be automated."
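A minimal sketch of this measure, assuming a simple test inventory with "automatable" and "automated" flags (the field names are illustrative, not a standard format):

```python
def automation_progress(tests):
    """Percentage of automatable tests that have been automated."""
    candidates = [t for t in tests if t["automatable"]]
    done = [t for t in candidates if t["automated"]]
    return 100 * len(done) / len(candidates) if candidates else 0.0

# Hypothetical inventory: 30 tests, 10 of which should stay manual.
suite = (
      [{"automatable": True,  "automated": True}]  * 15
    + [{"automatable": True,  "automated": False}] * 5
    + [{"automatable": False, "automated": False}] * 10  # stays manual
)
print(automation_progress(suite))  # 75.0 — not the naive "15 of 30" = 50%
```

Note how excluding the tests that should remain manual changes the answer: the same suite reads as 50% by the naive count but 75% by this measure.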
2) EMTE (Equivalent Manual Test
Effort): Keep track of how much time a set of automated tests would have taken
if they had been run manually. Each time those tests are run (automatically),
you “clock up” the equivalent of that manual effort. This shows that automation
is running tests now that are no longer run manually, and this number should
increase over time as more tests are automated.
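A sketch of how EMTE could be clocked up, assuming you know (or estimate) the manual duration of each test; the class and field names are illustrative, not a standard tool:

```python
class EmteTracker:
    """Accumulates Equivalent Manual Test Effort across automated runs."""
    def __init__(self):
        self.total_minutes = 0

    def record_run(self, tests):
        # tests: iterable of (name, manual_minutes) pairs that were just
        # run automatically; clock up what they would have cost manually.
        for _name, manual_minutes in tests:
            self.total_minutes += manual_minutes

# Hypothetical nightly suite: 25 minutes of manual effort per run.
nightly = [("login", 5), ("search", 8), ("checkout", 12)]
tracker = EmteTracker()
for _night in range(20):        # 20 automated runs
    tracker.record_run(nightly)
print(tracker.total_minutes)    # 500 minutes of manual effort "replaced"
```

The total grows with every run, which is exactly the point: it shows automation doing work that would otherwise have consumed manual testing time.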
3) Coverage: With automation, you
can run more tests, and therefore test areas of the application that there was
never time for when the testing was done manually. This is a partial measure of
one aspect of the thoroughness of testing (and has its own pitfalls), but is a
useful way to show that automation is now helping to test more of the system.
Conclusion. So if your manager asks you “What percent of
the tests have you automated?”, you need to ask something like: percent of what?
Out of the existing tests that could be automated, or those we have decided to
automate? What about additional tests that would be good to automate but that we
aren’t doing now? Do you want to know about progress in time towards our
automation goal, or literally a count of tests? The two will differ, because
automated tests are structured differently from manual tests.
It might be a good idea to find out why he
or she has asked that question – what is it that they are trying to see? They
need some visibility into automation progress, and it is up to you to agree on
a measure that is useful and helpful, honest, and reasonably easy to collect.
Good luck! And let me know how you measure your progress in
automation!
If you want more advice on automation, see the wiki that I am doing with Seretta Gamba at TestAutomationPatterns.org.