Sunday, 1 November 2015

The wrong question: What percentage of tests have you automated?

At a couple of recent conferences, I became aware that people are asking the wrong question with regard to automation. There was an ISTQB survey that asked “How many (what percentage of) test cases do you automate?”. And a delegate I spoke to after my talk on automation at another conference said that her manager wanted to know what percentage of tests were automated; she wasn’t sure how to answer, and she is not alone. It is quite common for managers to ask this question, and the reason it is difficult to answer is that it is the wrong question.

Why do people ask this? Probably to get some information about the progress of an automation effort, usually when automation is getting started. That is not unreasonable, but it is not the right question to ask, because it rests on a number of erroneous assumptions:

Wrong assumption 1)  All manual tests should be automated. “What percentage of tests” implies that all existing tests are candidates for automation, and the percentage will measure progress towards the “ideal” goal of 100%.

It assumes that there is a single set of tests, and that some of them are manual and some are automated. Usually this question actually means “What percentage of our existing manual tests are automated?”

But your existing manual tests are not all good candidates for automation – certainly some manual tests can and should be automated, but not all of them!

Examples: if you could automate a “captcha”, then the captcha isn’t working, since its whole purpose is to tell a human from a computer. Judgement-based checks such as “Do these colours look nice?” or “Is this exactly what a real user would do?” also need a human. And some tests are simply not worth the effort to automate, such as tests that are not run very often or that are complex to automate.

Wrong assumption 2) Manual tests are the only candidates for automation. “What percentage of tests” also implies that the only tests worth automating are existing manual tests, but this is also incorrect. There are many things that can be done using tools that are impossible or infeasible to do when testing manually.

Examples: additional verification or validation of screen objects – are they in the correct state? When testing manually, you can see what is on the screen, but you may not know its state or whether the state is displaying correctly.
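For example, here is a minimal sketch, assuming Python with Selenium WebDriver (the URL and element id are invented for illustration), of the kind of state checks a tool can make on every single run but that a human tester rarely verifies explicitly:

```python
# Minimal sketch: verifying the state of a screen object, not just
# its visible presence. The URL and element id are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/order")  # hypothetical page

submit = driver.find_element(By.ID, "submit-button")  # hypothetical id

# A manual tester can see the button, but may not check these states:
assert submit.is_displayed(), "submit button is not visible"
assert submit.is_enabled(), "submit button is disabled"
assert submit.get_attribute("type") == "submit", "unexpected element type"

driver.quit()
```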

Tests using random inputs and heuristic oracles, which can be generated in large volume and checked automatically.
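As an illustration, here is a minimal sketch of that idea; apply_discount is a hypothetical function under test, and the “heuristic oracle” checks properties that any correct result must satisfy, rather than an exact expected value:

```python
# Minimal sketch: random inputs checked by a heuristic oracle.
# apply_discount() is a hypothetical function under test.
import random

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100.0), 2)

for _ in range(10_000):
    price = round(random.uniform(0.01, 10_000.00), 2)
    percent = random.uniform(0.0, 100.0)
    result = apply_discount(price, percent)

    # We don't know the exact right answer for each random input,
    # but these properties must always hold.
    assert 0.0 <= result <= price, f"discounted price out of range: {result}"
    assert result == round(result, 2), f"not rounded to 2 decimals: {result}"
```

No human could check 10,000 such cases by hand; a tool does it in moments.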

Wrong assumption 3) A manual test is the same as an automated test. “What percentage of tests” also assumes that a manual test and an automated test are the same - but they are not. A manual test consists of a set of directions for a human being to follow; it may be rather detailed (use customer R Jones), or it could be quite vague (use an existing customer). A manual test is optimised for a human tester. When tests are executed manually, they may vary slightly each time, and this can be both an advantage (may find new bugs) and a disadvantage (inconsistent tests, not exactly repeated each time).

An automated test should be optimised for a computer to run. It should be structured according to good programming principles, with modular scripts that call other scripts. It shouldn’t be one script per test; rather, each test should use many scripts (most of them shared), and most scripts should be used in many tests. An automated test is executed in exactly the same way each time, and this can be an advantage (repeatability, consistency) and a disadvantage (it won’t find new bugs).

One manual test may be converted into 3, 5, 10 or more automated scripts. Take for example a manual test that starts at the main menu, navigates to a particular screen, does some tests there, then returns to the main menu. And suppose you have a number of similar tests for the same screen, say 10. If you have one script per test, each will do 3 things: navigate to the target area, do tests, navigate back. If the location of the screen changes, all of those tests will need to be changed – a maintenance nightmare (especially if there are a lot more than 10 tests)! Rather, each test should consist of at least 3 scripts: one to navigate to the relevant screen, one (or perhaps many) to perform specific tests, and one to navigate back to the main menu. Note that the same “go to screen” and “return to main menu” scripts are used by all of these tests. Then if the screen is re-located, only 2 scripts need to be changed and all the automated tests will still work.
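Here is a minimal sketch of that structure in Python; the “app” driver object and all the function, screen and field names are hypothetical:

```python
# Minimal sketch of the shared-script structure described above.
# "app", the screens and the field names are all hypothetical.

def go_to_customer_screen(app):
    """Shared navigation script: main menu -> customer screen."""
    app.select_menu("Customers")       # if the screen is re-located,
    app.select_menu("Edit customer")   # only this script changes

def return_to_main_menu(app):
    """Shared navigation script: back to the main menu."""
    app.press("Cancel")
    app.press("Home")

def test_edit_customer_name(app):
    go_to_customer_screen(app)
    app.set_field("name", "R Jones")             # test-specific script
    assert app.read_field("name") == "R Jones"
    return_to_main_menu(app)

def test_edit_customer_address(app):
    go_to_customer_screen(app)
    app.set_field("address", "1 High Street")    # test-specific script
    assert app.read_field("address") == "1 High Street"
    return_to_main_menu(app)
```

All 10 similar tests would share the two navigation scripts, so a re-located screen means changing two functions, not ten tests.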

But now the question is: how many tests have you automated? Is it the 10 manual tests you started with? Or should you count automated scripts? Then we have at least 12 but maybe 20. Suppose you now find that you can very easily add another 5 tests to your original set, sharing the navigation scripts and 4 of the other scripts. Now you have 15 tests using 13 scripts – how many have you automated? Your new tests never were manual tests, so have you automated 10 tests (of the original set) or 15?

Wrong assumption 4) Progress in automation is linear (like testing). A “what percent completed” measure is fine for an activity that is stable and “monotonic”, for example running sets of tests manually. But when you automate a test, especially at first, you need to put in a lot of effort to get the structure right, and the early automated tests can’t reuse anything because nothing has been built yet. Later automated tests can be constructed much more quickly than the earlier ones, because there will (should) be a lot of reusable scripts that can simply be incorporated into a new automated test. So if your goal is to have, say, 20 tests automated in 2 weeks, after one week you may have automated only 5 of those tests, but the other 15 can easily be automated in week 2. So after week 1 you have automated 25% of the tests, but you have done 50% of the work.

Eventually it should be easier and quicker to add a new automated test than to run that test manually, but it does take a lot of effort to get to that point.

Good progress measures. So if these are all reasons NOT to measure the percentage of manual tests automated, what would be a good automation progress measure instead? Here are three suggestions (a short code sketch after the list illustrates all three):

1) Percentage of automatable tests that have been automated. Decide first which tests are suitable for automation, and/or which you want to have as automated tests, and measure the percentage automated against that number, having taken out the tests that should remain manual and the tests that you don’t want to automate now. This can be done for a sprint, or for a longer time frame (or both). As Alan Page says, "Automate 100% of the tests that should be automated."

2) EMTE (Equivalent Manual Test Effort): Keep track of how much time a set of automated tests would have taken if they had been run manually. Each time those tests are run (automatically), you “clock up” the equivalent of that manual effort. This shows that automation is now running tests that are no longer run manually, and this number should increase over time as more tests are automated.

3) Coverage: With automation, you can run more tests, and therefore test areas of the application that there was never time for when the testing was done manually. This is a partial measure of one aspect of the thoroughness of testing (and has its own pitfalls), but it is a useful way to show that automation is now helping to test more of the system.
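To make these concrete, here is a minimal sketch of how the three measures might be computed; all the numbers are made up for illustration:

```python
# Minimal sketch of the three progress measures, with made-up numbers.

# 1) Percentage of automatable tests that have been automated
total_tests = 200          # all existing manual tests
not_automatable = 40       # captchas, look-and-feel judgements, ...
not_worth_it_now = 60      # rarely run, or too costly to automate
automated = 55

automatable = total_tests - not_automatable - not_worth_it_now
print(f"{automated / automatable:.0%} of automatable tests automated")
# 55% of the 100 automatable tests, not 27.5% "of all tests"

# 2) EMTE: equivalent manual effort, clocked up on each automated run
manual_minutes_per_run = 300   # what one full run would cost a human
automated_runs = 48            # e.g. nightly runs since automation began
print(f"EMTE: {manual_minutes_per_run * automated_runs / 60:.0f} hours")

# 3) Coverage: areas of the system now exercised by automated tests
areas_in_system = 25
areas_with_automated_tests = 9
print(f"{areas_with_automated_tests / areas_in_system:.0%} of areas covered")
```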

Conclusion. So if your manager asks you “What percentage of the tests have you automated?”, you need to ask something like: percentage of what? Of the existing tests that could be automated, or that we have decided to automate? What about additional tests that would be good to automate but that we aren’t doing now? And do you want to know progress in time towards our automation goal, or literally a count of tests? The two will differ, because automated tests are structured differently to manual tests.

It might be a good idea to find out why he or she has asked that question – what are they trying to see? They need some visibility of automation progress, and it is up to you to agree on a measure that is useful, honest, and reasonably easy to collect. Good luck! And let me know how you measure your progress in automation!

If you want more advice on automation, see the wiki that I am doing with Seretta Gamba at


halperinko said...

Hi Dorothy - Thanks for this interesting post,
Another issue not mentioned above is managing how much of each manual (or expected) test's content is actually covered by what the automation eventually implements.
Managers and others seem to instinctively think that if Feature X was automated, the full extent of what we used to test manually is now automated.
But in many cases only a fraction of the "expected" test is automated, either because automation cannot see everything a manual tester does, or because some parts are just too hard and wasteful to automate.
In most companies these deviations are not managed, so we are left with an assumption of a "full" test, while actually there should be a list of additional items to be run manually.

BTW - please consider increasing the font size on your site (while we can zoom, it's best to have it that way in the first place).

@halperinko - Kobi Halperin

Dot Graham said...

Hi Kobi,

Thanks very much for your comment - yes, you are right, and I think that coverage is also something that is often misunderstood. Just because you have one test for something does not, as you say, mean that it is "completely" tested.

I have increased the font size (and changed font) so I hope it is more readable now.

PS I also added in "Page's Law", thanks to Alan's tweet.

daver22 said...

Takes me back to a test automation exercise in a certain company in Tewkesbury!

Martin Gijsen said...

Hi Dot,

Very valid points, worth revisiting every now and then. Thanks.

If I may add to them, the importance of the related topic of maintenance is hard to overstate, and it could be considered the flip side of the question you address. (You do in fact refer to it already in your article.) The business value of any number of working automated test cases is very quickly reduced to zero if they start failing because they do not keep up with reality. So having any number of test cases automated, in whatever way, usually amounts to close to nothing unless their continuity is also ensured. There are several ways of addressing it, and much more text can and has been devoted to it, but the essential point is that keeping maintenance effort low needs very serious attention.

When I read the Agile manifesto again recently, I found it interesting that one of the principles behind it calls for 'continuous attention to technical excellence.' It struck me that it does not say 'quality' but 'excellence.' The principle above it mentions the related topic of sustainable development, being able to maintain a constant pace indefinitely. This requires that technical debt is not allowed to pile up. I for one make no distinction between production code and automation code here. If something is worth doing, it is worth doing well. If automation code is treated as a second-class citizen, it will quickly start to behave like one.


Martin Gijsen

Viktor Eydel said...

Actually, it is a perfectly valid question and managers should be asking it all the time. It does not mean that people should only automate so-called manual tests (a bad name to begin with) or that people should aim to automate everything there is in the test case repository (if you don't have one, get one). It means that people try to automate test cases when it makes sense and that they keep track of what's automated vs what's not. If you don't understand why, well, you do not understand QA.

Ruben Smits said...

Great reflection, Dorothy! Thanks!

I guess the best question to ask instead is about the business value that the automation initiative has brought. This involves time that has been saved, technical debt (assuming user stories have been created for that), time it took to write scripts, but also time to set up a framework and infrastructure.

Conrad Braam said...

Yes, the EMTE (Equivalent Manual Test Effort) that automation gives you needs to be a metric - but not to be confused or just bundled together with "coverage". Automation does include lots of unit testing in applications that have been designed to be tested at the component and even at the feature level - and that's where the magic sauce goes bad. Automation is best when used to test "components", and in scenarios where the scope is narrow. Automated tools are not great at feature testing, because feature changes kill dumb automated scripts, which are typically just testing a computer-observable behaviour and not an "outcome". Manual testing is thus better at feature breadth or new features. But automation should never be limited to regression testing only because of this flaw.

So I like to keep EMTE in sight, but prefer to keep technical debt low, speed up automated test execution, and use automation up and down the entire fabrication/assembly line. Development happens in teams, right? Teams around the globe benefit if you place your automated tests into an "appliance" and let them run it anywhere in the world, anytime - but without having to own the appliance. In other words, the developers use the test appliances, and only ask you for help when the test runs fail.

Michael Eckhoff said...

Thanks for the great post. As an automation tool vendor (Tricentis), we are often asked how to get to 100% automation. Our answer is always the same: "that is the wrong question." We see the future of Continuous Testing - testing as a direct component of a CI/CD environment - as a shift to highly automated API tests, with UI testing primarily focused on automated end-to-end, system integration testing, and a SIGNIFICANTLY reduced level of manual testing, primarily driven by exploratory testing. A base assumption of achieving this vision is a clear understanding of which tests should be performed at all, and which are candidates for automation.

Much appreciation for keeping this discussion going!

Dot Graham said...

Thanks everyone for your great comments!

@Viktor: I agree. What I am attempting to address is the most common misunderstanding of this question, but clearly you are looking at this in the right way, looking at percentage of automatable tests for example.

@Martin: Yes! Maintenance is critical for continuing automation, as is minimising technical debt. In Lisa Crispin's chapter in the Experiences book, one of their success factors was re-factoring the automation in a separate Sprint every 6 months, so technical debt was addressed.

@Ruben: Yes! Business value is what we should be delivering, but I find it difficult to find a way to measure this - any suggestions? You mention time spent and saved, but that's not the same thing as value, right?

@Conrad: I was suggesting EMTE and/or coverage as alternatives - I agree they shouldn't be confused. If automated tests are easily de-railed by changes, then IMHO they haven't been designed well enough, and the testware architecture needs attention. If the automated tests are built so that the most likely changes are the easiest for the automated tests to cope with, then the automation is flexible and likely to be long-lived. I also agree that automated support should not be limited to regression tests/checks - there is much more scope for useful tool support for testing. I like the idea of an automation "appliance" - sounds interesting.

@Michael: Good to hear that! Tool vendors have a definite role to play in educating about what can and can't be done with tools. Whether or not the level of manual testing is significantly reduced depends on the amount of manual testing that was suitable for automation - sometimes it will be a lot, other times not so much.

@Dave: Yes I well remember those days too!

Thanks for your very useful and insightful comments.

Ruben Smits said...

I absolutely agree, time gained and time spent are 2 factors in a bigger picture called business value. Also factors like reduced number of escapes, higher coverage, improved reporting, improved cooperation and much more are relevant. Some can be predicted, some are very hard to express in numbers.

Farhana Muna said...

Thank you so much for sharing your thoughts with us.