Friday, 19 June 2009

ROI from automation

How do you measure Return on Investment from automation?

According to http://www.investopedia.com/terms/r/returnoninvestment.asp, return on investment is (Gain - Cost) / Cost.

So if we have a set of automated tests that save 5 hours of manual testing, then if those tests took one hour to build and run, our ROI would be (5 - 1) / 1 = 4 (if my arithmetic is correct). So that would be a four-fold ROI for those automated tests.

Of course in the initial stages of automation, before our automation regime is mature, it might take us 10 hours to build and run those same tests. This would give an ROI of (5 - 10) / 10, i.e. a negative return on investment of -0.5.

Negative ROI is not generally a good idea, as it means you are losing money. You may well have negative ROI in the early stages of automation, though, and still a very good positive ROI long-term. It might also make more sense to look at ROI over a larger sample - say all of the automation built and run in a month - and monitor that month by month.
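The arithmetic above can be put into a minimal sketch - a small function applying the (Gain - Cost) / Cost formula, with the example figures from this post:

```python
def roi(gain_hours, cost_hours):
    """Return on investment: (gain - cost) / cost, both measured in hours."""
    return (gain_hours - cost_hours) / cost_hours

# Mature automation: tests saving 5 hours of manual testing,
# taking 1 hour to build and run.
print(roi(5, 1))    # 4.0 - a four-fold return

# Early stage: the same tests take 10 hours to build and run.
print(roi(5, 10))   # -0.5 - a negative return
```

The same function works over a larger sample: pass in a month's total hours saved and total hours spent, and monitor the result month by month.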

How do you measure ROI for automation?

6 comments:

Unknown said...

Hi Dorothy,

I am pleased to see that you blog. I was at the Testing & Finance conference yesterday and immensely enjoyed your enlightening talk on our subconscious - you gave me a new introspection tool!

As for ROI, that is something I (in the role of Test Architect) have been calculating in our organization across different projects using GUI test automation for functional testing. Monetary benefits from regression test automation have been very high in some projects, while in others, test automation is being unfairly nipped in the bud owing to premature conclusions based on cost-benefit analysis, without giving the TA ecosystem time to grow within the project.

I did consider using ROI formulae such as the one you mentioned above, but then decided to stop at presenting the costs saved by test automation over the years, as cost figures seem to directly catch the attention of an eager reviewer. In the end, I have realized that it is the road to achieving ROI that needs to be made simpler, to convince an agnostic manager to make TA an integral part of his/her quality improvement and cost reduction strategy and to support it from all quarters. But alas, test automation gets the boot first in hard times such as these, and only those initiatives seem to survive that have already borne fruit - but with added pressure to deliver more at lower future cost. It's a good challenge!

Dot Graham said...

Hi Chaitanya,

I think one of the major problems is unrealistic expectations, especially from managers. It's human nature to want a quick fix that will solve your problems and be a bargain, so if tool expectations latch onto that, you are setting yourself up for failure.

I think any automation effort should provide return on investment, but in what sort of time scale? If you expect ROI in 4 weeks, then as you say, the conclusion will be premature. I like your idea of allowing the automation ecosystem time to grow!

There are several approaches here:
1) set realistic expectations for what can be achieved by when
2) appreciate that investment is needed before you can get a return, and the automation ecosystem needs time as well as money
3) look out for quick wins ("low-hanging fruit") and achieve them whenever you see one. But again, back to expectations - not all fruit worth having is low-hanging; some will take more effort to achieve.

Thanks very much for your comments on my blog! Keep in touch.

Dot Graham said...

PS If a test automation effort really isn't providing more value than it is costing, and if this has continued for several months with no improvement, then I think it is right that such an automation effort doesn't deserve to live!

sewcool said...

I am actually in the process of quantifying the benefits and value that automation has afforded us in our organization.

How do you capture the time saved that allows you to do activities earlier in the lifecycle?

How do you quantify the additional data scenarios you were able to run now that it is automated?

I find myself struggling to put it into numbers...

Unknown said...

I posted the "sewcool" post. This is Molly Mahai. I met you at StarWest 2009 and I sent you some info for your book in progress.

see ya
Molly

Dot Graham said...

Hi Molly/sewcool,

Thanks for your comment. It's not easy to put benefits into numbers. Cost is so much easier, isn't it - just count the hours spent (easily translated into money), but benefits don't make themselves visible in the same way - so it takes effort to make them visible.

You asked how to capture the time saved. I suppose we could say that we haven't actually saved time, but we have avoided spending time.

I have three suggestions:

1) The best way to show cost avoidance/time saved is to compare it to what used to happen before the automation existed. But in order to do that, there needs to be a record kept of where the time was spent in manual testing.

If your current metrics don't allow you to know how much time was spent on various tasks, then it would be difficult to see exactly what had been improved with test automation. But if you can say something like "Last year we spent 200 hours per release in test execution, and this year we only spend 20 hours per release supporting the automated execution, including maintenance and automation work", then you have quantified the benefit.

2) If you can choose one or two significant events where the automation enabled you to get something "out of the door" very quickly, where previously the testing may have delayed it by days or weeks, this can be helpful to show the benefit of the automation. Don't be afraid to make a big noise about individual successes!

3) EMTE - Equivalent Manual Test Effort - is a measure of automation benefit. A set of automated tests will take a certain amount of time to run, but if those same tests had been run manually, they would take much longer (e.g. 4 days versus 2 hours). Of course, there isn't time to run those tests manually (that's why they are automated), but every time those tests are run, they are "worth" the equivalent of 4 days of manual testing. So the EMTE of these tests would be 4 days.

If you keep track of the EMTE of all automated tests, and add the EMTE to a running total each time the automated tests are run, you will get a measure of the amount of manual testing that the automated tests have replaced. This can help to show the benefits of automation.
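The running total described above could be kept with something as simple as the following sketch (the class and 8-hour working day are my assumptions, not anything prescribed here):

```python
HOURS_PER_DAY = 8  # assumed working day for converting hours to days

class EmteTracker:
    """Accumulate Equivalent Manual Test Effort across automated runs."""

    def __init__(self):
        self.total_hours = 0.0

    def record_run(self, emte_hours):
        """Add one run's equivalent manual test effort to the running total."""
        self.total_hours += emte_hours

    @property
    def total_days(self):
        return self.total_hours / HOURS_PER_DAY

tracker = EmteTracker()
# A suite "worth" 4 days of manual testing, run five times this month:
for _ in range(5):
    tracker.record_run(4 * HOURS_PER_DAY)
print(tracker.total_days)  # 20.0 - twenty days of manual testing replaced
```

Each time the automated tests run, one call to record_run adds their EMTE; the total is the amount of manual testing the automation has replaced.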

You also asked about quantifying the additional data scenarios that you are able to run with the automation.

This could be implemented in a similar way to EMTE, by including information in a header for each automated test, telling the automation controller (or framework) what to count each time that test is run, and then reporting the totals at the end of a set of tests (along with time taken, number of passed/failed tests, bugs/problems, etc.)
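As a minimal sketch of that idea - the header fields and test names here are hypothetical, standing in for whatever your framework records per test:

```python
# Each automated test carries a small "header" declaring what to count.
tests = [
    {"name": "login_smoke", "data_scenarios": 12, "duration_min": 3},
    {"name": "order_flow",  "data_scenarios": 40, "duration_min": 10},
]

totals = {"data_scenarios": 0, "duration_min": 0, "tests_run": 0}
for test in tests:
    # ... the framework would execute the test here ...
    totals["data_scenarios"] += test["data_scenarios"]
    totals["duration_min"] += test["duration_min"]
    totals["tests_run"] += 1

# Report the totals at the end of the set of tests.
print(totals)  # {'data_scenarios': 52, 'duration_min': 13, 'tests_run': 2}
```

The same header could carry pass/fail counts or bug references, so the end-of-run report quantifies the extra coverage the automation made possible.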

I hope this is useful - let me know if this makes sense and how you get on with quantifying your automation benefits!