Friday 18 December 2009

Can Agile be Context-Driven?

"They say they are doing Agile, but they're not doing all of the practices, so they aren't really Agile."

This seems to be a popular view, implying that Agile needs to be done "properly". A search on "agile best practice" brings up many entries. Is there such a thing as Agile Best Practice?

The Context-driven principles (www.context-driven-testing.com) include:
1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices. (emphasis mine)

So if you adapt Agile practices to suit your context, as Gitte Ottosen described at EuroSTAR, isn't that context-driven Agile? And isn't that the best way to practice it?



Friday 19 June 2009

ROI from automation

How do you measure Return on Investment from automation?

According to http://www.investopedia.com/terms/r/returnoninvestment.asp, return on investment is (Gain - Cost) / Cost.

So if we have a set of automated tests that save 5 hours of manual testing, and those tests took one hour to build and run, our ROI would be (5 - 1) / 1 = 4 (if my arithmetic is correct). So that would be a four-fold ROI for those automated tests.

Of course in the initial stages of automation, before our automation regime is mature, it might take us 10 hours to build and run those same tests. This would give an ROI of (5 - 10) / 10, i.e. a negative return on investment of -0.5.

Negative ROI is not generally a good idea, as it means you are losing money. You may well have a negative ROI in the early stages of automation, but a very good positive ROI in the long term. It might also make more sense to look at ROI over a larger sample, say all of the automation built and run in a month, and to monitor that month by month.
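To make the arithmetic concrete, here is a rough Python sketch (mine, not from the original post) of the same ROI calculation applied to the two scenarios above and to a month-by-month view; the monthly figures are invented purely for illustration.

    def roi(gain_hours, cost_hours):
        """Return on investment: (gain - cost) / cost."""
        return (gain_hours - cost_hours) / cost_hours

    # Mature automation: 5 hours of manual testing saved, 1 hour to build and run.
    print(roi(5, 1))    # 4.0  -> a four-fold return

    # Early-stage automation: the same 5 hours saved, but 10 hours to build and run.
    print(roi(5, 10))   # -0.5 -> a negative return

    # Month-by-month view over a larger sample (hours of gain and cost per month;
    # these numbers are made up for the sake of the example).
    monthly = [("Month 1", 20, 60), ("Month 2", 40, 30), ("Month 3", 50, 15)]
    for month, gain, cost in monthly:
        print(month, round(roi(gain, cost), 2))   # -0.67, 0.33, 2.33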

How do you measure ROI for automation?

Monday 16 March 2009

Make testing fun?

On the LinkedIn group "Software Testing Club", the question was asked: "How can we make testing more fun?" The team asking the question is running 1300 scripts over 4 months.

There were some interesting answers about holding weekly competitions, creating visibility with management, and having a movie in work time. There were also a couple of mentions of automation. Here is my response to the thread:

--------------------

Draw a graph of the "boringness" - on a scale of 1 to 10 - of the different things the testers are doing. The more boring an activity is, the riper it is for automation. But don't think of automation as all-or-nothing. Pick out the most boring things people have to do and automate those first. You can get real benefit by automating only 2% of the scripts, for example.
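As a rough sketch of that ranking idea (mine, not part of the original thread, with invented activity names and scores), in Python:

    # Hypothetical testing activities and their "boringness" scores
    # (1 = engaging, 10 = mind-numbing).
    activities = [
        ("re-running the same regression scripts", 9),
        ("setting up test data by hand", 8),
        ("exploratory testing of new features", 2),
        ("checking log files for known error strings", 7),
    ]

    # The most boring activities are the ripest candidates for automation;
    # even automating just the top few can give real benefit.
    candidates = sorted(activities, key=lambda a: a[1], reverse=True)[:2]
    for name, score in candidates:
        print(f"automate first: {name} (boringness {score})")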

You don't need to purchase expensive tools to automate either. Use what you already have or look into open source tools (see the FreeTest Conference link from my web site).

However, do be warned that a free tool is not free: you still have to invest effort to use it well. To be successful in automation, you must plan and prepare how you will do it.

In fact there is so much to say about how to go about automation that I have written a book about it with Mark Fewster (see link on my web site). Although the case studies are now getting a little "long in the tooth", the main body of the book covers the principles of automation - whatever tool you use - that will help you create a long-lasting "regime".

Manual testing is not fun if you are just following a script; but manual testing is tremendous fun if you are doing exploratory testing, for example. Get the computers (tools) to do what they are good at, and free the people to do what they are good at.




Monday 23 February 2009

Don't blame the bread-making machine - thoughts on test automation

If you have a bread-making machine, it may or may not make good bread.

If it doesn't, the reason might be the ingredients you put into the machine. If you put in too much salt (or no salt at all), the bread won't be edible. Is that the fault of the bread-making machine?

Some people blame the test automation for things which are actually the responsibility of the testing. For example, if you mistakenly think that finding lots of bugs is the main aim of your regression test automation, you are setting yourself up for disappointment, if not failure of your automation effort. Finding bugs is what testing (and testers) do; it is not the job of automation. The job of automation is to run tests.

The tests are the ingredients that determine whether the testing is effective or not. The effectiveness of the tests is the same whether those tests are run manually or using an automation tool. 

The tool is no more responsible for the quality of the testing than the bread-making machine is responsible for the taste of the bread.