Wednesday, 1 May 2019

My favourite techniques.

For my lightning keynote at Star East today, I put in a plea not to forget some of the "old things", even as we embrace new things.

Over my nearly 50 years in the industry, I have seen many new things come along, most of which were supposed to solve "all of our problems". They never do, of course, but often something good comes out of them - and that often lasts.

But don't forget that the old things are still useful, even if they aren't new. Remember the classic techniques - they still have value.

But rather than talk about this, I have a song, to the tune of "These are a few of my favourite things".

Boundary Analysis, and State Transitions,
Trees of decisions, Equivalence Partitions,
Coverage of statements is not just for freaks,
These are a few of my favourite techniques.
Walkthroughs, Reviews, and Inspections are awesome
They belong in your toolbox and not in a museum
Welcome the feedback as useful critiques
These are a few of my favourite techniques
When the bugs bite, when my build breaks
When I'm feeling sad,
I simply remember my favourite techniques
And then I don't feel so bad.

DevOps and Agile, integration continuous
Manual testing should not be superfluous
Techniques that work well are not antiques
They are a few of my favourite techniques.
If you want your testing to be highest quality
You’d better investigate testing exploratory
Start with a charter and give it some tweaks
Just use these lovely heuristic techniques.
When the users find a defect,
Is it such a crime?
I simply remember my favourite techniques
will help me find more next time.

Issues and Patterns – automated regression
Testware architecture should be an obsession
Test in production and analytics
These are a few of my favourite techniques.
Artificial Intelligence is hot stuff at Star East
Are testing careers becoming deceased?
Not if you listen to test conference geeks,
They’ll recommend we still need these techniques.
When they tell me, it's a feature
And I get depressed
I simply remember my favourite techniques
And oh how I love to test!

Thursday, 26 May 2016

Test automation as an orchard

At StarEast in May 2016, I was kindly invited to give a lightning keynote, which I did on this analogy. Hope you find it interesting and useful!

-----------------------------------------------------------------------

Automation is SO easy.

Let me rephrase that - automation often seems to be very easy.
When you see your first demo, or run your first automated test, it’s like magic - wow, that’s good, wish I could type that fast.

But good automation is very different to that first test.

If you go into the garden and see a lovely juicy fruit hanging on a low branch, and you reach out and pick it, you think, "Wow, that was easy - isn’t it good, lovely and tasty".

But good test automation is more like building an orchard to grow enough fruit to feed a small town.

Where do you start?
First you need to know what kind of fruit you want to grow - apples? oranges? (oranges would not be a good choice for the UK). You need to consider what kind of soil you have, what kind of climate, and also what will the market be - you don’t want to grow fruit that no one wants to buy or eat.

In automation, first you need to know what kind of tests you want to automate, and why. You need to consider the company culture, other tools, what the context is, and what will bring lasting value to your business.

Growing pains?
Then you need to grow your trees. Fortunately automation can grow a lot quicker than trees, but it still takes time - it’s not instant.

While the trees are growing, you need to prune them, and prune them hard, especially in the first few years. Maybe you don’t allow them to fruit at all for the first 3 years - this way you are building a strong infrastructure for the trees, so that they will be stronger and healthier and will produce much more fruit later on. You may also want to train them to grow into the structure that you want when they are mature.

In automation, you need to prune your tests - don’t just let them grow and grow and get all straggly. You need to make sure that each test has earned its place in your test suite, otherwise get rid of it. This way you will build a strong infrastructure of worthwhile tests that will make your automation stronger and healthier over the years, and it will bring good benefits to your organisation. You need to structure your automation (a good testware architecture) so that it will give lasting benefits.

Feeding, pests and diseases
Over time, you need to fertilise the ground, so that the trees have the nourishment they need to grow to be strong and healthy.

In automation, you need to nourish the people who are working on the automation, so that they will continue to improve and build stronger and healthier automation. They need to keep learning, experimenting, and be encouraged to make mistakes - in order to learn from them.

You need to deal with pests - bugs - that might attack your trees and damage your fruit.

Is this anything to do with automation? Are there bugs in automated scripts? In testing tools? Of course there are, and you need to deal with them - be prepared to look for them and eradicate them.

What about diseases? What if one of your trees gets infected with some kind of blight, or suddenly stops producing good fruit? You may need to chop down that infected tree and burn it, because if you don’t, this blight might spread to your whole orchard.

Does automation get sick? Actually, a lot of automation efforts seem to decay over time - they take more and more effort to maintain. Technical debt builds up, and often the automation dies. If you want your automation to live and produce good results, you might need to take drastic action and re-factor the architecture if it is causing problems. Because if you don’t, your whole automation may die.

Picking and packing
What about picking the fruit? I have seen machines that shake the trees so they can be scooped up - that might be ok if you are making cider or applesauce, but I wouldn’t want fruit picked in that way to be in my fruit bowl on the table. Manual effort is still needed. The machines can help but not do everything (and someone is driving the machines).

Test execution tools don’t do testing, they just run stuff. The tools can help and can very usefully do some things, but there are tests that should not be automated and should be run manually. The tools don’t replace testers, they support them.

We need to pack the fruit so it will survive the journey to market, perhaps building a structure to hold the fruit so it can be transported without damage.

Automation needs to survive too - it needs to survive more than one release of the application, more than one version of the tool, and may need to run on new platforms. The structure of the automation, the testware architecture, is what determines whether or not the automated tests survive these changes well.

Marketing, selling, roles and expectations
It is important to do marketing and selling for our fruit - if no one buys it, we will have a glut of rotting fruit on our hands.

Automation needs to be marketed and sold as well - we need to make sure that our managers and stakeholders are aware of the value that automation brings, so that they want to keep buying it and supporting it over time.

By the way, the people who are good at marketing and selling are probably not the same people who are good at picking or packing or pruning - different roles are needed. Of course the same is true for automation - different roles are needed: tester, automator, automation architect, champion (who sells the benefits to stakeholders and managers).

Finally, it is important to set realistic expectations. If your local supermarket buyers have heard that eating your fruit will enable them to leap tall buildings at a single bound, you will have a very easy sell for the first shipment of fruit, but when they find out that it doesn’t meet those expectations, even if the fruit is very good, it may be seen as worthless.

Setting realistic expectations for automation is critical for long-term success and for gaining long-term support; otherwise if the expectations aren’t met, the automation may be seen as worthless, even if it is actually providing useful benefits.

Summary
So if you are growing your own automation, remember these things:
  • it takes time to do it well
  • prepare the ground
  • choose the right tests to grow
  • be prepared to prune / re-factor
  • deal with pests and diseases (see previous point)
  • make sure you have a good structure so the automation will survive change
  • different roles are needed
  • sell and market the automation and set realistic expectations
  • you can achieve great results


I hope that all of your automation efforts are very fruitful!



Sunday, 1 November 2015

The wrong question: What percentage of tests have you automated?

At a couple of recent conferences, I became aware that people are asking the wrong question with regard to automation. There was an ISTQB survey that asked “How many (what percentage of) test cases do you automate?”. A delegate I talked to after my talk on automation at another conference said that her manager wanted to know what percentage of tests were automated; she wasn’t sure how to answer, and she is not alone. It is quite common for managers to ask this question, and the reason it is difficult to answer is that it is the wrong question.

Why do people ask this? Probably to get some information about the progress of an automation effort, usually when automation is getting started. This is not unreasonable, but this question is not the right one to ask, because it is based on a number of erroneous assumptions:


Wrong assumption 1)  All manual tests should be automated. “What percentage of tests” implies that all existing tests are candidates for automation, and the percentage will measure progress towards the “ideal” goal of 100%.

It assumes that there is a single set of tests, and that some of them are manual and some are automated. Usually this question actually means “What percentage of our existing manual tests are automated?”

But your existing manual tests are not all good candidates for automation – certainly some manual tests can and should be automated, but not all of them!

Examples: if you could automate a “captcha” test, then the captcha isn’t working, as it’s supposed to tell the difference between a human and a computer. Questions like “Do these colours look nice?” or “Is this exactly what a real user would do?” need human judgement. And some tests take too long to automate to be worthwhile, such as tests that are not run very often or that are complex to automate.

Wrong assumption 2) Manual tests are the only candidates for automation. “What percentage of tests” also implies that the only tests worth automating are existing manual tests, but this is also incorrect. There are many things that can be done using tools that are impossible or infeasible to do when testing manually.

Examples: additional verification or validation of screen objects – are they in the correct state? When testing manually, you can see what is on the screen, but you may not know its state or whether the state is displaying correctly.

Another example: tests using random inputs and heuristic oracles, which can be generated in large volumes and checked automatically.

Wrong assumption 3) A manual test is the same as an automated test. “What percentage of tests” also assumes that a manual test and an automated test are the same - but they are not. A manual test consists of a set of directions for a human being to follow; it may be rather detailed (use customer R Jones), or it could be quite vague (use an existing customer). A manual test is optimised for a human tester. When tests are executed manually, they may vary slightly each time, and this can be both an advantage (may find new bugs) and a disadvantage (inconsistent tests, not exactly repeated each time).

An automated test should be optimized for a computer to run. It should be structured according to good programming principles, with modular scripts that call other scripts. It shouldn’t be one script per test, but each test should use many scripts (most of them shared) and most scripts should be used in many tests. An automated test is executed in exactly the same way each time, and this can be an advantage (repeatability, consistency) and a disadvantage  (won’t find new bugs).

One manual test may be converted into 3, 5, 10 or more automated scripts. Take for example a manual test that starts at the main menu, navigates to a particular screen and does some tests there, then returns to the main menu. And suppose you have a number of similar tests for the same screen, say 10. If you have one script per test, each will do 3 things: navigate to the target area, do tests, navigate back. If the location of the screen changes, all of those tests will need to be changed – a maintenance nightmare (especially if there are a lot more than 10 tests)! Rather, each test should consist of at least 3 scripts: one to navigate to the relevant screen, one (or perhaps many) scripts to perform specific tests, and one script to navigate back to the main menu. Note that the same “go to screen” and “return to main menu” script is used by all of these tests. Then if the screen is re-located, only 2 scripts need to be changed and all the automated tests will still work.
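
As a minimal sketch of this structure (in Python, with a made-up AppDriver class standing in for whatever real tool or driver you use, and an invented customer “R Jones” as test data), each automated test is assembled from shared navigation scripts plus a test-specific script:

```python
# A minimal sketch of one automated test built from modular scripts.
# "AppDriver" is a made-up stand-in for whatever UI driver or tool API you use.

class AppDriver:
    """Hypothetical stand-in for a real UI automation driver."""
    def select_menu(self, name):
        print(f"navigating to: {name}")
    def customer_name(self, customer_id):
        return "R Jones"   # pretend look-up of the customer record

# Shared navigation scripts - used by every test for this screen.
def go_to_customer_screen(app):
    app.select_menu("Customers")      # if the screen is re-located, change only here

def return_to_main_menu(app):
    app.select_menu("Main menu")      # ...and here

# A test-specific script (there may be many of these, some shared between tests).
def check_customer(app, customer_id, expected_name):
    assert app.customer_name(customer_id) == expected_name

# One automated test = three scripts, two of them shared with all the other tests.
def test_customer_r_jones(app):
    go_to_customer_screen(app)
    check_customer(app, "RJ042", "R Jones")
    return_to_main_menu(app)

test_customer_r_jones(AppDriver())
```

With this structure, a re-located screen means changing only the two navigation scripts, and every test that uses them still works.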

But now the question is: how many tests have you automated? Is it the 10 manual tests you started with? Or should you count automated scripts? Then we have at least 12 but maybe 20. Suppose you now find that you can very easily add another 5 tests to your original set, sharing the navigation scripts and 4 of the other scripts. Now you have 15 tests using 13 scripts – how many have you automated? Your new tests never were manual tests, so have you automated 10 tests (of the original set) or 15?

Wrong assumption 4) Progress in automation is linear (like testing). A “what percent completed” measure is fine for an activity that is stable and “monotonic”, for example running sets of tests manually. But when you automate a test, especially at first, you need to put in a lot of effort initially to get the structure right, and the early automated tests can’t reuse anything because nothing has been built yet. Later automated tests can be written / constructed much more quickly than the earlier ones, because there will (should) be a lot of reusable scripts that can just be incorporated into a new automated test. So if your goal is to have, say, 20 tests automated in 2 weeks, after one week you may have automated only 5 of those tests, but the other 15 can easily be automated in week 2. So after week 1 you have automated 25% of the tests, but you have done 50% of the work.

Eventually it should be easier and quicker to add a new automated test than to run that test manually, but it does take a lot of effort to get to that point.

Good progress measures. So if these are all reasons NOT to measure the percentage of manual tests automated, what would be a good automation progress measure instead? Here are three suggestions, with a small sketch after the list of how they might be calculated:

1) Percentage of automatable tests that have been automated. Decide first which tests are suitable for automation and/or that you want to have as automated tests, and measure the percentage automated compared to that number, having taken out tests that should remain manual and tests that we don’t want to automate now. This can be done for a sprint, or for a longer time frame (or both). As Alan Page says, "Automate 100% of the tests that should be automated."

2) EMTE (Equivalent Manual Test Effort): keep track of how much time a set of automated tests would have taken if they had been run manually. Each time those tests are run (automatically), you “clock up” the equivalent of that manual effort. This shows that automation is running tests now that are no longer run manually, and this number should increase over time as more tests are automated.

3) Coverage: with automation, you can run more tests, and therefore test areas of the application that there was never time for when the testing was done manually. This is a partial measure of one aspect of the thoroughness of testing (and has its own pitfalls), but it is a useful way to show that automation is now helping to test more of the system.
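
As a rough sketch of how these three measures might be calculated (in Python, with all of the numbers invented purely for illustration):

```python
# A rough sketch of the three measures above - all numbers are invented.

# 1) Percentage of automatable tests that have been automated
automatable = 80                 # tests we decided are worth automating
automated = 32                   # of those, automated so far
pct_automated = 100 * automated / automatable          # 40%

# 2) EMTE - Equivalent Manual Test Effort
manual_hours_per_run = 6         # manual effort the automated suite replaces per run
automated_runs = 25              # e.g. nightly runs since the suite went live
emte_hours = manual_hours_per_run * automated_runs     # 150 hours "clocked up"

# 3) Coverage - e.g. application areas now exercised by automated tests
areas_total = 40
areas_covered = 28
coverage_pct = 100 * areas_covered / areas_total       # 70%

print(f"{pct_automated:.0f}% of automatable tests automated, "
      f"EMTE = {emte_hours} hours, coverage = {coverage_pct:.0f}%")
```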

Conclusion. So if your manager asks you “What percent of the tests have you automated?”, you need to ask something like: percent of what? Of the existing tests that could be automated, or of those we have decided to automate? What about additional tests that would be good to automate but that we aren’t running now? And do you want to know about progress in time towards our automation goal, or literally a count of tests? These will be different, because automated tests are structured differently to manual tests.

It might be a good idea to find out why he or she has asked that question – what is it that they are trying to see? They need to have some visibility for automation progress, and it is up to you to agree something that would be useful and helpful, honest and reasonably easy to measure. Good luck! And let me know how you measure your progress in automation!

If you want more advice on automation, see the wiki that I am doing with Seretta Gamba at TestAutomationPatterns.org.



Tuesday, 8 April 2014

Testers should learn to code?

It seems to be the "perceived wisdom" these days that if testers want to have a job in the future, they should learn to write code. Organisations are recruiting "developers in test" rather than testers. Using test automation tools (directly) requires programming skills, so the testers should acquire them, right?

I don't agree, and I think this is a dangerous attitude for testing in general.

Here's a story of two testers:

  • Les has a degree in Computer Science, started out in a traditional test team, and now works in a multi-disciplinary agile team. Les is a person who likes to turn a hand to whatever needs doing, and enjoys a technical challenge. Les is very happy to write code, and has recently started coding for a recently acquired test automation tool, making sure that good programming practices are applied to the testware and test code. Les is very happy as a developer-tester.
  • Fran came into testing through the business, starting out as the user who was more interested in any new release from IT than the other users, and so becoming the “first user”. Fran got drawn into the user acceptance test group and enjoyed testing – finding things that the technical people missed, thanks to a good business background. With training in testing techniques, Fran became a really good tester, providing great value to the organisation – probably saving them hundreds of thousands of pounds a year by advising on new development and testing from a user perspective. Fran never wanted anything to do with code.


What will happen when the CEO hears: “Testers should learn to code”? Les’s job is secure, but what about Fran? I suspect that Fran is already feeling less valued by the organisation and is worried about job security, in spite of having provided a great service for years as an excellent software tester.


So what’s wrong with testers who write code?
  • absolutely nothing
  • for testers who want to code, who enjoy it, who are good at it
  • for testers in agile teams


Why is this a dangerous attitude for testing in general?
  • it reads as “all testers should write code”, and is taken that way by managers who are looking to get rid of people
  • not all testers will be good at it or want to become developers (maybe that's why they went into testing)
  • it implies that “the only good tester is one who can write code”
  • it devalues testing skills (we now want coders, not [good] testers – in fact, if coders can test, why do we need specialist testers anyway?)
  • tester-developers may "go native" and be pushed into development, so we lose more testing skills
  • it's not right to force good testers out of our industry
So I say, let's stand up for testing skills, and for non-developer testers!


Thursday, 21 June 2012

Is it dangerous to measure ROI for test automation?


I have been a fan of trying to show ROI for automation in a way that is simple enough to understand easily and that shows the benefit of automation compared to its cost.

I have been developing a spreadsheet with sample calculations (sparked off initially by Molly Mahai, and including an example from Mohacsi & Beer's chapter in the new book; some other people have also been influential - thanks). I have sent this spreadsheet out to around 300 people, including most of those who have attended my automation tutorials.

The problem with showing ROI is that it's hard to quantify some of the important factors, so I have focused on showing ROI using what is most straightforward to quantify - people's effort and/or time. This can be converted into money, using some kind of salary cost, if desired, and either effort or money can then be plugged into the ROI calculation: ROI = (benefit - cost) / cost.
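
As a small worked example of that calculation (the figures are invented for illustration, not taken from the spreadsheet):

```python
# Worked ROI example using the formula above, with invented numbers.
automation_cost = 200        # hours to build and maintain the automation
manual_effort_saved = 350    # hours of manual execution replaced by automated runs

roi = (manual_effort_saved - automation_cost) / automation_cost
print(f"ROI = {roi:.2f}")    # 0.75, i.e. a 75% return on the effort invested
```

The same calculation works with money instead of effort, if the hours are first multiplied by a salary cost.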

So basically, this is showing how a set of tests requires less human effort when those tests are automated than would be required if those same tests were run manually.

This is great, right? We have a clear business case showing savings from automation that are greater than the cost of developing the automation, so our managers should be happy.

Recently, however, I have been wondering whether this approach can be dangerous.

If we justify automation ONLY in terms of reduced human effort, we run the risk of implying that the tools can replace the people, and this is definitely not true! Automation supports testing, it does not replace testers. Automation should free the testers to be able to do better testing, designing better tests, having time to do exploratory testing, etc.

So should we abandon ROI for automation? I don’t think that’s a good idea – we should be gaining business benefit from automation, and we should be able to show this.

Scott Barber told me about Social ROI – a way of quantifying some of the intangible benefits – I like this but haven’t yet seen how to incorporate it into my spreadsheet.

In our book, there are many success stories of automation where ROI was not specifically calculated, so maybe ROI isn’t as critical as it may have seemed.

I don’t know the answer here – these are just my current thoughts!

Wednesday, 25 January 2012

How long does it take to write a book?

A number of people have asked me this, since our new book is now out.

This book took us 2 and a half years. This doesn’t include the effort put in by the case study authors and other contributors, so this book represents a lot of work! What exactly had we been doing all that time? I wondered that too, so here is where the time went.

In August 2009, I have my first note of our plan to solicit contributions for a new book on automation experience. We sent emails, put a call for contributions on my web site, and talked to people at conferences, and began gathering potential contributions.

I started keeping track of the hours we spent from December 2009. We had a big initial “push” (the first peak on the graph) and produced a “protobook” – 4 chapters with an introduction. We were sure this would be snapped up by a publisher!

We submitted to the publisher of our previous book in mid-February, but initially they weren’t very keen! This was a blow, as we were convinced this would be a great book! I tried several other publishers over the next few months, and got rejected; I continued to try and convince Pearson/Addison Wesley that they should publish our book.

They eventually relented in July and we signed a contract. We worked steadily on the book over the rest of that year, a total of around 400 hours between us. The complete draft manuscript, ready for independent review, was due on the 15th of April, and the final manuscript on the 15th of October. “No problem”, we thought.

However, we found that we did need to work at a more intense level - we spent another 300 hours to the end of April (the double peak with some time out in February), and another 300 hours to the end of 2011. The final peak was editing the final page proofs and doing the index, more work than we had anticipated at that stage. The total for us comes to just under 1000 hours. We don't know how much time the contributors spent, but if their time was equivalent to ours, the collaboration represents around 2000 hours of work - that's more than one working year.

We enjoyed working on this book and reading all the stories as they came in. There are many useful lessons, some heartfelt pain, and many gratifying successes among the stories.

You can follow the book on Twitter on @AutExpBook. The book tweets tips and good points every few days!

Thanks to all the book contributors, and to co-author Mark Fewster.

Saturday, 5 February 2011

Part 3. Certification schemes do not assess tester skill?

(Continuing from Parts 1 and 2, certification is evil? and some history about ISTQB)

Now I would like to say something about the criticism that the current schemes do not assess tester skill. I am thinking mainly of the Foundation level, as this seems to be where the criticism is mainly directed, and I am more familiar with that than the Advanced Levels.

In the main, I agree. There is a modicum of skill needed in testing techniques to be able to answer multiple-choice questions about them, but that is not the same as being able to test well in practice. And I also agree that the current ISTQB Foundation level is based more on learning facts than on practising the craft. Why is that? Because the current scheme was designed to meet a different need - addressing a basic ignorance about testing in general; it was not designed to assess testing skill.

I feel it is unfair for people to criticize a scheme because it doesn’t conform to what they think assessment of testers should be today, when the scheme was never meant to be that kind of assessment. It’s a bit like criticizing a bicycle for not powering you up a hill by itself – it’s not intended to do that.

When we were developing the original Foundation Syllabus with ISEB, I remember many discussions about what was possible and practical, including ways of assessing tester skills beyond basic concepts and vocabulary:

- interviews?

- looking at projects submitted from their workplace?

- observing them at work?

- substantial pieces of testing work, done either in a supervised exam-like setting or as a project to be handed in within a time frame?

All of these have significant challenges: for example, how to ensure fairness if different people interview in different ways; ensuring that the work being assessed was actually done by the person submitting it; the time commitment, scope and fair comparison of observation at work; and designing a testing task that would be applicable to people from different industries.

We decided that the place to start was with something very basic that could be built on later, something that would try to cover common ground that all testers should know and build on - hence it was called "Foundation".

Criticism is good – we all learn by having our ideas challenged. But current qualification schemes are not “evil”, even if there are aspects of their current implementation that are not as they should be. So let’s take the context of the certification schemes into account, and remember that what may be ideal for today was not possible 12 or 13 years ago.

Part 2. A bit of history about ISTQB certification

(Continuing from Part 1 where I was surprised at reactions to certification as "evil")

In the early 1990s, software testing was not a respected profession; in fact many thought of testing at best as a “necessary evil” (if they thought of testing at all!). There were few people who specialized in testing, and it was seen as a “second-class” activity. There was a general perception that testing was easy, that anyone could do it, and that you were rather strange if you liked it.

It was then that I decided to specialize in testing, seeing great scope for improvement in testing activities in industry, not only in imparting fundamental knowledge about testing (basic principles and techniques), but also in improving the view testers had of themselves, and the perceptions of testers in their companies. I developed training courses in testing, and began Grove Consultants, named after my house in Macclesfield. One of my most popular talks at the time was called “Test is a four-letter word”, reflecting the prevailing culture about testing. The UK’s Specialist Interest Group in Software Testing (SIGIST) was started by Geoff Quentin in 1989, and was the only gathering of testers in the UK.

It was into this context that the initiative to create a qualification for testers was born. Although I was not the initiator, I was involved from the first meeting (called by Paul Gerrard at a STARWest conference in 1997) and the earliest working groups that developed the first Foundation Syllabus, donating many hours of time to help progress this effort. This work was carried out with support from ISEB (Information Systems Examination Board) of the British Computer Society (meeting rooms, travel expenses and admin help). The testing qualification was modeled on ISEB’s qualifications in Project Management and Information Systems Infrastructure, which were perceived as useful and valuable in their respective sectors. One of the aims was to give people a common vocabulary to talk about testing, since at the time people seemed to be using many different terms for the same thing.

The first course based on the ISEB Foundation Syllabus was given in October 1998 and the first Foundation Certificates in Software Testing were awarded at that time. But an important aspect of the scheme was that it was not necessary to take a training course in order to get the qualification; you could just take the exam. (Some other schemes were based on attendance at courses which seemed too training-provider profit-oriented to us.)

The success of the Foundation qualification took everyone by surprise – there seemed to be a hunger for something that gave testers more respect, both for themselves and from their employers. It also gave testers a common vocabulary and more confidence in their work. The Foundation qualification was meeting its main objective of “removing the bottom layer of ignorance” about software testing.

Work then began on extending the ISEB qualification to a more advanced level (which became the ISEB Practitioner qualification) and also to extending it to other countries, as news of the qualification spread in the international community. I was a facilitator at the meeting that formed ISTQB in 2001 in Sollentuna, Sweden.

I became a member of the working party that produced the first ISTQB Foundation Syllabus in 2005, and I am amazed at how ISTQB has grown; it has certainly changed over the past six years. While working on the update to the Foundation book, I was rather surprised and disappointed at the apparent lack of review before the release of the 2010 Syllabus.

In Part 3 I return to the criticism that current certification schemes do not address tester skill.