Thursday, May 26, 2011

Important skill for an exploratory tester

One of the ways to learn something new or to improve as a tester is to be challenged by someone more experienced. I first experienced such a "way of learning" during a coaching session with James Bach. This time the challenge came from Michael Bolton.
"Michael: One KEY skill for an exploratory tester is to find out what the requirements REALLY are. What ARE the ways of finding out? In general, what are the sources of information on a project?"

I started with a question: what do you mean by "requirement"?

"Michael: A requirement is /what someone important wants/ as a part of the completed product.
Notice that /what someone wants/ is not necessarily written down. A requirement is not a piece of paper, or something written on one. The paper and the writing are /representations/, literally re-presentations, of /ideas/."


The above idea really opened my eyes. How often we rely on written documents alone, treating them as the one and only oracle. It's hard even for technical people to articulate what they really want. Now imagine that you need to describe something you have never seen before. It's hard to put everything on paper, and easy to miss something that may turn out to be important in the end. It's good to keep in mind that what we read in a formal document is only a re-presentation of the client's idea. And now a question: how often have you aimed with your tests to cover the specification only (assuming that your test mission is something more than specification coverage)?

I started answering the challenge with the following ideas: people, stakeholders, previous bugs, written specifications, developers' know-how, the service desk and its database of incidents, regulations (e.g. FDA), previous versions/products, meetings with clients/stakeholders.

Michael catalogued my ideas as follows:

/References/ -> Things that we can point to. A reference allows you to say, "It says here..." or "It looks like this..." (previous bug reports, specifications, regulations, previous versions of the product).

/Conference/ -> Information obtained by /conference/ - interaction with other people (people, stakeholders, programmers' know-how, meetings with clients).

/Experience/ -> The program provides us with information via /experience/, the actual empirical interaction with the product.

/Inference/ -> When you reason about information that you have, extend ideas, weigh competing ideas, and figure out what they mean or what they might suggest, that's information from /inference/.

Michael summed up the challenge as follows:

"Michael: When we think about exploring, we think about identifying and developing the information we have. That's a key part of the *learning* mission of exploratory testing."


The coaching session was great - demanding and fun at the same time. Thanks to it, I realized that I need to get better at identifying and developing the information I already have. Going beyond what is written in a formal document should give me a broader perspective and help me create better tests in the future.


Alek

Tuesday, May 17, 2011

FDA, Exploratory Testing and Michael Bolton

One of the abbreviations I learned during a project for one of my clients was "FDA".
The FDA (Food and Drug Administration) is an agency within the Department of Health and Human Services responsible for protecting public health in the USA. The project I was involved in was strongly "validated" and had to be compliant with FDA regulations (21 CFR Part 11 - Electronic Records and Electronic Signatures).

How did this affect testing?
Every single piece of testing (usually a test script) had to be formally reviewed and signed, before and after execution, by three people - six signatures per test script in total. My first impression was that this project was really well tested. But I soon noticed several problems. This huge bureaucracy had many consequences, for example a very long loop between the test idea (the moment of creating the script) and test execution (the moment when the formally written test steps were used for "testing").
Furthermore, the testers responsible for creating test scripts seldom executed their own scripts. This approach was compliant with the General Principles of Software Validation and the guidance for Electronic Signatures Validation provided by the FDA. But from my point of view, under such enormous obligatory bureaucracy, something that was supposed to be "testing" becomes "checking" and routine office work. This 'heavy scripted testing' highlighted the problem of separating test creation from test execution. Even though we covered all the written requirements with test scripts, we didn't go beyond them: we didn't find new information about the product, we didn't offer more insights to stakeholders, we didn't use the diversity of testers' experiences and skills - simply put, we were not serving the project as well as we could have. On top of that, despite the huge amount of resources, we were still struggling to meet deadlines, leaving people exhausted and unhappy.

Here is a simple example:
Imagine two testers. Tester A covers a requirement with two test scripts, one for the positive and one for the negative test case. Tester B executes those two test scripts, usually a month after the scripts were created. What are the results of such an approach? For example, you will never hear statements like:

"Wait, let’s do this test first instead of that test" or
"Hey, I wonder what would happen if…", or
"Bag it, let’s just play around for a while"*


There was simply no exploratory testing. To every suggestion of change, I heard that we have to follow FDA rules and keep "scripting" requirements. This made me wonder: how on earth can some government agency know better than our company what is best for testing? I took this problem to Michael Bolton. I was lucky, and he found a minute to discuss it with me.
What I learned from this coaching session:

1. There is no procedure or script for producing test scripts.
This means that, to some extent, we all apply an exploratory approach. When we create any test script, we combine our subjective knowledge of the program, our previous testing experience, and the formal requirements. The same approach rarely works for every test script, and we have to decide what is best along the way - which is an argument that every scripted process "comes from" an exploratory process, and not the other way around. Michael also pointed out that "whatever benefit the script provides after you've done your exploration, the real benefit came when you found problems /during/ the exploration", and I agree with that. From my experience, more problems are often found during test script creation ("exploration") than during actual test script execution.

2. There is no single statement in the FDA guidance saying that we have to apply scripted testing to achieve validation and verification.
I reviewed the FDA guidance and regulations, and indeed I couldn't find a word saying that test scripts are a must. They mention evidence, which can be gathered with any approach to testing.
He also suggested reading about the principle of the least burdensome approach. The document is not directly related to testing, but it contains a few interesting statements, for example:
"(...) FDA is an agency committed to fostering innovation and ensuring timely public access to beneficial new products. A least burdensome approach should be used in almost all regulatory activities. Application of the least burdensome principles to premarket requirements will help to reduce regulatory burden and save Agency and industry resources (...)"

I see the above as a suggestion for a well-balanced approach to every aspect of a project validated under FDA regulations. For example, why can't we be more flexible: keep enough evidence of executed scripted tests while, on the other hand, giving testers the freedom to explore the product, so they can use their experience and unique testing minds in the best possible way.

3. What can we change ?
- Think in terms of reference, inference, conference, and experience. Think in terms of keeping the design activities closely linked to execution. Instead of telling people what steps to follow, provide them with a motivation or a risk or an information mission.

- For less experienced or less skillful testers: supervise them and coach them. Train them.

- When there are specific things that need to be checked, or specific procedures that MUST be followed - specify those.

- Provide them with guidewords for the sorts of problems that you want them to focus on.

- Another thing to get them to do: keep notes of observations, questions, concerns, and confusions. Talk about them after the testing session, or get help during it.

- If someone wants to know /exactly/ what you did, consider a screen recorder (e.g. http://www.bbsoftware.co.uk/BBTestAssistant.aspx ).

Alek

*(I borrowed the above quotes from Michael Bolton.)