Saturday, April 30, 2011

Thinking like a tester and lateral thinking

There is a common belief that testers are "negative" thinkers, that testers complain, that they like to break stuff, and that they take a special thrill in delivering bad news. Reading the "Thinking Like a Tester" chapter of the Lessons Learned in Software Testing book shows an alternative view. For example: testers don't like to break things -> they like to dispel the illusion that things work.
With every lesson from this chapter we learn how testers can develop their minds and how different ways of thinking can help them become better testers. One of the lessons is:

Lesson 21. Good testers think technically, creatively, critically and practically

According to the authors, all kinds of thinking fit into testing. They consider, however, four major categories of thinking worth highlighting:
Technical thinking - the ability to model technology and understand causes and effects
Creative thinking - the ability to generate ideas and possibilities
Critical thinking - the ability to evaluate ideas and make inferences
Practical thinking - the ability to put ideas into practice

This lesson reminds me of another way of thinking which is not a new category, but is certainly worth mentioning in this context - lateral thinking. I read about this term for the first time in one of Edward de Bono's books: "The Use of Lateral Thinking".
According to Edward de Bono:
Lateral thinking is solving problems through an indirect and creative approach, using reasoning that is not immediately obvious and involving ideas that may not be obtainable by using only traditional step-by-step logic.

We can read in this book that lateral thinking is not only about solving problems; it also helps us perceive old things in new ways and generate new ideas. De Bono also illustrates lateral thinking in the following way:
"You cannot dig a hole in a different place by digging the same hole deeper"

I think this relates very much to testing. Sometimes doing more tests in the same way or in the same place does not give us any new information. To learn something new we have to change direction.
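To make this concrete, here is a minimal sketch in Python (the parse_amount function and the input values are hypothetical, purely for illustration) of the difference between repeating the same test and changing direction:

    def parse_amount(text):
        """Hypothetical function under test: parses a money amount such as '12.50'."""
        return round(float(text), 2)

    # Digging the same hole deeper: the same check repeated many times
    # gives no new information after the first run.
    for _ in range(100):
        assert parse_amount("12.50") == 12.50

    # Changing direction: vary the inputs and watch where the illusion
    # that "it works" starts to break down.
    for text in ["12.50", "12,50", " 12.50 ", "-0", "1e3", "", "12.505"]:
        try:
            print(repr(text), "->", parse_amount(text))
        except ValueError as error:
            print(repr(text), "-> raised", error)

The first loop never teaches us anything new; the second one surfaces questions about decimal separators, whitespace, empty input and rounding after a single pass.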

In this book we can also read that people get used to one way of thinking. It reminds me that very often when I see something new (e.g. a piece of software), my first way of using or running it immediately becomes a habit - the one and only way of using it. I limit myself and forget that this way of using the thing is just one of many. Applying lateral thinking reminds us that we should not settle on a single way of doing something just because it worked the first time.

Alek

Tuesday, April 26, 2011

Book Review: Lessons Learned in Software Testing

I had heard about this book so many times, but thanks to the company library it has finally found its way into my hands :)

Lessons Learned in Software Testing
A Context-Driven Approach
by
Cem Kaner
James Bach
Bret Pettichord

Inspired by Michael Larsen and his book reviews, I decided to write my own book review. I think it will help me understand this book better and also push me to think about its content for longer than a couple of seconds.

This book offers advice not only for people in the testing field but for everyone working around software testing and software quality.

Chapter 1. The Role of the Tester

Do you agree with the statements below?

- Testers assure quality
- Testers should be able to block the release
- We have a bug-free product
- The product can be tested completely
- Testers should focus only on requirements
- Programmers are on the opposite side from testers

If you agree with any of the above, I strongly recommend reading this chapter to see a different point of view. The authors present their opinions on many misconceptions regarding the role of the tester and of testing. It's worth reading their arguments and the accompanying examples.
Below are a couple of lessons from this chapter with my own explanations:

"You are headlight of the project"
We don't drive the project and we don't make the decisions; we are on the project to find information about the product and present that information to every stakeholder who is interested in it, so they can make better decisions.


"You will not find all bugs"
The authors argue that it is impossible to find all the bugs in a product unless the product is very simple or you have a limited imagination. During my career I have sometimes heard the phrase "bug-free software", which reflects the same misconception. We would have bug-free software if all the existing bugs were found, but we don't, because it's impossible to check every possible place in the product under every possible combination of circumstances. Even if we had enough resources to achieve this, we should still remember that what is a feature for one person can be a bug for someone else. I like the example that comes from an AST BA exam question:

"Suppose we revised a program so that it looked up all display text (e.g. menu names) from a text file on disk, so the user could change the language of the program at any time (e.g. French menu text to Chinese). Suppose too that, because of this change, the new version is much slower than the last, making some people unhappy with it. How we would evaluate this. Is it bug or feature ?"


"Mission drives everything we do"
Keeping in mind that complete testing is impossible, we should have a clear mission for what we want to achieve with testing. Some example test missions from the book:

"Find important bugs fast"
"Prove a general assessment of the quality of the product"
"Do whatever is necessary to satisfy particulate client"


The mission may change from place to place, from context to context, and from project to project, but we should still have clear guidance for our testing activity.

Alek

P.S.
The initial plan was to create a review for each chapter; however, I don't think that's a good idea. The book is packed with so many tips, suggestions and insights that each lesson deserves to be reviewed separately. Instead of reviewing all the chapters, I will try to focus on some of the lessons in later posts.

Wednesday, April 13, 2011

What I learned from Skype coaching with James Bach

I was lucky to have another opportunity for Skype coaching with James Bach. This time I was interested in learning more about exploratory testing (ET).

Basic definitions

James started the session by asking questions to understand what ET means to me compared to non-ET. I knew the ET definition - that learning, test design, and test execution happen in parallel and are mutually supportive - but I had a problem trying to explain it in my own words, which simply revealed that I didn't really understand it well.

So we started from the beginning: what is testing for me, and what is the difference between exploratory and non-exploratory testing? James used examples:

If I stand behind you and tell you what to type and what to look at - is that ET?

If I tell you that I'm doing ET, and you see me type on the keyboard and move the mouse, and I appear to be testing, and you see no script, and I insist that I'm doing ET-- is that ET or not?


So what is exploratory testing?

All testing that seems free is actually guided by unconscious impulses, and we cannot be fully aware of where our ideas come from. ET is also a self-managing process that acts upon itself. To sum up, all good testing is to some degree exploratory.

How to be better at ET?

I asked James how to prove that what we do is somehow exploratory. It turned out to be a silly question. James replied that there is no need to prove it; we should rather focus on:

* developing our skills
* learning how to spot biases and ruts
* using a variety of methods
* using random testing (a small sketch of what this might look like follows the list)
* learning from experiences
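
As a minimal sketch of what random testing might look like (the sort_numbers function and the invariants are hypothetical, just to show the idea):

    import random

    def sort_numbers(items):
        """Hypothetical function under test."""
        return sorted(items)

    # Generate inputs we would never think to type by hand and check invariants
    # instead of a single hand-picked expected value.
    random.seed(42)  # a fixed seed keeps any failure reproducible
    for _ in range(1000):
        data = [random.randint(-10**6, 10**6) for _ in range(random.randint(0, 50))]
        result = sort_numbers(data)
        assert all(a <= b for a, b in zip(result, result[1:]))  # output is ordered
        assert sorted(result) == sorted(data)                   # same elements, nothing lost or added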

"The reason we talk about ET is because we want to learn how to manage ourselves well"
I see this as a key concept in improving our testing skills.

How to spot exploratory testing?

We should start by asking questions:
1. Where do the test procedures come from?
2. Who controls the testing?
3. Is there a feedback loop that modifies testing from within the testing process?

The second part of the session was a challenge. The key question was how many tests I could spot in a presented image. It was a trap: no image can be called a "test". A test is, of course, a human activity, and we cannot reasonably say that any image consists of "tests". An image can only facilitate tests.

It was a tough session, but I think I did better than in the previous one. I still see a problem, however, in applying my knowledge. Even though I didn't fall into any obvious traps, I couldn't explain my reasoning well.

Alek

PS. If you want to learn more about testing in an exploratory way, check Michael Bolton's resource page