Saturday, September 24, 2011

New Testing Book: "How to Reduce the Cost of Software Testing"

A new testing book has recently been published: "How to Reduce the Cost of Software Testing".


From the product description:
"Plenty of software testing books tell you how to test well; this one tells you how to do it while decreasing your testing budget."
Keeping in mind that testing is often first in line to be cut when budgets shrink, I see this book as valuable not only for testing teams but also for the people who drive software projects. In addition, the list of contributors is amazing: Matt Heusser, Michael Larsen, Markus Gärtner, Michael Bolton, Selena Delesie, Jonathan Bach, and Scott Barber, just to name a few.

Alek

Saturday, September 17, 2011

"Test Design" from AST

"Test Design" is the 3rd course in Black Box Software Testing series provided by Associaton for Software Testing (AST). As you can read in Cem Kaner's recent post this course should be soon available. I have successfully completed BBST Foundation and BBST Bug Advocacy and I think they both were great. I strongly recommend them everyone interested in professional software testing. Looking at the he bibliography and reference list (around 400 books and articles !!) for Test Design I am sure that the course #3 in BBST series will be also so much demanding and valuable. I am looking forward to it.

Alek

Monday, September 12, 2011

Tools for session-based testing

Session-based testing is a structured way of doing exploratory testing, developed by James Bach and Jon Bach. In the article Session-Based Test Management we can read:
"What we call a session is an uninterrupted block of reviewable, chartered test effort. By "chartered," we mean that each session is associated with a mission—what we are testing or what problems we are looking for. By "uninterrupted," we mean no significant interruptions, no email, meetings, chatting or telephone calls. By "reviewable," we mean a report, called a session
sheet, is produced that can be examined by a third-party, such as the test manager, that provides information about what happened"

As mentioned above, after each session the tester hands over a report with important information and results. There is an example Sample Session Report on James Bach's page. A session by design should be uninterrupted, which means that things like note taking should be done on the fly. This can easily be achieved with applications that support session-based testing.
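To give a flavor of what a session sheet contains, here is a trimmed sketch; the section names follow the sample report linked above, while the content is invented for illustration:

    CHARTER
    -----------------------------------------------
    Explore the file import dialog; look for problems
    that would block a basic import.

    START
    -----------------------------------------------
    9/12/11 10:00am

    TESTER
    -----------------------------------------------
    Alek

    TASK BREAKDOWN
    -----------------------------------------------
    #DURATION
    short (60 min)

    #TEST DESIGN AND EXECUTION
    70

    #BUG INVESTIGATION AND REPORTING
    20

    #SESSION SETUP
    10

    TEST NOTES
    -----------------------------------------------
    Tried empty, very large and malformed input files.

    BUGS
    -----------------------------------------------
    #BUG
    Importing a 0-byte file crashes the dialog.

    ISSUES
    -----------------------------------------------
    #ISSUE
    No test data available for files over 2 GB.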

1. Rapid Reporter, exploratory notetaking

This is a tool created by Shmuel Gershon. Rapid Reporter is really small, always stays on top of the screen, and doesn't interrupt the testing process. You type into the app any information worth noting. The note types, such as "Installation", "Bug", "Issue", etc., can be tailored to individual needs. In addition, you can easily assign a time range for the session.


What is really cool is that you can take a screenshot with just one click; moreover, the screenshot will appear next to the corresponding comment in the session report.


After the session the app generates a session report with all the entered information, along with the screenshots taken during the session. The report can be presented in, for example, HTML format.

2. Session Tester

This is a tool created by Jonathan Kohl. Before the session you need to specify: Tester Name, Mission, and Session Length.


Once you start a session, the Session Tester window opens, and you can type there any information learned during the session. The app also sits in the system tray and shows balloon notifications about the time left. There is also a nice little feature called "Prime Me": when you click the button, you get a random inspiring note which may help you during the session, e.g.:
- Consider self-reference.
- What are the boundaries?
- Try something negative.
- Testability first.
- Who's your client?
- What's the language?
- Consider the opposite.
- Unless...
etc.

The results of the sessions are initially saved to XML files, but there is also an option to produce a session report as a formatted HTML file.



3. Atlassian Bonfire
I haven't had a chance to work with Bonfire yet, but if a picture is worth a thousand words, a video must be worth a million :)




Alek

Saturday, September 10, 2011

Numberz Challenge - my approach

Recently Alan Page posted a testing challenge on his personal blog - Numberz Challenge. He attached a little application and described it this way:
"When you press the "Roll!" button, the app generates 5 random numbers between 0 & 9 (inclusive), and sums the numbers in the "Total" above. The stakeholder’s primary objectives are that the numbers are random, and that the summing function is correct."

And here was the challenge:

"If you want to play a little game for me and do some testing, test this app and tell me if it’s ready to ship (according to the stakeholder expectations)."


My approach to this challenge:

Learn as much as possible about the stakeholders and application context.

- Are there any other requirements besides those written down?
- What important information do we need to know prior to testing?
- Who is the customer?
- Where is this application going to be used?
- What are the consequences of shipping a defective product?
- How much time is given to test this product?
- Is this a standalone product, or just a part of something bigger?
- On what OS and hardware is this app going to be used?
- How long is this app intended to run without a restart?

Test against written requirements.

I clicked the "Roll!" button couple times and I could verify that application generates 5 numbers (0-9 inclusive), that numbers seemed to be randomly generated, that total were calculated correctly. This simple sampling however was not enough. To verify randomness you need way more tests. I was also curious if "Total" is always calculated correctly. To have more samples I hired automation tool. I have started with AutoIt and it worked out perfectly. I could process 10k rolls in about 60 seconds. Because I had pure blackbox approach I didn't know if restarting application might impact the results. So I decided to rolls numbers in 2 way :
- 20,000 times in a row without restarting the application
- 20,000 times, but restarting the application after every 1,000 rolls.
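The AutoIt script itself is just clicking the button and reading the two text fields, so the interesting part is checking the logged results. Here is a minimal Python sketch of that kind of check; it assumes the automation dumps each roll to a log line like "3 7 0 9 2 | 21" (the file name and line format are made up for illustration):

    # check_rolls.py - sanity checks over the logged rolls.
    # Assumed line format: five space-separated digits, a pipe,
    # then the Total reported by the app, e.g. "3 7 0 9 2 | 21".
    from collections import Counter

    digit_counts = Counter()   # how often each digit 0-9 was rolled
    bad_totals = 0             # rolls where Total != sum of the digits
    rolls = 0

    with open("rolls.log") as log:
        for line in log:
            digits_part, total_part = line.split("|")
            digits = [int(d) for d in digits_part.split()]
            rolls += 1
            digit_counts.update(digits)
            if sum(digits) != int(total_part):
                bad_totals += 1

    print(f"{rolls} rolls, {bad_totals} incorrect totals "
          f"({100.0 * bad_totals / rolls:.1f}%)")
    for digit in range(10):
        share = 100.0 * digit_counts[digit] / (5 * rolls)
        print(f"digit {digit}: {share:.2f}% (expected ~10%)")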

Based on the new results I created a distribution chart for the totals (which should be roughly bell-shaped).



Here we can spot some deviations from the expected distribution, e.g. '20' shouldn't appear more often than '21'; the exact expected shape is easy to compute, as the sketch below shows. Then I focused on the individual generated numbers to see whether all digits appear equally often.
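For reference, the exact distribution of the Total for five fair digits can be computed by repeated convolution; a short plain-Python sketch (no libraries needed):

    # expected_totals.py - exact distribution of the sum of five
    # independent uniform digits 0-9, via repeated convolution.
    counts = {0: 1}  # ways to reach each total using zero digits so far
    for _ in range(5):
        next_counts = {}
        for total, ways in counts.items():
            for digit in range(10):
                next_counts[total + digit] = next_counts.get(total + digit, 0) + ways
        counts = next_counts

    all_combos = 10 ** 5  # number of equally likely 5-digit rolls
    for total in (19, 20, 21, 22, 23):
        print(f"P(total={total}) = {counts[total] / all_combos:.4f}")

Among other things, it confirms that a fair generator should show 21 slightly more often than 20 (the peak is at 22-23).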



Number 3 appeared about 13% of the time, whereas it should appear around 10% of the time. That was the first possible bug (see the chi-square sketch below for why this is more than sampling noise). I also noticed that over 20,000 rolls the Total was calculated incorrectly 400 times, which could be another possible bug.
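With 100,000 drawn digits, a 13% share for one digit is far outside random fluctuation, which a chi-square goodness-of-fit test makes obvious. A sketch with illustrative counts (digit 3 inflated to ~13%, roughly matching what I saw; the exact numbers here are made up):

    # chi_square.py - is the digit distribution consistent with uniform?
    # 20,000 rolls x 5 digits = 100,000 draws; counts are illustrative.
    observed = [9650, 9700, 9600, 13000, 9550, 9700, 9600, 9650, 9800, 9750]

    expected = sum(observed) / len(observed)  # 10,000 per digit if uniform
    chi2 = sum((o - expected) ** 2 / expected for o in observed)

    CRITICAL_9_DF = 16.92  # chi-square critical value, 9 df, 5% significance

    print(f"chi-square statistic: {chi2:.1f}")
    if chi2 > CRITICAL_9_DF:
        print("reject uniformity -> the generator looks biased")
    else:
        print("no evidence against uniformity at the 5% level")

For counts like these the statistic comes out around 1000, versus a critical value of about 17, so the bias is not a statistical fluke.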

Test against non-written requirements.

The application had some problems with closing; I encountered this on both Windows 7 and Windows XP. I also observed the memory usage after 50,000 rolls and after running the application for 24 hours without a restart, but there wasn't any significant change in memory usage.

Ship / No Ship

Based on the information I learned from Alan about the app and its context, I came up with the following conclusion.

Hard to say, really, keeping in mind that a shipping decision is always a business decision. All in all, we know that this app doesn't meet all the stakeholder expectations, because the numbers are not random and the total is not always calculated correctly. On the other hand, there will be around 50 rolls per year per car dealer, where the randomness deviation may not be easily spotted. In addition, statistically there may be about one bad sum calculation per dealer per year (2% of 50 rolls). Is this acceptable? I don't know, but the app certainly doesn't meet the given requirements.


Here you can read the summary post from Alan: Numberz Winnerz

Alek

Friday, September 9, 2011

Questioning Simple Test

This is the second post about a testing challenge that was given to my testing team during one of our internal testing workshops. The presenter was trying to show us how often requirements are misunderstood, even by technical people, which often leads to false assumptions.

Most of the attendees suggested turning 1) card #1 (the card with the vowel 'A' on one side), expecting an even number on the other side, and 2) card #3 (the card with the even number '4' on one side), expecting a vowel on the other side. That was a trap. Based on the given requirements we can't say that if there is an even number on one side there must be a vowel on the other side. It works only the other way around: if there is a vowel, then there must be an even number. It was a pure false assumption.
According to the presenter, we should turn 1) card #1 (with the vowel 'A' on one side) to verify that there is an even number on the other side, as a positive test, and 2) card #4 (with the odd number '7' on one side), expecting only a consonant on the other side, as a negative test; a small sketch of this logic follows below. This challenge, however, was not as easy as it seemed, because you can spot more false assumptions in it.
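The presenter's reasoning can be written down mechanically: a card is worth turning only if some hidden face could falsify the rule "if one side shows a vowel, the other side shows an even number". A small Python sketch; like the presenter, it assumes every card has a letter on one side and a number on the other (the post doesn't say what card #2 shows, so 'K' is a placeholder):

    # wason.py - which cards can falsify "vowel => even number on the back"?
    VOWELS = set("AEIOU")

    def is_even_digit(face):
        return face.isdigit() and int(face) % 2 == 0

    def worth_turning(visible):
        """True if some hidden face could break the rule."""
        if visible in VOWELS:
            return True   # hidden side is a number; an odd one breaks the rule
        if visible.isdigit() and not is_even_digit(visible):
            return True   # hidden side is a letter; a vowel breaks the rule
        return False      # consonants and even numbers can never break it

    for card in ["A", "K", "4", "7"]:
        verdict = "turn it" if worth_turning(card) else "no need to turn"
        print(f"card showing {card!r}: {verdict}")

It prints exactly the presenter's answer: only 'A' and '7' are worth turning.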

First of all, I think there is a problem with the statement: "Can you optimize testing and have only two cards that you might turn to test and verify that your system works correctly."
We can never be 100% sure that something works correctly when it comes to testing a software application, yet in this exercise, according to the presenter, two simple tests were enough to prove that the system works! It's like testing a calculator's summing function: when you enter '2+2' and get '4', the result is correct, but you can never be sure whether that '4' came from the summing function or from a hardcoded result value (a toy illustration of this follows below).
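To make the calculator analogy concrete, here is a deliberately broken "summing function" that still survives the single happy-path check:

    # toy example: a hardcoded "sum" that passes one test and is still wrong
    def broken_add(a, b):
        return 4  # hardcoded; only looks right when the true sum is 4

    assert broken_add(2, 2) == 4   # the single check passes...
    print(broken_add(3, 5))        # ...but this prints 4 instead of 8

One passing test shows the output was right once; it doesn't show the mechanism is right.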
Secondly, let's focus on this requirement: "system returns playing cards with numbers or characters on both sides". English is not my mother tongue and maybe that is the problem, but based on the above statement I can't assume that each card has a letter on one side and a number on the other. I think that "numbers or characters on both sides" suggests that we can also have cards with only letters or only numbers on both sides.

Alek