2009-05-25

Test Writing Guidelines

In one of his latest blog entries, a friend of mine who writes about system engineering discussed the basic design questions. When thinking of how to start writing a test plan, you need to make sure that the plan provides answers to a very similar set of questions:

  1. What: what is the system supposed to do? - functionality in general
  2. How: what are the steps for the above? - functionality in detail
  3. How much: basic estimation of capacities / throughput - load and system benchmarking
  4. When: when do you want the solution?

We need to cover the first three questions if we are to write a full testing specification. When approaching the task of writing these tests we should not contemplate the feasibility of the tests, but rather create what I like to refer to as a "Tests wish list". Only after covering all relevant aspects of the feature / component / system and reviewing this can we ask and answer the fourth question. In an idealistic world time is not a factor, but in the real world we need to take the following into consideration:

  • delivery commitments in respect to dates or content - which may affect the order of the tests
  • simulation capability - on systems handling external inputs, can we simulate this type of input, and how much will it cost us? and so on.

However, the pure test plan should not include these aspects, since making allowances in the test writing phase will lead to not testing the system.
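To illustrate the "wish list first, feasibility later" idea, here is a small sketch in Python; the test names and helper functions are invented for the example, not taken from any real tool:

```python
# A minimal sketch of the "Tests wish list" idea: write every test first,
# ignoring feasibility, then annotate feasibility in a separate review pass.
# All names here are illustrative.

def build_wish_list(cases):
    """Record every test we would ideally run -- feasibility is not a factor yet."""
    return [{"name": c, "feasible": None} for c in cases]

def review(wish_list, infeasible):
    """Only after the list is complete do we mark what we cannot run yet."""
    for item in wish_list:
        item["feasible"] = item["name"] not in infeasible
    return wish_list

wish = build_wish_list(["basic routing", "failover", "10x load burst"])
reviewed = review(wish, infeasible={"10x load burst"})
print([i["name"] for i in reviewed if i["feasible"]])
```

The point of keeping the two passes separate is exactly the one made above: the wish list stays a pure test plan, and the compromises live in a later, explicit step.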

2009-04-27

Virtualization and its contribution to QA

There are many benefits to using virtual machines in testing. First, we need to bear in mind that applications change, and QA must test different applications under different conditions. For example, testing different UI applications on different host types will potentially require a lot of resources, which in these times of financial instability may pose a problem, since in theory it requires a separate machine for each type of set-up. In the same manner, when maintaining a large installed base of versions, you will need to maintain different machines running different versions of the SUT (System Under Test).

One of the most cost-effective ways to provide for this need is to run multiple virtual machines. This solution allows running countless versions in parallel. It is effective for both QA and Support, since one can simulate and retest defects knowing with full confidence that the system is identical to the one of the customer reporting the problem. Furthermore, virtual machines can be used to allow easier and simpler deployments or upgrades, since you can send the machines preconfigured in advance from the factory.

As can be seen, there are many advantages to using virtual machines in QA.
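Just to illustrate how fast the set-up combinations grow, a small sketch (the host types and versions are invented examples):

```python
# A hypothetical illustration of why set-up combinations explode:
# every (host type, SUT version) pair needs its own environment --
# one physical machine, or one VM snapshot.
from itertools import product

host_types = ["WinXP", "Win2003", "RHEL4"]      # invented examples
sut_versions = ["3.1", "3.2", "4.0"]            # invented examples

environments = list(product(host_types, sut_versions))
print(len(environments))  # 3 hosts x 3 versions = 9 set-ups
```

Every new host type or supported version multiplies the count, which is exactly why a shelf of VM snapshots beats a room full of dedicated machines.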

2009-03-30

Formal QA Education - do we need it

I recently attended a conference regarding a new release of the HP Quality Center product.

One of the guest speakers was a person involved with the ITCB organization.

The ITCB is the Israeli branch of the international ISTQB. The organization is involved with certifying QA testers and with trying to set criteria which will:

  • mark the level of QA engineers
  • create a common language used by all
  • provide QA with tools and skills to perform their jobs more efficiently
  • grant official and formal acceptance of QA as a credible occupation
After we returned from the conference, an argument arose between us regarding the actual need for such an organization and the advantages and disadvantages of such a certification.

On the Pro side we noted the main goals of a formal common language and of developing better personal testing capabilities, which young testers tend to lack.

On the Con side, we were concerned that this course / certification will become mandatory and therefore block the acceptance of otherwise capable engineers for QA positions.

My opinion is that the general idea is sound, will contribute to QA professionalism wherever implemented, and may put an end to the low opinion of QA personnel which I mentioned in one of my previous blog entries.

However, I think the certification should be a part of the training and not a threshold factor preventing the acceptance of otherwise capable engineers.

I can testify from personal experience that most of my QA team were technically capable people who were mentored and bloomed into good QA engineers.

Further to this, a diagram of the required skill mix (not reproduced here) shows that Quality Assurance skills are only part of the makings of a good QA engineer.
To conclude, I think this is the proper way to approach certification, rather than using it as a filtering tool.

2009-03-10

Version Releases - how do we make it comprehensible

One of the most obvious tasks QA is responsible for is the version release at the end of the testing cycle.

First, let me emphasize that I do not believe the version release is solely QA's responsibility; the release process is a lengthy one, starting with the designer / programmer and ending with QA.

Next, we must bear in mind: what is the purpose of the release meeting?

I believe this meeting is intended to provide a "Version Status Report" to all the relevant stakeholders, for example:
R&D Manager
Marketing / Sales
Project Managers
..

It is pointless to expose this extensive list of people to a dry list of bugs and descriptions.

I believe it is up to QA to make a clear presentation of the version status.

For example, instead of presenting a table listing bugs, the first page of the presentation should be a "Grade" of sorts for the version:

The version is GOOD, MEDIUM or BAD... we can even colour code it for better presentation, for example:

GOOD will be marked in GREEN
MEDIUM will be marked in ORANGE
BAD will be marked in RED

These grades can be discussed and agreed upon in advance, where each grade corresponds to a certain number of bugs, e.g.:

GOOD = 0 "Show Stoppers", X "High", Y "Medium", Z "Low"
MEDIUM = ...
BAD = ...

Management will first get a general idea as to the version quality,
and then, if they wish to get more details, they can drill down and see
...
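As an illustration, such a grading scheme could be sketched like this; the thresholds below are invented placeholders for the X, Y and Z values that the teams agree on in advance:

```python
# Hypothetical thresholds -- the actual numbers must be agreed upon in
# advance by QA, R&D and management.
THRESHOLDS = {
    "GOOD":   {"Show Stopper": 0, "High": 2, "Medium": 10, "Low": 20},
    "MEDIUM": {"Show Stopper": 0, "High": 5, "Medium": 25, "Low": 50},
}

def grade(bug_counts):
    """Return GOOD / MEDIUM / BAD for a dict of severity -> open bug count."""
    for g in ("GOOD", "MEDIUM"):
        limits = THRESHOLDS[g]
        if all(bug_counts.get(sev, 0) <= limit for sev, limit in limits.items()):
            return g
    return "BAD"  # anything exceeding the MEDIUM limits

print(grade({"Show Stopper": 0, "High": 1, "Medium": 4, "Low": 7}))   # GOOD
print(grade({"Show Stopper": 1, "High": 0, "Medium": 0, "Low": 0}))   # BAD
```

A single show stopper drops the version straight to BAD here, which matches the spirit of the table above; the rest is negotiable.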

Before going any further, I'd like to discuss the platform on which we present the release...

While I may be breaking some sort of TABOO, I do not think Word, or any other form of document, is the way to go.

I think web interfaces will serve us better, providing better clarity and flexibility.

The whole idea revolves around trying to avoid data overload.

I believe we should not expose all the data flat out, since it will confuse the higher levels of management.

I propose using a hierarchical rather than a flat description.

This is easily achieved using web pages.

The main page will contain a table listing the released Base Lines alongside their grades.

Each BL will be a link allowing one to view the BL's content and the elements making up its grade (number of bugs, coverage rate).

Each bug count will be a link to a table listing exactly which bugs they are.

Last but not least, each BL should have a version delta page listing the new features added between the last released BL and the new one.

For complex systems, a Components Table should be available, detailing the different component versions and, if needed, installation instructions pages.
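To sketch what might sit behind those web pages, here is an invented example Base Line entry and the top-level view generated from it (all names and numbers are made up for illustration):

```python
# Invented example data: one Base Line entry in the hierarchical report.
release_report = {
    "BL-4.0.2": {
        "grade": "MEDIUM",
        "bugs": {"Show Stopper": 0, "High": 3, "Medium": 12},
        "coverage": "87%",
        "delta": ["new login screen", "SNMP trap support"],
    }
}

def main_page(report):
    """Top level: just Base Lines and grades -- the details are one click away."""
    return [(bl, data["grade"]) for bl, data in report.items()]

print(main_page(release_report))  # [('BL-4.0.2', 'MEDIUM')]
```

The nesting mirrors the drill-down: management sees the grade, and everything below it (bug counts, coverage, delta) is reachable only when someone asks for it.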


To be Continued...

2009-03-01

Testing Plans - Optimal vs. Real life

Up until now I have discussed what we should strive for in QA.

However, one must bear in mind that there is a difference between an extensive testing plan and its actual implementation.

One of the best signs of QA maturity is the ability to prioritize and plan ahead, taking into consideration the feasibility of an actual test plan.

In the normal flow, the QA engineer will map out the entire test plan, covering all the little bits and pieces that need to be tested.

The QA team leader will then take the raw plan and organize it according to timetable limitations.

This re-organization can consist solely of re-shuffling the test order to allow a better, more effective release picture in a shorter time.

However, in some cases a little more juggling may be called for, when the time frame needed for a full test cycle exceeds delivery commitments.

When faced with the second option, we as QA will need to perform some risk assessment in order to decide which parts of the tests can be dropped.

The risk assessment process is a very critical phase, since it calls for an ability to see the whole system... a wrong assessment will lead directly to system problems on the customer's system.

The risk assessment can and should involve others besides the QA team leader.

In any case, the tests which were dropped and the risks taken should be voiced clearly, in order to inform all interested parties of the compromises made in the test plan.
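As a rough sketch of what such risk-based pruning could look like (the scores, hours and test names are all invented for the example):

```python
# A minimal sketch of risk-based pruning: each test gets a risk score
# (the impact of skipping it) and a cost in hours; we keep the riskiest
# tests that fit the time budget and report everything we dropped.
def plan_under_deadline(tests, budget_hours):
    """tests: list of (name, risk, hours). Returns (kept, dropped)."""
    kept, dropped, used = [], [], 0
    for name, risk, hours in sorted(tests, key=lambda t: -t[1]):
        if used + hours <= budget_hours:
            kept.append(name)
            used += hours
        else:
            dropped.append(name)  # must be voiced clearly to all parties
    return kept, dropped

tests = [("failover", 9, 16), ("load", 7, 24), ("GUI cosmetics", 2, 8)]
kept, dropped = plan_under_deadline(tests, budget_hours=40)
print(kept, dropped)  # ['failover', 'load'] ['GUI cosmetics']
```

The `dropped` list is the important output: it is exactly the list of compromises that has to be communicated, not silently forgotten.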

2009-02-10

Support and Deployment - What is the best way

There are several methods of implementing a deployment; I would like to mention but two, which basically represent the two far ends.

The All-In-One (my name for it) or the Specialists (my name for it).

The All-In-One approach basically means that the manufacturer handles all parts of the Installation / Deployment / Support through one person.

The merit of this method is that the customer interacts with one person, who is responsible for providing answers to all of the different questions.

This type of approach is feasible with certain limitations:
  1. The person handling this role has the required capabilities and in-depth understanding of the system.
  2. The system's complexity allows a "one man show".
However, if these limitations are not met, then basically what we get is embarrassment in front of the customer, since problems are bound to arise, and when the "One Man Show" is unable to provide answers in time, or at all, it reflects badly on the provider.

The second method usually works for high-complexity systems: the manufacturer assigns a team to a project, where each team member is responsible for a segment of the system and is capable of providing in-depth insight into his part of it. This may also earn a certain amount of respect from the customer, since he receives exact, detailed answers to each of his questions.

There are obviously downsides to this type of customer approach, since:
  1. it requires greater resource allocation
  2. the customer may get the feeling of being directed from one person to another according to the question he brings up - this can be handled by assigning a strong PM to the customer.
Basically, both methods have advantages and disadvantages; it is only a matter of deciding which is better for the system the manufacturer is providing.

If we choose the All-In-One method for a complex system, we risk having the one man become both an embarrassment to the company and a "nuisance" to R&D, since he is unable to handle requests and queries from the customer.

If we choose the Specialists approach for a simple system, we are wasting resources and reducing profitability unnecessarily.

Therefore, choosing wisely is critical.

2009-02-02

Bug Status List

Since one of the focal points of QA is testing and its by-product - the bug - I thought I should spend some time detailing the bug status list. The list below is the one I think is most efficient; mind you, in my organization we do not use it in exactly the same manner :(

Open - The initial state of the bug, when QA has reported it for the first time.

Non-Reproducible - This state is marked in order to let the programmer know that this bug has been observed but cannot be reproduced.

Fixed - Once the programmer has identified the source of the problem, fixed it, and released a version to QA, the programmer will change the bug status to this state - pending QA re-test confirmation.

Re-Open - Once QA has received the fixed version it will re-test; if the bug re-occurs, QA will mark it as re-opened.

Closed - Once the bug has been retested and the fix is confirmed, QA will update the bug status to Closed.

  • The next two status options are outside the general bug life cycle, for obvious reasons.

Duplicate - When the same bug has been reported twice (by two QA engineers, or when one problem report arrives from a customer and another from QA), the bug will be marked as a duplicate and the fix will be tracked on only one of the duplicate bugs.

Behaviour - Not a Bug - This status marks that a certain "malfunction" is by design - basically indicating that QA may have misunderstood the design and the suggested solution according to the MRD/SRD... (http://qa-regarding-qa.blogspot.com/2009/01/qa-methodology.html)
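The life cycle above can also be written down as an explicit transition table, so the bug tracker can reject illegal moves; the exact transitions below are my reading of the list, not a standard:

```python
# A minimal sketch of the bug life cycle as a transition table;
# any move not listed is rejected as invalid. Duplicate and
# Behaviour - Not a Bug sit outside the cycle, as noted above.
TRANSITIONS = {
    "Open":             {"Fixed", "Non-Reproducible", "Duplicate", "Behaviour - Not a Bug"},
    "Non-Reproducible": {"Open", "Closed"},
    "Fixed":            {"Closed", "Re-Open"},
    "Re-Open":          {"Fixed"},
    "Closed":           set(),  # terminal state
}

def move(status, new_status):
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = move("Open", "Fixed")   # programmer releases a fix
s = move(s, "Re-Open")      # QA re-test fails
s = move(s, "Fixed")        # second fix
print(move(s, "Closed"))    # prints "Closed"
```

Keeping the allowed moves explicit is one way to make sure the whole team uses the list "in exactly the same manner".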

2009-01-28

QA and Support - what is the connection

In the general scheme of things, as I see it, there are two avenues between the manufacturer and the customer.

The first is from the manufacturer to the customer, in which a product is manufactured, then tested, and in the end released for the customer's use.

The second is from the customer to the manufacturer, in which the customer reports a problem to the manufacturer and is hopefully presented with a solution by the support team.

Obviously the two avenues must mix; the question is where and how often.

In the following section I'd like to present what I think is the proper manner of interaction.

When a problem arises from the field (the customer), the first task is information gathering; if this is performed well, then we are one step closer to resolving the problem.
In one of my previous entries I mentioned this when I noted that you have to know what questions to ask and what to look for.

One of the most popular support methods involves different TIER levels.

Each level is supposed to perform a sort of analysis and attempt to resolve the problem... only upon reaching a deadlock will the problem ticket be passed on to the next TIER level.

Each TIER has different capabilities and is intended to handle a different sort of problem:

TIER I - will usually handle small operational problems: electrical and other communication problems should be ruled out; if the problem has not been resolved at this level it shall be passed to the next.

TIER II - this level will usually handle all configuration malfunctions, in order to rule out problematic setup and system definitions. Once this option has been ruled out, the next support level has to be involved.

TIER III - at this level actual debugging of the problem must be done... for this, the support engineer will need the system configuration files and the relevant logs... all of these can be collected in advance by the previous TIER levels, in order to save time and face with the customer.
Once the configuration has been determined and an exact suspect scenario has been chosen, the support engineer needs to attempt to reproduce the defect for the benefit of the R&D team which will need to handle the problem.

Reproduction is critical because it will allow us to verify the fix once it has been implemented... but in order to do that, the details mentioned above are a must!!

The next phase in the support cycle is TIER IV.

TIER IV - this support level sits within R&D, and its role is to co-ordinate the fix and its release.
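The escalation flow above can be sketched as follows; the handler logic is of course invented, since real tiers are people rather than functions:

```python
# A minimal sketch of tiered escalation: each tier tries its own class
# of checks and the ticket moves on only upon deadlock. The "cause"
# matching is invented purely for illustration.
def tier1(ticket):  # rule out electrical / communication problems
    return ticket["cause"] == "cabling"

def tier2(ticket):  # rule out configuration and system definitions
    return ticket["cause"] == "configuration"

def tier3(ticket):  # debug with config files and logs, reproduce
    return ticket["cause"] == "software"

def escalate(ticket, tiers=(tier1, tier2, tier3)):
    for level, handler in enumerate(tiers, start=1):
        if handler(ticket):
            return f"resolved at TIER {level}"
    return "passed to TIER IV (R&D)"

print(escalate({"cause": "configuration"}))  # resolved at TIER 2
print(escalate({"cause": "design flaw"}))    # passed to TIER IV (R&D)
```

Note that the ticket carries its collected information with it from level to level, which is the data-collection point made in the summary below.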

Once the fix has been completed by the R&D team, it is up to QA to retest according to the scenario detected by the TIER III engineer.

In some cases QA may also handle the original reproduction instead of the TIER III engineer, but to do this effectively and successfully they must have all the relevant information prior to the reproduction attempt.

To summarize:
A support group MUST have an organized data / information collection and distribution method!

Without it, time is wasted at each support level starting over from the beginning, which in turn causes great overhead for the fix.

2009-01-21

Integration - is it QA or not

During the past couple of weeks I've been knee-deep in integrating two of our system components.
On a personal note, I do find it very interesting, since integration in itself does not follow all the strict QA methodology.
This in turn got me thinking about whether or not we can consider integration part of QA.

On the Pro side:
  • The system is tested for functionality against pre-defined input scenarios.
  • Problems are reported to developers for fixes.

On the Con side:
  • There is no regression testing.
  • There is no definition of test cycles / version release cycles.
  • There won't necessarily be official releases, but rather temporary patches.

My final conclusion is that I do believe Integration can be classified as a "Testing Cycle" of some sort.

My only concern in this respect is the following:
a given fact is that early integration basically translates to a non-working product.
A separate timetable must be allocated, and at least two weeks should be added to the regular time allocated for testing, since this is roughly the amount of time required for the integration to yield a working product.

Only upon meeting these conditions will we be able to complete real system testing and thus include an Integration Cycle in our test reports.

2009-01-17

QA - The Human factor

When discussing QA we cannot neglect talking about the people who will perform the task.

In my experience there are several types of people who do QA work; the types are shaped by companies' attitudes towards QA and their views of its importance.

Here in Israel there are 4 types of testers:

The Temp - This type of tester is usually in between: he will typically be fresh out of the military and on his way to the big trip or university.

He is looking for a way to earn some money for those purposes and cannot be depended on for long-term plans.



On His Way to Development - a close relative of the tester mentioned above. This one has usually already spent a couple of years studying and plans to start working in R&D; he is basically looking for a foothold in a company, in hopes that it will open the door to a development position.



Can't Find Anything Else - this type of tester has been forced into trying to find work in QA since he can't seem to find work elsewhere. The only reason he got hired for QA is that someone has very low standards for QA and a very poor opinion of its role in the development process. *



Serious QA - This is the type of person you would like to see handling your testing process.
This type of QA engineer will usually have some formal technical education; he is here to do QA work, will focus on being efficient and productive, and will invest the time in researching QA methodology.



* The question that pops to mind is obvious: why would anyone hire for QA work any type of tester other than the fourth?
The reason, as always, is complex, but the root cause is similar to the chicken-and-egg question: which came first?

Basically what happens is this: due to the lack of available type-4 testers, managers are forced to look at the other types. This in turn harms the professional reputation of QA engineers, which in turn causes HR in companies to lower the salary bar for QA, which... are you ready for this... causes fewer people to be interested in careers in QA.

And here we go again.

Just as an example:
a few years back, someone approached my QA manager regarding a vacant position in one of our teams. When my manager inquired about relevant experience, the developer who made the inquiry raised a brow, saying "what do you mean he knows nothing... what are QA supposed to know?"
The problem is that it is not just one person; the entire industry is plagued with this misconception.

Managers and HR do not help by setting low standards and low wages. Another example: in our QA group, which is in charge of testing a very telecom-oriented system, most of the QA do not hold any degree, and those who do have had no relevant telecom education.

2009-01-08

Bug Severity Guidelines

The bug severity indicates the impact on the SUT, according to the QA tester.

In order to avoid misunderstandings and inconsistencies, it is best that a table of sorts be presented and agreed upon among all QA testers and with the development group.

Please review such a table below:

Low
  • GUI ‘cosmetics’
  • General minor issues
Medium
  • Missing\wrong messages
  • Not perfect functionality
  • GUI friendliness
  • GUI - Spelling errors
  • Documentation errors
High
  • Feature malfunction
  • ‘Bad’ error messages
  • Fields validation
  • Translation problems
  • Missing InstallShield
  • Missing documentation
  • Wrong version
Show Stopper
  • System\Component Crash
  • Data loss
  • Missing Feature
  • Security intrusion
  • Performance failures
  • Customer determination
QA Stopper
  • Stops Testing progress
One can argue about the table above; the only critical thing is to have such a table, and to have all the relevant parties agree on it.

In a similar manner, there must be a way to report the bug's priority according to project requirements.
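One way to keep such an agreed table unambiguous is to store it as data that every tool and report reads from. A minimal sketch, with a few categories copied from the table above and an invented lookup helper:

```python
# The agreed severity table kept as data, so QA, R&D and the bug tracker
# all classify from the same source. Entries copied from the table above
# (abridged); the lookup helper is invented for the example.
SEVERITY_TABLE = {
    "Low":          ["GUI cosmetics", "General minor issues"],
    "Medium":       ["Missing/wrong messages", "GUI - Spelling errors"],
    "High":         ["Feature malfunction", "Missing documentation"],
    "Show Stopper": ["System/Component crash", "Data loss", "Missing Feature"],
    "QA Stopper":   ["Stops testing progress"],
}

def severity_of(symptom):
    """Look up the agreed severity for a reported symptom."""
    for severity, symptoms in SEVERITY_TABLE.items():
        if symptom in symptoms:
            return severity
    return None  # not in the table -> needs discussion between the parties

print(severity_of("Data loss"))  # Show Stopper
```

A symptom that falls outside the table returns `None`, which is a useful signal: it means the parties have not yet agreed on how to classify it.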

2009-01-07

QA Methodology

Once we have dispatched with most of the preliminaries, it is finally time to get to the bottom of QA: methodology, which is directly linked to documentation.
The document trail usually runs alongside the product life cycle, which I described in a previous blog entry (product life cycle).

Once the product manager has decided what the main features for the product / version are, he will release a document describing these requirements, called an MRD (Marketing Requirements Document).
Following this document, system engineering will usually produce an SRD/PRD (System / Product Requirements Document) - this document details which of the requirements are to be implemented and what the priorities are.
QA in turn will review this document and start preparing for the testing.

First, QA will generate an STP (System Test Plan); this will describe:
  • what is to be tested
  • the testing set-up
  • testing requirements (resources)
  • a general timetable, which should correlate with R&D releases
The STP is usually prepared by the QA manager / team leader.
Once the general STP has been released and approved, it is time to get down to the details of the testing: the STD (System/Software Test Document).
The STD will usually be prepared by the QA engineer assigned to perform the tests.

When the tester has finished writing the document, a DR (Design Review) will be held in order to approve the STD.
We must make sure the following attend the DR:
  • Product Manager
  • System Engineer
  • Developer / Developer team leader
  • QA tester
  • QA team leader
  • Project Manager - to provide customer-related info if necessary.
Once the version has been released to QA and testing has begun, upon completion or at predefined time frames the QA tester will issue an STR (System Test Report).
The STR will detail which tests were actually performed and, of those, which passed and which failed; for the failed tests we will include a short description of the fault and reference the relevant bug report.

Once the test cycles have completed, a release meeting will be called; in it, QA will present the last segment of documentation, the RN (Release Notes). These will include a list of the bugs fixed in the version and a list of the bugs still open, according to the bug severity criteria.

It is vital to understand that QA is not releasing the version, but rather giving a status report as to its quality; the decision whether or not to release is in the hands of the R&D manager and the Operational Manager, according to external constraints.
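The documentation trail described in this entry can be summarized as an ordered pipeline; the pairing of documents to owners follows the text above, while the data structure itself is just for illustration:

```python
# The MRD -> SRD/PRD -> STP -> STD -> STR -> RN trail as an ordered list
# of (document, owner) pairs, matching the flow described in the post.
DOCUMENT_TRAIL = [
    ("MRD", "Product Manager"),
    ("SRD/PRD", "System Engineering"),
    ("STP", "QA Manager / Team Leader"),
    ("STD", "QA Tester"),
    ("STR", "QA Tester"),
    ("RN", "QA"),
]

def next_document(current):
    """Which document follows `current` in the trail, if any."""
    names = [doc for doc, _ in DOCUMENT_TRAIL]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None

print(next_document("STP"))  # STD
```

Writing the trail down this way makes the hand-off points explicit: each document has one owner, and nobody starts the next document before the previous one is approved.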