Saturday, November 1, 2008

The Art of Test Case Authoring - Part 1

Test case authoring is one of the most complex and time-consuming activities for any test engineer.

The progress of a project depends on the quality of the test cases written. Test engineers need to take the utmost care while developing test cases and must follow the standard rules of test case authoring, so that the cases are easy to understand and execute.

The following module aims to provide insight into the fundamentals of test case authoring and into the techniques to adopt while authoring, such as writing test cases based upon functional specifications or use cases.

Definition: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
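To make these particulars concrete, here is a minimal sketch of such a record in Python. The field names mirror the definition above; the sample case and all of its values are hypothetical:

from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str        # unique test case ID
    name: str              # short descriptive title
    objective: str         # what the case is meant to verify
    setup: str             # test conditions / preconditions
    input_data: str        # input data requirements
    steps: list = field(default_factory=list)             # tester actions
    expected_results: list = field(default_factory=list)  # system responses

login_case = TestCase(
    identifier="TC-001",
    name="Valid login",
    objective="Verify that a registered user can log in",
    setup="User account 'alice' exists and is active",
    input_data="Username 'alice', password 'secret'",
    steps=["Enter the username and password", "Click the Login button"],
    expected_results=["Credentials are accepted", "The home page is displayed"],
)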

Attributes of a good test case:

Accuracy: The test cases should test what their descriptions say they will do.

It should always be clear whether the tester is doing something or the system is doing it. If a tester reads, "The button is pressed," does that mean he or she should press the button, or does it mean to verify that the system displays it as already pressed? One of the fastest ways to confuse a tester is to mix up actions and results. To avoid confusion, actions should always be entered under the ‘Steps’ column, and results under the ‘Results’/’Expected Results’ column. What the tester does is always an action. What the system displays or does is always a result.
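For example (the button and message here are illustrative):

Step: Click the Save button.
Expected Result: The system displays the message 'Record saved.'

The first line is something the tester does; the second is something the system does. Keeping each in its own column leaves no room for confusion.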

Economical

Test cases should contain only the steps or fields needed for their purpose. They should not give a guided tour of the software.

How long should a test case be?

Generally, a good length for step-by-step cases is 10-15 steps. There are several benefits to keeping tests this short:

1. It takes less time to test each step in short cases.
2. The tester is less likely to get lost, make mistakes, or need assistance.
3. The test manager can accurately estimate how long testing will take.
4. Results are easier to track.

We should not try to cheat on the standard of 10-15 steps by cramming a lot of action into one step. A step should be one clear input or tester task. You can always tag a simple finisher onto the same step, such as clicking OK or pressing Enter. A step can also include a set of logically related inputs. You don't have to record a result for each step if the system doesn't respond to it.
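For example, "Enter the customer's name, address, and phone number, then click Save" works as a single step (the fields are illustrative): the three entries are logically related inputs, and the click is a simple finisher.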

Repeatable, self-standing

A test case is a controlled experiment. It should get the same results every time no matter who tests it. If only the writer can test it and get the result, or if the test gets different results for different testers, it needs more work in the setup or actions.
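For teams that automate their cases, the same rule can be shown in code. Below is a minimal Python sketch using pytest; the login helper and the account data are stand-ins for a real application, not an actual API. Because the case creates its own setup, any tester gets the same result on any run:

import pytest

def login(username, password, accounts):
    # Stand-in for driving the application under test.
    return accounts.get(username) == password

@pytest.fixture
def accounts():
    # Self-standing setup: the case creates the account it needs
    # instead of relying on data left behind in someone's environment.
    return {"testuser": "secret"}

def test_login_succeeds(accounts):
    # Deterministic: any tester, any machine, same result.
    assert login("testuser", "secret", accounts) is True

def test_login_rejects_bad_password(accounts):
    assert login("testuser", "wrong", accounts) is False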

Appropriate

A test case has to be appropriate for the testers and the environment. If it is theoretically sound but requires skills that none of the testers have, it will sit on the shelf. Even if you know who will run it the first time, you need to consider who will run it down the road, during maintenance and regression testing.

Traceable

You have to know what requirement the case is testing. It may meet all the other standards, but if its result, pass or fail, doesn't matter, why bother?
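One lightweight way to keep that link visible, sketched here with pytest (the requirement ID and the authenticate helper are hypothetical), is to tag each case with the requirement it verifies, so that a pass or fail points straight back to the specification:

import pytest

def authenticate(username, password):
    # Stand-in for the application under test.
    return bool(username) and bool(password)

# Hypothetical requirement ID from the specification; register the
# 'requirement' mark in pytest.ini to silence the unknown-mark warning.
@pytest.mark.requirement("REQ-017")
def test_login_requires_password():
    assert authenticate("alice", "") is False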

The above list is broad but not exhaustive. Based on individual requirements, more standards may be added to it.

Most common mistakes while writing test cases

Each writer's test case defects tend to cluster around certain characteristic mistakes. If you are writing cases or managing writers, don't wait until the cases are all done before finding these mistakes. Review the cases every day or two, looking for the faults that will make them harder to test and maintain. Chances are you will discover that the opportunities to improve cluster around one of the seven most common test case mistakes:

1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that have changed or no longer exist
5. Unclear whether tester or system does action
6. Unclear what is a pass or fail result
7. Failure to clean up (see the sketch after this list)
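The last of these deserves a sketch of its own. In automated terms, cleanup belongs in teardown code that runs whether the case passes or fails. A minimal Python/pytest illustration, with a dictionary standing in for persistent application state:

import pytest

records = {}  # stand-in for persistent application state

@pytest.fixture
def temp_record():
    records["scratch"] = {"name": "scratch data"}
    yield records["scratch"]
    # Teardown runs even if the test failed, so no scratch data
    # is left behind to break the next tester's run.
    records.pop("scratch", None)

def test_record_can_be_renamed(temp_record):
    temp_record["name"] = "renamed"
    assert records["scratch"]["name"] == "renamed"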

Test cases can be authored using functional requirements or use cases. Let us now walk through test case authoring using functional requirements.

Authoring test cases using functional specifications

This means writing test cases for an application with the intent to uncover nonconformance with the functional specifications. This type of testing is central to most software test efforts, as it checks whether an application functions in accordance with its specified requirements. Additionally, some test cases may be written to test nonfunctional aspects of the application, such as performance, security, and usability.

The importance of having testable, complete, and detailed requirements cannot be overemphasized. In practice, however, having a perfect set of requirements at the tester's disposal is a rarity. In order to create effective functional test cases, the tester must understand the details and intricacies of the application. When these details and intricacies are inadequately documented in the requirements, the tester must conduct an analysis of them.

Even when detailed requirements are available, the flow and dependency of one requirement to the other is often not immediately apparent. The tester must therefore explore the system in order to gain a sufficient understanding of its behavior to create the most effective test cases.

Effective test design includes test cases that rarely overlap, but instead provide effective coverage with minimal duplication of effort (although duplication sometimes cannot be entirely avoided in assuring complete testing coverage). Apart from avoiding duplication of work, the test team should review the test plan and design in order to:

• Identify any patterns of similar actions or events used by several transactions. Given this information, test cases should be developed in a modular fashion so that they can be reused and recombined to execute various functional paths, avoiding duplication of test-creation effort (a sketch of this modular approach follows this list).

• Determine the order or sequence in which specific transactions must be tested, to accommodate preconditions necessary to execute a test procedure, such as database configuration, or other requirements that result from control or workflow.

• Create a test procedure relationship matrix that incorporates the flow of the test procedures based on the preconditions and postconditions necessary to execute a test case. A test-case relationship diagram that shows the interactions of the various test procedures, such as the high-level test procedure relationship diagram created during test design, can improve the testing effort.
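The modular approach from the first bullet can be sketched in Python; the shared steps and account IDs below are hypothetical stand-ins for driving a real application. Common actions are written once and recombined into different functional paths:

# Shared steps, written once (stand-ins for the real application).
def log_in(session, user):
    session["user"] = user

def open_account(session, account_id):
    session["account"] = account_id

def submit_payment(session, amount):
    return {"account": session["account"], "amount": amount}

# Two functional paths recombine the same modules, with no
# duplicated test-creation effort.
def test_payment_from_checking():
    session = {}
    log_in(session, "alice")
    open_account(session, "CHK-1")
    assert submit_payment(session, 50)["account"] == "CHK-1"

def test_payment_from_savings():
    session = {}
    log_in(session, "alice")
    open_account(session, "SAV-2")
    assert submit_payment(session, 75)["account"] == "SAV-2"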

Another consideration for creating test cases effectively is to identify and review critical and high-risk requirements, and to test the most important functions early in the development schedule. It is a waste of time to invest effort in test procedures that verify functionality rarely executed by the user while failing to create test procedures for functions that pose high risk or are executed most often.

To sum up, effective test-case design requires understanding of system variations, flows, and scenarios. It is often difficult to wade through page after page of requirements documents in order to understand connections, flows, and interrelationships. Analytical thinking and attention to detail are required to understand the cause-and-effect connections within the system intricacies. It is insufficient to design and develop high-level test cases that execute the system only at a high level; it is important to also design test procedures at the detailed, gray-box level.
