Tuesday, November 4, 2008

The Triangle Test Exercise

In this exercise, you get your first chance to build a test case. Even if you think of yourself as an experienced tester, you might want to give it a try. It’s amazing how few people — even people who have been testing for years — get this exercise right.

Exercise: The Triangle Test

Suppose you’re told to test a very simple program. This program accepts three integers as inputs or arguments. These integers represent the lengths of a triangle’s sides. The program prints “Equilateral” (three equal sides), “Isosceles” (two equal sides), or “Scalene” (no equal sides), as shown in the figure below.


Three types of triangles

Your assignment is to write an effective and efficient set of test cases. By effective, I mean that the set of tests finds common bugs. By efficient, I mean that the set of tests finds those bugs with a reasonable amount of effort. By test case, I mean tester action, data, and expected result. Here the action is usually to input some data, but it might be other actions as well.

A template is provided for your solution. I suggest you format your solution to look the same as the template. I suggest 30 minutes as a time limit. When you’re done, continue through the rest of this chapter to see my answers.

Author’s Triangle Test Solution

This exercise is a famous one, found in the first book on software testing, Glenford Myers’s The Art of Software Testing. In it, Myers gives this seemingly trivial example and then shows many possible solutions to it.

To come up with my solution, shown in the table below, I used a combination of equivalence partitioning and boundary value analysis. Both techniques were defined in my earlier posts.



My solution focuses on testing functionality and the user interface only. There may be other risks to system quality that concern you; these can be discussed at a later stage.
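To make the two techniques concrete, here is a minimal Python sketch. This is an illustration, not the author's actual solution table: the function `classify_triangle` is a hypothetical stand-in for the program under test, and the "Not a triangle" output for invalid inputs is my own assumption (the spec quoted above names only the three valid outputs), added so that the boundary tests have an expected result.

```python
def classify_triangle(a, b, c):
    """Hypothetical stand-in for the program under test.

    Returns 'Equilateral', 'Isosceles', 'Scalene', or (an assumed
    behavior) 'Not a triangle' for inputs that violate the triangle
    inequality or are non-positive.
    """
    sides = sorted((a, b, c))
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "Not a triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"

# Representative test cases: (inputs, expected result).
# One value per equivalence class, plus values at the boundaries.
cases = [
    ((3, 3, 3), "Equilateral"),     # equivalence class: all sides equal
    ((3, 3, 4), "Isosceles"),       # two sides equal
    ((3, 4, 5), "Scalene"),         # no sides equal
    ((1, 2, 3), "Not a triangle"),  # boundary: a + b == c (degenerate)
    ((0, 4, 5), "Not a triangle"),  # boundary: zero-length side
    ((-3, 4, 5), "Not a triangle"), # invalid: negative side
]

for inputs, expected in cases:
    assert classify_triangle(*inputs) == expected
```

Note how each test case pairs concrete input data with an explicit expected result, matching the definition of a test case given above.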

Note that my solution is only one of a large set of possible correct answers. Some generic rules for determining a good set of tests are found in Myers’s book, and you’ll look at those rules later in this book. Myers also provides solutions for this exercise. Because he wrote his book in the days of greenscreen mainframe programs, he did not have to cover some of the user interface tests that are included in my solution.

So, how did you do? What did my solution test that you didn’t? What did you test that my solution didn’t?

If you missed a few of these, don’t feel bad. If you keep practicing, you will find more and more scenarios, and the way you look at the application you are testing will also change.

Doing test design and implementation without reference materials often leads to gaps in testing. Doing test design and implementation under the tight, often stressful time constraints that exist during the test execution makes such gaps even more likely, even for experienced testers. This underscores the need for careful analysis, design, and implementation of tests up front, which can be combined during test execution with reactive techniques to find even more bugs.

Here are a number of additional questions one would want the tests to address: “Did you worry about order dependence in the input values? Maybe the program reacts differently to a negative number in the first position versus the second position versus the third position versus all positions. Maybe we could check the requirements. Did you worry about certain special characters being allowed or disallowed? What if the program correctly recognizes that a ‘*’ is a special character but thinks a ‘/’ is acceptable? Did you test for all possible special characters, in all positions? Capital letters? Zeros? Nulls?” The possibilities are almost endless, aren’t they? Risk analysis helps us recognize when we’ve covered enough.
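The order-dependence question above is easy to probe mechanically. As a sketch (the `classify` function here is again a hypothetical stand-in for the program under test, not the real one), we can feed the same bad value in every input position and check that the verdict never changes:

```python
from itertools import permutations

def classify(a, b, c):
    """Hypothetical stand-in for the program under test."""
    sides = sorted((a, b, c))
    if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
        return "Not a triangle"
    return "Equilateral" if a == b == c else (
        "Isosceles" if len({a, b, c}) == 2 else "Scalene")

# Every arrangement of one negative value among otherwise valid sides:
results = {classify(*p) for p in permutations((-1, 4, 5))}

# If the program is order-independent, all six orderings agree.
assert results == {"Not a triangle"}
```

The same pattern extends to any "suspicious value in every position" question: generate the permutations, run them all, and assert that the set of outcomes has exactly one member.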

Saturday, November 1, 2008

The Art of Test Case Authoring - Part 2

In Part 1 we saw how to author test cases using functional specifications. Now let us look at authoring test cases using use cases.

Authoring Test Cases Using Use Cases

A use case is a sequence of actions performed by a system that, taken together, produces a result of value to a system user. While use cases are often associated with object-oriented systems, they apply equally well to most other types of systems.

Use cases and test cases work well together in two ways: If the use cases for a system are complete, accurate, and clear, the process of deriving the test cases is straightforward. And if the use cases are not in good shape, the attempt to derive test cases will help to debug the use cases.

Advantages of Test Cases derived from Use Cases

Traditional test case design techniques include analyzing the functional specifications, the software paths, and the boundary values. These techniques are all valid, but use case testing offers a new perspective and identifies test cases which the other techniques have difficulty seeing.

Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system (that moment you realize “we can’t get there from here!”). They also help uncover integration bugs, caused by the interaction and interference of different features, which individual feature testing would not see. The use case method supplements (but does not supplant) the traditional test case design techniques.

What should one know before converting use cases into test cases?

• Business logic and terminology of the vertical.
• Technical complexities and environment compatibilities of the application.
• Limitations of the application and its design.
• Software testing experience.

How should one approach deriving test cases from use cases?

• Read and understand the objective of the use case.
• Identify the conditions involved in the use case.
• Identify the relations between the conditions within a use case.
• Identify the dependencies of one use case on another.
• Check the functional flow.
• If you suspect an issue in any manner, get it resolved with your client or design team.
• Break down the positive and negative test scenarios from each condition.
• Collect test data for the identified scenarios.
• Prepare a high-level test case index with unique test IDs.
• If you have a prototype of the application, compare the test scenarios with the prototype and review the test case index.
• Convert the test scenarios into test cases.

The first step is to develop the Use Case topics from the functional requirements of the Software Requirement Specification. Each Use Case topic is depicted as an oval labeled with the Use Case name.

The Use Case diagram just provides a quick overview of the relationship of actors to Use Cases. The meat of the Use Case is the text description. This text will contain the following:

Name
Brief Description
SRS Requirements Supported
Pre & Post Conditions
Event Flow

In the first iteration of Use Case definition, the topic, a brief description, and the actors for each case are identified and consolidated. In the second iteration the Event Flow of each Use Case can be fleshed out. The Event Flow is, in effect, the requirements specification personified and role-played. The requirements in the Software Requirement Specification are each uniquely numbered so that they can be accounted for in verification testing. These requirements should be mapped to the Use Case that satisfies them, for accountability.

The Pre-Condition specifies the required state of the system prior to the start of the Use Case. This can be used for a similar purpose in the Test Case. The Post-Condition is the state of the system after the actor interaction. This may be used for test pass/fail criteria.

The event flow is a description (usually a list) of the steps of the actor’s interaction with the system and the system’s required response. Recall that the system is viewed as a black box. The event flow contains exceptions, which may cause alternate paths through the event flow. The following is an example of a Use Case for telephone systems.
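The original telephone-system example did not survive in this copy. As a hypothetical substitute (the "Place Call" scenario, its field names, and the SRS requirement IDs are my own illustration, not from the source), a use case's text description and its derivation into a test case skeleton might look like this:

```python
# Hypothetical "Place Call" use case for a telephone system,
# illustrating the text-description fields listed above.
use_case = {
    "name": "Place Call",
    "description": "A subscriber dials a number and is connected.",
    "srs_requirements": ["SRS-14", "SRS-15"],   # illustrative IDs
    "precondition": "Handset is on-hook and line is idle.",
    "postcondition": "Two-way voice path is established.",
    "event_flow": [
        ("Subscriber lifts handset", "System plays dial tone"),
        ("Subscriber dials a valid number", "System rings far end"),
        ("Far end answers", "System connects the voice path"),
    ],
    "exceptions": ["Dialed number is busy", "Subscriber hangs up mid-dial"],
}

def use_case_to_test_case(uc):
    """Derive a skeleton test case: the precondition becomes setup,
    each event-flow pair becomes a step with an expected result,
    and the postcondition becomes the pass/fail criterion."""
    return {
        "setup": uc["precondition"],
        "steps": [{"action": a, "expected": e} for a, e in uc["event_flow"]],
        "pass_criteria": uc["postcondition"],
    }

tc = use_case_to_test_case(use_case)
assert len(tc["steps"]) == len(use_case["event_flow"])
```

This mirrors the mapping described above: pre-condition to test setup, event flow to steps, and post-condition to pass/fail criteria.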

Test Case Management and Test Case Authoring Tools

Once the test cases are developed, they need to be maintained properly to avoid confusion about which test cases have been executed and which have not, and which have passed or failed. That is, the status of the test cases needs to be maintained.
So the most important activity for protecting the value of test cases is to maintain them so that they remain testable. They should be maintained after each testing cycle, since testers will find defects in the test cases as well as in the software.

Test cases lost or corrupted by poor versioning and storage defeat the whole purpose of making them reusable. Configuration management (CM) of test cases should be handled by the organization or project rather than by test management. If the organization does not have this level of process maturity, the test manager or test writer needs to supply it. Either the project or the test manager should protect valuable test case assets with the following configuration management standards:

• Naming and numbering conventions
• Formats and file types
• Versioning
• Test objects needed by the case, such as databases
• Read-only storage
• Controlled access
• Off-site backup
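As a small sketch of the first two standards in practice (the ID format `TC-<PROJECT>-<AREA>-<NNNN>-v<version>` is my own illustrative convention, not one prescribed by the text), a naming-and-numbering convention can be made checkable so that nonconforming IDs are caught before test cases are filed:

```python
import re

# Hypothetical convention: TC-<PROJECT>-<AREA>-<NNNN>-v<version>
PATTERN = re.compile(r"^TC-[A-Z]{2,8}-[A-Z]{2,8}-\d{4}-v\d+$")

def make_test_id(project, area, number, version=1):
    """Build a test case ID that conforms to the convention."""
    return f"TC-{project.upper()}-{area.upper()}-{number:04d}-v{version}"

tid = make_test_id("phone", "call", 7)
assert tid == "TC-PHONE-CALL-0007-v1"
assert PATTERN.match(tid)
```

Encoding the version number in the ID supports the versioning standard as well: an updated case gets a new `-v2` identifier rather than silently overwriting its predecessor.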

Test Case Authoring Tools

Improving productivity with test management software:

Software designed to support test authoring is the single greatest productivity booster for writing test cases. It has these advantages over word processing, database, or spreadsheet software:

• Makes writing and outlining easier
• Facilitates cloning of cases and steps
• Makes it easy to add, move, and delete cases and steps
• Automatically numbers and renumbers
• Prints tests in easy-to-follow templates
Test authoring is usually included in off-the-shelf test management software, or it could be custom written. Test management software usually contains more features than just test authoring. When you factor them into the purchase, they offer a lot of power for the price. If you are shopping for test management software, it should have all the usability advantages listed just above, plus additional functions:

• Exports tests to common formats
• Multi-user
• Tracks test-writing progress and testing progress
• Tracks test results, or ports to a database or defect tracker
• Links to requirements and/or creates coverage matrices
• Builds test sets from cases
• Allows flexible security

There are many test case authoring tools available in the market today. Here we will limit the discussion to Mercury’s Test Director and the in-house tool TestLinks.

Mercury Interactive’s Test Director

By far the most familiar tool for maintaining Test Cases is Mercury Interactive’s Test Director. Test Director helps organizations deploy high-quality applications more quickly and effectively. It has four modules—Requirements Manager, Test Plan, Test Lab and Defects Manager. These allow for a smooth information flow between various testing stages. The completely web-enabled Test Director supports high levels of communication and collaboration among distributed testing teams.

Features & benefits of Test Director

Supports the Entire Testing Process

Test Director incorporates all aspects of the testing process—requirements management, planning, scheduling, running tests, issue management and project status analysis—into a single browser-based application.

Provides Anytime, Anywhere Access to Testing Assets

Using Test Director’s Web interface, testers, developers and business analysts can participate in and contribute to the testing process by collaborating across geographic and organizational boundaries.

Provides Traceability Throughout the Testing Process

Test Director links requirements to test cases, and test cases to issues, to ensure traceability throughout the testing cycle. When a requirement changes or a defect is fixed, the tester is notified of the change.

Integrates with Third-Party Applications

Whether you're using an industry standard configuration management solution, Microsoft Office, or a homegrown defect management tool, any application can be integrated into Test Director. Through its open API, Test Director preserves your investment in existing solutions and enables you to create an end-to-end lifecycle-management solution.

Manages Manual and Automated Tests

Test Director stores and runs both manual and automated tests, and can help jumpstart your automation project by converting manual tests to automated test scripts.

Accelerates Testing Cycles

Test Director’s Test Lab Manager accelerates the test execution cycles by scheduling and running tests automatically—unattended, even overnight. The results are reported into Test Director’s central repository, creating an accurate audit trail for analysis.

Facilitates a Consistent and Repeatable Testing Process

By providing a central repository for all testing assets, Test Director facilitates the adoption of a more consistent testing process, which can be repeated throughout the application lifecycle or shared across multiple applications or lines of business (LOBs).

Provides Analysis and Decision Support Tools

Test Director’s integrated graphs and reports help analyze application readiness at any point in the testing process. Using information about requirements coverage, planning progress, run schedules or defect statistics, QA managers are able to make informed decisions on whether the application is ready to go live.

The Art of Test Case Authoring - Part 1

Test case authoring is one of the most complex and time-consuming activities for any test engineer.

The progress of the project depends on the quality of the test cases written. The test engineers need to take utmost care while developing the test cases and must ensure that they follow the standard rules of test cases authoring, so that they are easy to understand and implement.

The following module aims at providing an insight into the fundamentals of test case authoring and the techniques to be adopted while authoring, such as writing test cases based upon functional specifications or using use cases.

Definition: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

Attributes of a good test case:

Accuracy: The test cases should test what their descriptions say they will do.

It should always be clear whether the tester is doing something or the system is doing it. If a tester reads, "The button is pressed," does that mean he or she should press the button, or does it mean to verify that the system displays it as already pressed? One of the fastest ways to confuse a tester is to mix up actions and results. To avoid confusion, actions should always be entered under the ‘Steps’ column, and results under the ‘Results’/’Expected Results’ column. What the tester does is always an action. What the system displays or does is always a result.
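The action/result separation described above can be sketched as a data structure; the field names `action` and `expected` are my own illustration of the Steps and Expected Results columns, not a prescribed schema:

```python
# Sketch: keep tester actions and system responses in separate
# fields so a reader never has to guess who does what.
step = {
    "action": "Press the Save button",               # what the tester does
    "expected": "A confirmation message is displayed",  # what the system does
}

# Ambiguous phrasing like "The button is pressed" has no single home:
# it must be rewritten either as an action ("Press the button") or as
# a result ("The button appears pressed") before it can be filed.
def is_well_formed(s):
    """A step is well formed only when the action field is filled in;
    the expected field names the system's response, if any."""
    return bool(s.get("action"))

assert is_well_formed(step)
assert not is_well_formed({"expected": "The button is pressed"})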

Economical

Test Cases should have only the steps or fields needed for their purpose. They should not give a guided tour of the software.

How long should a test case be?

Generally, a good length for step-by-step cases is 10-15 steps. There are several benefits to keeping tests this short:

1. It takes less time to test each step in short cases.
2. The tester is less likely to get lost, make mistakes, or need assistance.
3. The test manager can accurately estimate how long it will take to test.
4. Results are easier to track.

We should not try to cheat on the 10-15 step standard by cramming a lot of action into one step. A step should be one clear input or tester task. You can always tag a simple finisher, such as a click or a key press, onto the same step. A step can also include a set of logically related inputs. You don't have to have a result for each step if the system doesn't respond to the step.

Repeatable, self-standing

A test case is a controlled experiment. It should get the same results every time no matter who tests it. If only the writer can test it and get the result, or if the test gets different results for different testers, it needs more work in the setup or actions.

Appropriate

A test case has to be appropriate for the testers and the environment. If it is theoretically sound but requires skills that none of the testers have, it will sit on the shelf. Even if you know who is testing the first time, you need to consider what comes down the road: maintenance and regression.

Traceable

You have to know what requirement the case is testing. It may meet all the other standards, but if its result, pass or fail, doesn't matter, why bother?

The above list is representative but not exhaustive. Based on individual requirements, more standards may be added to it.

Most Common mistakes while writing Test Cases

In each writer's work, test case defects will cluster around certain writing mistakes. If you are writing cases or managing writers, don't wait until the cases are all done before finding these mistakes. Review the cases every day or two, looking for the faults that will make them harder to test and maintain. Chances are you will discover that the opportunities to improve cluster in one of the seven most common test case mistakes:

1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that changed or no longer exist
5. Unclear whether tester or system does action
6. Unclear what is a pass or fail result
7. Failure to clean up
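Two of the mistakes above are mechanically checkable, so a daily review can be partly automated. Here is a sketch (the `review` function and its field names are my own illustration): it flags cases that exceed the 10-15 step guideline and steps that specify neither an action nor an expected result:

```python
def review(test_case, max_steps=15):
    """Flag mechanically detectable test case mistakes:
    mistake 1 (case too long) and a form of mistake 3/5
    (a step with neither an action nor an expected result)."""
    problems = []
    if len(test_case["steps"]) > max_steps:
        problems.append(f"too long: {len(test_case['steps'])} steps")
    for i, step in enumerate(test_case["steps"], 1):
        if not step.get("action") and not step.get("expected"):
            problems.append(f"step {i}: neither action nor result")
    return problems

# A case crammed with 20 steps trips the length check:
tc = {"steps": [{"action": f"do thing {i}", "expected": "ok"}
                for i in range(20)]}
assert review(tc) == ["too long: 20 steps"]

# A short, well-formed case passes clean:
assert review({"steps": [{"action": "open app", "expected": "home screen"}]}) == []
```

The remaining mistakes (stale field names, unclear pass/fail criteria, missing cleanup) still need a human reviewer; the point of the sketch is only to take the rote checks off that reviewer's plate.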

Test cases can be authored using Functional Requirements or Use Cases. Now let us walk through the test case authoring using Functional Requirements:

Authoring test cases using functional specifications

This means writing test cases for an application with the intent of uncovering nonconformance with functional specifications. This type of testing activity is central to most software test efforts, as it tests whether an application is functioning in accordance with its specified requirements. Additionally, some test cases may be written to test nonfunctional aspects of the application, such as performance, security, and usability.

The importance of having testable, complete, and detailed requirements cannot be overemphasized. In practice, however, having a perfect set of requirements at the tester's disposal is a rarity. In order to create effective functional test cases, the tester must understand the details and intricacies of the application. When these details and intricacies are inadequately documented in the requirements, the tester must conduct an analysis of them.

Even when detailed requirements are available, the flow and dependency of one requirement to the other is often not immediately apparent. The tester must therefore explore the system in order to gain a sufficient understanding of its behavior to create the most effective test cases.

Effective test design includes test cases that rarely overlap, but instead provide effective coverage with minimal duplication of effort (although duplication sometimes cannot be entirely avoided in assuring complete testing coverage). Apart from avoiding duplication of work, the test team should review the test plan and design in order to:

• Identify any patterns of similar actions or events used by several transactions. Given this information, test cases should be developed in a modular fashion so that they can be reused and recombined to execute various functional paths, avoiding duplication of test-creation efforts.

• Determine the order or sequence in which specific transactions must be tested to accommodate preconditions necessary to execute a test procedure, such as database configuration, or other requirements that result from control or workflow.

• Create a test procedure relationship matrix that incorporates the flow of the test procedures based on preconditions and postconditions necessary to execute a test case. A test-case relationship diagram that shows the interactions of the various test procedures, such as the high-level test procedure relationship diagram created during test design, can improve the testing effort.

Another consideration for effectively creating test cases is to determine and review critical and high-risk requirements by testing the most important functions early in the development schedule. It can be a waste of time to invest efforts in creating test procedures that verify functionality rarely executed by the user, while failing to create test procedures for functions that pose high risk or are executed most often.

To sum up, effective test-case design requires understanding of system variations, flows, and scenarios. It is often difficult to wade through page after page of requirements documents in order to understand connections, flows, and interrelationships. Analytical thinking and attention to detail are required to understand the cause-and-effect connections within the system intricacies. It is insufficient to design and develop high-level test cases that execute the system only at a high level; it is important to also design test procedures at the detailed, gray-box level.