Tuesday, October 28, 2008

Introduction to Software Testing - Part 2

Testing Techniques

We have seen different types of testing in Part 1. Now let us look at the different techniques of testing.

Black Box testing

Black box testing (data driven or input/output driven) is not based on any knowledge of internal design or code. Tests are based on requirements and functionality. Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:

1. Incorrect or missing functions,
2. Interface errors,
3. Errors in data structures or external database access,
4. Performance errors, and
5. Initialization and termination errors.

Range Testing: Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
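The guidelines above can be sketched with a small example. The validator below is hypothetical (the function name and the 18..60 range are invented for illustration); the point is that each equivalence class needs only one representative test value.

```python
def validate_age(age):
    """Hypothetical validator: accepts integer ages in the range 18..60 inclusive."""
    return isinstance(age, int) and 18 <= age <= 60

# Guideline 1: a range yields one valid and two invalid equivalence classes.
assert validate_age(30) is True    # valid class: any value inside the range
assert validate_age(10) is False   # invalid class: values below the range
assert validate_age(75) is False   # invalid class: values above the range
```

Any other value from the same class (say, 42 instead of 30) would exercise the same logic, which is why one representative per class suffices.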

Test Case Design for Equivalence Partitioning

1. A good test case reduces by more than one the number of other test cases that must be developed.
2. A good test case covers a large set of other possible cases.
3. Define classes of valid inputs.
4. Define classes of invalid inputs.

Boundary testing

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing solely on input conditions, boundary value analysis (BVA) also derives test cases from the output domain. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include the values a and b, and values just above and just below each of them.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers, and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.

Test Case Design for Boundary Value Analysis

Test cases on, just above, or just below the edges of input, output, and condition classes have a high probability of uncovering errors.
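Continuing the hypothetical age range from the equivalence example (18..60 is an invented boundary, not from any real specification), a BVA test set hammers the edges rather than the middle of the class:

```python
def accepts(age):
    # Hypothetical predicate for a field that accepts 18..60 inclusive
    return 18 <= age <= 60

# Boundary values for the range [18, 60]: a, b, just below a, just above b
boundary_cases = {17: False, 18: True, 60: True, 61: False}
for value, expected in boundary_cases.items():
    assert accepts(value) == expected
```

An off-by-one error (for example, writing `18 < age` instead of `18 <= age`) would slip past a mid-range equivalence test but is caught immediately by the test at 18.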

Error Guessing

Error Guessing is the process of using intuition and past experience to fill in gaps in the test data set. There are no rules to follow. The tester must review the test records with an eye towards recognizing missing conditions. Two familiar examples of error prone situations are division by zero and calculating the square root of a negative number. Either of these will result in system errors and garbled output.
Other cases where experience has demonstrated error proneness are the processing of variable length tables, calculation of median values for odd and even numbered populations, cyclic master file/data base updates (improper handling of duplicate keys, unmatched keys, etc.), overlapping storage areas, overwriting of buffers, forgetting to initialize buffer areas, and so forth. I am sure you can think of plenty of circumstances unique to your hardware/software environments and use of specific programming languages.
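The two familiar error-prone situations mentioned above can be sketched as error-guessed test data. The `safe_ratio` helper is hypothetical; the behavior of `math.sqrt` on a negative argument is standard Python:

```python
import math

def safe_ratio(num, den):
    """Hypothetical helper that guards against the classic divide-by-zero error."""
    if den == 0:
        return None
    return num / den

# Error-guessed data: a zero divisor and a negative square-root input
assert safe_ratio(10, 0) is None     # guarded: no ZeroDivisionError
assert safe_ratio(10, 2) == 5.0      # normal case still works

try:
    math.sqrt(-1)                    # sqrt of a negative number
    raised = False
except ValueError:
    raised = True
assert raised                        # Python raises ValueError: math domain error
```

Neither input would necessarily appear in an equivalence or boundary analysis of the requirements, which is exactly the gap error guessing fills.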

Error Guessing is as important as Equivalence partitioning and Boundary Analysis because it is intended to compensate for their inherent incompleteness. As Equivalence Partitioning and Boundary Analysis complement one another, Error Guessing complements both of these techniques.

White Box testing

White box testing (logic driven) is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions. White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. Guarantee that all independent paths within a module have been exercised at least once
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.

Path Testing

A path-coverage test allows us to exercise every transition between the program statements (and so every statement and branch as well).
• First we construct a program graph.
• Then we enumerate all paths.
• Finally we devise the test cases.

Possible criteria:

1. Exercise every path from entry to exit;
2. Exercise each statement at least once;
3. Exercise each case in every branch/case.
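A minimal sketch of criterion 1: the function below (invented for illustration) has two sequential decisions, so its program graph has four entry-to-exit paths, and we devise one test case per path.

```python
def classify(x):
    # Two sequential decisions -> 2 x 2 = 4 entry-to-exit paths
    if x < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# One test case per path through the program graph
assert classify(-2) == ("negative", "even")
assert classify(-3) == ("negative", "odd")
assert classify(4) == ("non-negative", "even")
assert classify(5) == ("non-negative", "odd")
```

Note that only two of these cases would already satisfy the weaker statement-coverage criterion 2; full path coverage requires all four.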

Condition testing

A condition test can use a combination of Comparison operators and Logical operators.
The Comparison operators compare the values of variables and this comparison produces a boolean result. The Logical operators combine booleans to produce a single boolean result that is the result of the condition test.

e.g. (a == b) Result is true if the value of a is the same as the value of b.

Myers: take each branch out of a condition at least once.
White and Cohen: for each relational operator e1 < e2 test all combinations of e1, e2 orderings. For a Boolean condition, test all possible inputs (!).

Branch and relational operator (BRO) testing enumerates categories of operator values. For example:

B1 || B2: test {B1=t, B2=t}, {t, f}, {f, t}
B1 || (e2 = e3): test {t, =}, {f, =}, {t, <}, {t, >}
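The second compound condition above can be sketched directly (the function is a made-up stand-in for a condition embedded in real code):

```python
def condition(b1, e2, e3):
    # Compound condition: logical OR of a boolean and a relational comparison
    return b1 or (e2 == e3)

# BRO categories for B1 || (e2 = e3): {t,=}, {f,=}, {t,<}, {t,>}
assert condition(True, 1, 1) is True    # {t, =}
assert condition(False, 1, 1) is True   # {f, =}
assert condition(True, 1, 2) is True    # {t, <}
assert condition(True, 2, 1) is True    # {t, >}
assert condition(False, 1, 2) is False  # a {f, <} case drives the false branch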

Loop Testing

For a single loop with zero minimum, N maximum, and no excluded values:

1. Try bypassing the loop entirely.
2. Try a negative value for the loop iteration variable.
3. One iteration through the loop.
4. Two iterations through the loop (some initialization problems can be uncovered only by two iterations).
5. A typical number of iterations.
6. One less than the maximum.
7. The maximum.
8. Try a value greater than the maximum.
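The cases above map directly onto a loop whose iteration count is controlled by a parameter. The function is a hypothetical unit under test with a maximum of 5 (the length of the data):

```python
def sum_first(items, n):
    """Hypothetical loop under test: sums the first n items of a list."""
    total = 0
    for i in range(n):
        total += items[i]
    return total

data = [1, 2, 3, 4, 5]                  # maximum meaningful n is 5
assert sum_first(data, 0) == 0          # bypass the loop entirely
assert sum_first(data, 1) == 1          # one iteration
assert sum_first(data, 2) == 3          # two iterations
assert sum_first(data, 3) == 6          # a typical number of iterations
assert sum_first(data, 4) == 10         # one less than the maximum
assert sum_first(data, 5) == 15         # the maximum
try:
    sum_first(data, 6)                  # greater than the maximum
    failed = False
except IndexError:
    failed = True
assert failed
```

The greater-than-maximum case is the one most often omitted in practice, and here it exposes the missing bounds check in the unit.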

Data Flow Testing

Def-use chains:

1. Def = a definition (assignment) of a variable.
2. Use = a use of that variable.
3. Def-use chains may cross control boundaries.
4. Testing: exercise every def-use chain at least once.
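A minimal sketch of a def-use chain crossing a control boundary (the discount rule is invented for illustration): `rate` has two definitions, one inside a branch, and both must reach the use at the return statement.

```python
def discount(total):
    rate = 0.0             # def 1 of rate
    if total > 100:
        rate = 0.25        # def 2 of rate (inside the branch: chain crosses it)
    return total * rate    # use of rate

# One test per def-use chain: each definition must reach the use
assert discount(50) == 0.0      # chain: def 1 -> use (branch not taken)
assert discount(200) == 50.0    # chain: def 2 -> use (branch taken)
```

Statement coverage alone could be satisfied while still missing one of these chains; data flow testing requires that every definition-to-use pairing be executed.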

Stubs for Testing

A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system.

Stubs for Top-Down Testing
4 basic types:
o Display a trace message
o Display parameter value(s)
o Return a value from a table
o Return table value selected by parameter
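A sketch of the fourth (and richest) stub type, combined with a trace message. The tax-lookup module and its rates are entirely hypothetical; the stub stands in for it so the caller can be tested top-down before the real module exists:

```python
# Stub for an unfinished tax-lookup module: returns a table value
# selected by its parameter, and prints a trace message when called.
TAX_TABLE = {"food": 0.0, "books": 0.05, "other": 0.18}

def tax_rate_stub(category):
    print(f"STUB tax_rate called with {category!r}")   # trace message
    return TAX_TABLE.get(category, 0.18)               # table value by parameter

def price_with_tax(price, category):
    """The unit actually under test; it calls the stub in place of the real module."""
    return round(price * (1 + tax_rate_stub(category)), 2)

assert price_with_tax(100, "books") == 105.0
assert price_with_tax(100, "food") == 100.0
```

When the real tax module is finished, only the stub is swapped out; the tests on `price_with_tax` remain valid.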

Drivers for Testing

A test harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation.
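A driver can be as small as a loop that feeds inputs to the unit under test and collects mismatches. This sketch is a deliberately minimal, hypothetical harness (real projects would use a framework such as unittest):

```python
def run_tests(unit, cases):
    """Minimal test driver: feeds each input to the unit and records failures."""
    failures = []
    for args, expected in cases:
        actual = unit(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

# Drive a unit (here the built-in abs) in isolation from the rest of the system
assert run_tests(abs, [((-3,), 3), ((0,), 0), ((7,), 7)]) == []
```

Drivers are the bottom-up counterpart of stubs: a stub replaces a callee that does not exist yet, while a driver replaces the caller.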
