Tuesday, November 4, 2008

The Triangle Test Exercise

In this exercise, you get your first chance to build a test case. Even if you think of yourself as an experienced tester, you might want to give it a try. It’s amazing how few people — even people who have been testing for years — get this exercise right.

Exercise: The Triangle Test

Suppose you’re told to test a very simple program. This program accepts three integers as inputs or arguments. These integers represent the lengths of a triangle’s sides. The program prints “Equilateral” (three equal sides), “Isosceles” (two equal sides), or “Scalene” (no equal sides), as shown in the figure below.


[Figure: Three types of triangles]
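To make the discussion concrete, here is a minimal Python sketch of the kind of program under test. This is only an assumed implementation for reference, not the actual program; in particular, the "Not a triangle" branch is my assumption about how invalid inputs might be handled, since the spec above mentions only the three triangle types.

def classify_triangle(a: int, b: int, c: int) -> str:
    # Assumed validity check: sides must be positive and satisfy the
    # triangle inequality (the two shorter sides must sum to more
    # than the longest side).
    x, y, z = sorted([a, b, c])
    if x <= 0 or x + y <= z:
        return "Not a triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"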

Your assignment is to write an effective and efficient set of test cases. By effective, I mean that the set of tests finds common bugs. By efficient, I mean that the set of tests finds those bugs with a reasonable amount of effort. By test case, I mean tester action, data, and expected result. Here the action is usually to input some data, but it might be other actions as well.

A template is provided for your solution. I suggest you format your solution to look the same as the template. I suggest 30 minutes as a time limit. When you’re done, continue through the rest of this chapter to see my answers.

Author’s Triangle Test Solution

This exercise is a famous one, found in the first book on software testing, Glenford Myers’s The Art of Software Testing. In it, Myers gives this seemingly trivial example and then shows many possible solutions to it.

To come up with my solution, shown in the table below, I used a combination of equivalence partitioning and boundary value analysis, both of which were defined in my earlier posts.



My solution focuses on testing functionality and the user interface only. Other risks to system quality may concern you as well; those can be discussed at a later stage.
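To give a flavor of the solution without reproducing the full table, here are a few representative cases, expressed against the hypothetical classify_triangle sketch above. The inputs and expected strings are illustrative assumptions, not the complete answer set.

# Illustrative (input, expected output) pairs, not the full solution.
cases = [
    ((5, 5, 5), "Equilateral"),      # all sides equal
    ((5, 5, 3), "Isosceles"),        # equal pair in each position
    ((5, 3, 5), "Isosceles"),
    ((3, 5, 5), "Isosceles"),
    ((3, 4, 5), "Scalene"),          # no equal sides
    ((1, 2, 3), "Not a triangle"),   # boundary: a + b == c
    ((0, 4, 5), "Not a triangle"),   # boundary: zero-length side
    ((-3, 4, 5), "Not a triangle"),  # invalid: negative length
]

for (a, b, c), expected in cases:
    assert classify_triangle(a, b, c) == expected, (a, b, c)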

Note that my solution is only one of a large set of possible correct answers. Some generic rules for determining a good set of tests are found in Myers’s book, and you’ll look at those rules later in this book. Myers also provides solutions for this exercise. Because he wrote his book in the days of green-screen mainframe programs, he did not have to cover some of the user interface tests that are included in my solution.

So, how did you do? What did my solution test that you didn’t? What did you test that my solution didn’t?

If you missed a few of these, don’t feel bad. If you keep practicing, you will find more and more scenarios, and the way you look at the application you are testing will also change.

Doing test design and implementation without reference materials often leads to gaps in testing. Doing test design and implementation under the tight, often stressful time constraints that exist during test execution makes such gaps even more likely, even for experienced testers. This underscores the need for careful analysis, design, and implementation of tests up front, which can be combined during test execution with reactive techniques to find even more bugs.

Here are a number of additional questions one would want the tests to address: “Did you worry about order dependence in the input values? Maybe it reacts differently to a negative number if it’s in the first position versus the second position versus the third position versus all positions. Maybe we could check the requirements. Did you worry about certain special characters being allowed or disallowed? What if it correctly recognizes that a ‘*’ is a special character but thinks a ‘/’ is acceptable? Did you test for all possible special characters, in all positions? Capital letters? Zeros? Nulls?” The possibilities are really almost endless, aren’t they? Risk analysis helps us recognize when we’ve covered enough.

Saturday, November 1, 2008

The Art of Test Case Authoring - Part 2

In Part 1 we saw how to author test cases using functional specifications. Now let us see how to author test cases using use cases.

Authoring Test Cases Using Use Cases

A use case is a sequence of actions performed by a system which, taken together, produce a result of value to a system user. While use cases are often associated with object-oriented systems, they apply equally well to most other types of systems.

Use cases and test cases work well together in two ways: If the use cases for a system are complete, accurate, and clear, the process of deriving the test cases is straightforward. And if the use cases are not in good shape, the attempt to derive test cases will help to debug the use cases.

Advantages of Test Cases derived from Use Cases

Traditional test case design techniques include analyzing the functional specifications, the software paths, and the boundary values. These techniques are all valid, but use case testing offers a new perspective and identifies test cases which the other techniques have difficulty seeing.

Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system (that moment you realize “we can’t get there from here!”). They also help uncover integration bugs, caused by the interaction and interference of different features, which individual feature testing would not see. The use case method supplements (but does not supplant) the traditional test case design techniques.

What should one know before converting use cases into test cases?

• Business logic and terminology of the vertical.
• Technical complexities and environment compatibilities of the application.
• Limitations of the application and its design.
• Software testing experience.

How to approach deriving test cases from use cases?

• Read and understand the objective of the use case.
• Identify the conditions involved in the use case.
• Identify the relations between the conditions within a use case.
• Identify the dependencies of a use case with another.
• Check the functional flow.
• If you suspect an issue in any manner, get it resolved from your client or design team.
• Break down the positive and negative test scenarios from each condition.
• Collect test data for the identified scenarios.
• Prepare a high-level test case index with unique test IDs.
• If you have a prototype of the application, compare the test scenarios with the prototype and review the test case index.
• Convert the test scenarios into test cases.

The first step is to develop the Use Case topics from the functional requirements of the Software Requirement Specification. The Use Case topics are depicted as an oval with the Use Case name.

The Use Case diagram just provides a quick overview of the relationship of actors to Use Cases. The meat of the Use Case is the text description. This text will contain the following:

Name
Brief Description
SRS Requirements Supported
Pre & Post Conditions
Event Flow

In the first iteration of Use Case definition, the topic, a brief description, and the actors for each case are identified and consolidated. In the second iteration, the Event Flow of each Use Case can be fleshed out. The Event Flow is, in effect, a role-playing of the requirements specification. The requirements in the Software Requirement Specification are each uniquely numbered so that they can be accounted for in verification testing. These requirements should be mapped to the Use Case that satisfies them for accountability.

The Pre-Condition specifies the required state of the system prior to the start of the Use Case. This can be used for a similar purpose in the Test Case. The Post-Condition is the state of the system after the actor interaction. This may be used for test pass/fail criteria.

The event flow is a description (usually a list) of the steps of the actor’s interaction with the system and the system’s required response. Recall that the system is viewed as a black box. The event flow also contains exceptions, which may cause alternate paths through it. The following is an example of a Use Case for telephone systems.
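A hypothetical sketch of such a use case, following the text-description template above (every detail below is illustrative, not taken from a real requirements document):

Name: Place Local Call
Brief Description: A caller dials a local number and is connected to the called party.
SRS Requirements Supported: (the relevant uniquely numbered SRS requirements would be listed here)
Pre-Condition: The caller’s handset is on-hook and the line is in service.
Event Flow:
1. The caller lifts the handset; the system returns a dial tone.
2. The caller dials a local number.
3. The system rings the called party and plays ringback to the caller.
4. The called party answers; the system connects the two parties.
Exception: If the called party’s line is busy, the system plays a busy tone to the caller instead of steps 3 and 4.
Post-Condition: The two parties are connected in an active call (or the caller has heard a busy tone).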

Test Case Management and Test Case Authoring Tools

Once test cases are developed, they need to be maintained properly to avoid confusion about which test cases have been executed, which have not, and which have passed or failed. That is, the status of the test cases needs to be maintained.
So, the most important activity to protect the value of test cases is to maintain them so that they remain testable. They should be maintained after each testing cycle, since testers will find defects in the cases as well as in the software.

Test cases lost or corrupted by poor versioning and storage defeat the whole purpose of making them reusable. Configuration management (CM) of cases should be handled by the organization or project, rather than by test management. If the organization does not have this level of process maturity, the test manager or test writer needs to supply it. Either the project or the test manager should protect valuable test case assets with the following configuration management standards:

• Naming and numbering conventions
• Formats, file types
• Versioning
• Test objects needed by the case, such as databases
• Read-only storage
• Controlled access
• Off-site backup

Test Case Authoring Tools

Improving productivity with test management software:

Software designed to support test authoring is the single greatest productivity booster for writing test cases. It has these advantages over word processing, database, or spreadsheet software:

• Makes writing and outlining easier
• Facilitates cloning of cases and steps
• Easy to add, move, and delete cases and steps
• Automatically numbers and renumbers
• Prints tests in easy-to-follow templates

Test authoring is usually included in off-the-shelf test management software, or it could be custom written. Test management software usually contains more features than just test authoring. When you factor them into the purchase, they offer a lot of power for the price. If you are shopping for test management software, it should have all the usability advantages listed just above, plus additional functions:

• Exports tests to common formats
• Multi-user
• Tracks test writing progress and testing progress
• Tracks test results, or ports to a database or defect tracker
• Links to requirements and/or creates coverage matrices
• Builds test sets from cases
• Allows flexible security

There are many test case authoring tools available in the market today. Here we will limit the discussion to Mercury Interactive’s Test Director and the in-house tool TestLinks.

Mercury Interactive’s Test Director

By far the most familiar tool for maintaining Test Cases is Mercury Interactive’s Test Director. Test Director helps organizations deploy high-quality applications more quickly and effectively. It has four modules—Requirements Manager, Test Plan, Test Lab and Defects Manager. These allow for a smooth information flow between various testing stages. The completely web-enabled Test Director supports high levels of communication and collaboration among distributed testing teams.

Features & benefits of Test Director

Supports the Entire Testing Process

Test Director incorporates all aspects of the testing process—requirements management, planning, scheduling, running tests, issue management and project status analysis—into a single browser-based application.

Provides Anytime, Anywhere Access to Testing Assets

Using Test Director’s Web interface, testers, developers and business analysts can participate in and contribute to the testing process by collaborating across geographic and organizational boundaries.

Provides Traceability Throughout the Testing Process

Test Director links requirements to test cases, and test cases to issues, to ensure traceability throughout the testing cycle. When a requirement changes or a defect is fixed, the tester is notified of the change.

Integrates with Third-Party Applications

Whether you're using an industry standard configuration management solution, Microsoft Office, or a homegrown defect management tool, any application can be integrated into Test Director. Through its open API, Test Director preserves your investment in existing solutions and enables you to create an end-to-end lifecycle-management solution.

Manages Manual and Automated Tests

Test Director stores and runs both manual and automated tests, and can help jumpstart your automation project by converting manual tests to automated test scripts.

Accelerates Testing Cycles

Test Director’s Test Lab Manager accelerates the test execution cycles by scheduling and running tests automatically—unattended, even overnight. The results are reported into Test Director’s central repository, creating an accurate audit trail for analysis.

Facilitates a Consistent and Repeatable Testing Process

By providing a central repository for all testing assets, Test Director facilitates the adoption of a more consistent testing process, which can be repeated throughout the application lifecycle or shared across multiple applications or lines of business (LOBs).

Provides Analysis and Decision Support Tools

Test Director’s integrated graphs and reports help analyze application readiness at any point in the testing process. Using information about requirements coverage, planning progress, run schedules or defect statistics, QA managers are able to make informed decisions on whether the application is ready to go live.

The Art of Test Case Authoring - Part 1

Test case authoring is one of the most complex and time-consuming activities for any test engineer.

The progress of the project depends on the quality of the test cases written. Test engineers need to take utmost care while developing test cases and must ensure that they follow the standard rules of test case authoring, so that the cases are easy to understand and implement.

The following module aims at providing an insight into the fundamentals of test case authoring and the techniques to be adopted while authoring, such as writing test cases based upon functional specifications or using use cases.

Definition: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

Attributes of a good test case:

Accuracy: The test cases should test what their descriptions say they will do.

It should always be clear whether the tester is doing something or the system is doing it. If a tester reads, "The button is pressed," does that mean he or she should press the button, or does it mean to verify that the system displays it as already pressed? One of the fastest ways to confuse a tester is to mix up actions and results. To avoid confusion, actions should always be entered under the ‘Steps’ column, and results under the ‘Results’/’Expected Results’ column. What the tester does is always an action. What the system displays or does is always a result.
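For example, a cleanly separated step and result might read as follows (a hypothetical fragment):

Step: Enter ‘guest’ in the User Name field and click the Login button.
Expected Result: The system displays the Welcome page with the message ‘Logged in as guest’.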

Economical

Test Cases should have only the steps or fields needed for their purpose. They should not give a guided tour of the software.

How long should a test case be?

Generally, a good length for step-by-step cases is 10-15 steps. There are several benefits to keeping tests this short:

1. It takes less time to test each step in short cases.
2. The tester is less likely to get lost, make mistakes, or need assistance.
3. The test manager can accurately estimate how long it will take to test.
4. Results are easier to track.

We should not try to cheat on the standard of 10-15 steps by cramming a lot of action into one step. A step should be one clear input or tester task. You can always tag a simple finisher onto the same step, such as ‘click OK’ or ‘press Enter’. Also, a step can include a set of logically related inputs. You don’t have to have a result for each step if the system doesn’t respond to the step.

Repeatable, self-standing

A test case is a controlled experiment. It should get the same result every time, no matter who tests it. If only the writer can test it and get that result, or if the test gets different results for different testers, it needs more work on the setup or actions.

Appropriate

A test case has to be appropriate for the testers and environment. If it is theoretically sound but requires skills that none of the testers have, it will sit on the shelf. Even if you know who is testing the first time, you need to consider maintenance and regression down the road.

Traceable

You have to know what requirement the case is testing. It may meet all the other standards, but if its result, pass or fail, doesn't matter, why bother?

The above list is comprehensive but not exhaustive. Based on individual requirements, more standards may be added to the above list.

Most Common mistakes while writing Test Cases

In each writer’s work, test case defects will cluster around certain writing mistakes. If you are writing cases or managing writers, don’t wait until cases are all done before finding these mistakes. Review the cases every day or two, looking for the faults that will make the cases harder to test and maintain. Chances are you will discover that the opportunities to improve are clustered in one of the seven most common test case mistakes:

1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that changed or no longer exist
5. Unclear whether tester or system does action
6. Unclear what is a pass or fail result
7. Failure to clean up

Test cases can be authored using functional requirements or use cases. Now let us walk through test case authoring using functional requirements:

Authoring test cases using functional specifications

This means writing test cases for an application with the intent to uncover nonconformance with functional specifications. This type of testing activity is central to most software test efforts, as it tests whether an application is functioning in accordance with its specified requirements. Additionally, some of the test cases may be written for testing the nonfunctional aspects of the application, such as performance, security, and usability.

The importance of having testable, complete, and detailed requirements cannot be overemphasized. In practice, however, having a perfect set of requirements at the tester's disposal is a rarity. In order to create effective functional test cases, the tester must understand the details and intricacies of the application. When these details and intricacies are inadequately documented in the requirements, the tester must conduct an analysis of them.

Even when detailed requirements are available, the flow and dependency of one requirement to the other is often not immediately apparent. The tester must therefore explore the system in order to gain a sufficient understanding of its behavior to create the most effective test cases.

Effective test design includes test cases that rarely overlap, but instead provide effective coverage with minimal duplication of effort (although duplication sometimes cannot be entirely avoided in assuring complete testing coverage). Apart from avoiding duplication of work, the test team should review the test plan and design in order to:

• Identify any patterns of similar actions or events used by several transactions. Given this information, test cases should be developed in a modular fashion so that they can be reused and recombined to execute various functional paths, avoiding duplication of test-creation efforts.

• Determine the order or sequence in which specific transactions must be tested to accommodate preconditions necessary to execute a test procedure, such as database configuration, or other requirements that result from control or work flow.

• Create a test procedure relationship matrix that incorporates the flow of the test procedures based on preconditions and postconditions necessary to execute a test case. A test-case relationship diagram that shows the interactions of the various test procedures, such as the high-level test procedure relationship diagram created during test design, can improve the testing effort.

Another consideration for effectively creating test cases is to determine and review critical and high-risk requirements by testing the most important functions early in the development schedule. It can be a waste of time to invest efforts in creating test procedures that verify functionality rarely executed by the user, while failing to create test procedures for functions that pose high risk or are executed most often.

To sum up, effective test-case design requires understanding of system variations, flows, and scenarios. It is often difficult to wade through page after page of requirements documents in order to understand connections, flows, and interrelationships. Analytical thinking and attention to detail are required to understand the cause-and-effect connections within the system intricacies. It is insufficient to design and develop high-level test cases that execute the system only at a high level; it is important to also design test procedures at the detailed, gray-box level.

Wednesday, October 29, 2008

Difference between Verification and Validation

Verification and Validation are the basic ingredients of Software Quality Assurance (SQA) activities.

“Verification” checks whether we are building the system right, and
“Validation” checks whether we are building the right system.

Verification Strategies

Verification Strategies comprise the following:

1. Requirements Review.
2. Design Review.
3. Code Walkthrough.
4. Code Inspections.

Validation Strategies

Validation Strategies comprise the following:

1. Unit Testing.
2. Integration Testing.
3. System Testing.
4. Performance Testing.
5. Alpha Testing.
6. User Acceptance Testing (UAT).
7. Installation Testing.
8. Beta Testing.

Now let’s see the Verification Strategies in detail:



Validation Strategies in detail:





When should testing occur?

Testing can and should occur throughout the phases of a project.

Requirements Phase

* Determine the test strategy.
* Determine adequacy of requirements.
* Generate functional test conditions.

Design Phase

* Determine consistency of design with requirements.
* Determine adequacy of design.
* Generate structural and functional test conditions.

Program (Build) Phase

* Determine consistency with design.
* Determine adequacy of implementation.
* Generate structural and functional test conditions for programs/units.

Test Phase

* Determine adequacy of the test plan.
* Test application system.

Installation Phase

* Place tested system into production.

Maintenance Phase

* Modify and retest.

Tuesday, October 28, 2008

All About White Box and Black Box Testing

White Box Testing

What is WBT?

White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised. In other words, WBT tends to involve the coverage of the specification in the code.

Code coverage is commonly defined in terms of the types listed below.

• Segment coverage – Each segment of code between control structures is executed at least once.
• Branch Coverage or Node Testing – Each branch in the code is taken in each possible direction at least once (branch and path coverage are contrasted in the sketch following this list).
• Compound Condition Coverage – When there are multiple conditions, you must test not only each direction but also each possible combination of conditions, which is usually done by using a ‘Truth Table’.
• Basis Path Testing – Each independent path through the code is taken in a pre-determined order. This point will be discussed further in another section.
• Data Flow Testing (DFT) – In this approach you track the specific variables through each possible calculation, thus defining the set of intermediate paths through the code, i.e., those based on each piece of code chosen to be tracked. Even though the paths are considered independent, dependencies across multiple paths are not really tested by this approach. DFT tends to reflect dependencies, but mainly through sequences of data manipulation. This approach tends to uncover bugs such as variables used but not initialized, or declared but not used, and so on.
• Path Testing – Path testing is where all possible paths through the code are defined and covered. This testing is extremely laborious and time consuming.
• Loop Testing – In addition to the above measures, there are testing strategies based on loop testing. These strategies relate to testing single loops, concatenated loops, and nested loops. Loops are fairly simple to test unless dependencies exist among loops or between a loop and the code it contains.
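A small Python sketch may help contrast two of these coverage types. With two independent decisions, branch coverage can be reached with just two tests, while full path coverage needs all four decision combinations (the function and values are illustrative):

def adjust(x: int, y: int) -> int:
    if x > 0:        # decision 1
        x = x * 2
    if y > 0:        # decision 2
        y = y + 1
    return x + y

# Branch coverage: two tests take every branch once in each direction:
#   adjust(1, 1)    -> both decisions True
#   adjust(-1, -1)  -> both decisions False
# Path coverage: all four paths (TT, TF, FT, FF) must be exercised,
# so adjust(1, -1) and adjust(-1, 1) are also required.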

What do we do in WBT?

In WBT, we use the control structure of the procedural design to derive test cases. Using WBT methods, a tester can derive test cases that:
• Guarantee that all independent paths within a module have been exercised at least once.
• Exercise all logical decisions on their true and false values.
• Execute all loops at their boundaries and within their operational bounds
• Exercise internal data structures to ensure their validity.

White box testing (WBT) is also called Structural or Glass box testing.

Why WBT?

We do WBT because Black box testing is unlikely to uncover numerous sorts of defects in the program. These defects can be of the following nature:

• Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or controls that are out of the mainstream of the program.
• The logical flow of the program is sometimes counterintuitive, meaning that our unconscious assumptions about flow of control and data may lead to design errors that are uncovered only when path testing starts.
• Typographical errors are random, some of which will be uncovered by syntax checking mechanisms but others will go undetected until testing begins.

Skills Required

Theoretically, all we need to do in WBT is to define all logical paths, develop test cases to exercise them, and evaluate the results, i.e., generate test cases to exercise the program logic exhaustively.

For this we need to know the program well, i.e., we should know the specification and the code to be tested; related documents should be available to us. We must be able to tell the expected status of the program versus the actual status found at any point during the testing process.

Limitations

Unfortunately, in WBT exhaustive testing of code presents certain logistical problems. Even for small programs, the number of possible logical paths can be very large. For instance, consider a 100-line C program that contains two nested loops executing 1 to 20 times each, depending upon some initial input, after some basic data declarations, with four if-then-else constructs inside the interior loop. There are approximately 10^14 logical paths that would have to be exercised to test the program exhaustively. A magic test processor that could develop a test case, execute it, and evaluate the results in one millisecond would still require about 3,170 years of continuous work for this exhaustive testing, which is certainly impractical. Exhaustive WBT is impossible for large software systems. But that doesn’t mean WBT should be considered impractical. Limited WBT, in which a limited number of important logical paths are selected and exercised and important data structures are probed for validity, is both practical and effective. It is suggested that white box and black box testing techniques can be coupled to provide an approach that validates the software interface and selectively ensures the correctness of the internal workings of the software.
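The arithmetic behind that figure: 10^14 test cases at one millisecond each is 10^14 ms = 10^11 seconds, and a year is roughly 3.15 x 10^7 seconds, so 10^11 / (3.15 x 10^7) comes to approximately 3,170 years of continuous execution.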

Tools used for White Box testing:

A few test automation tool vendors offer white box testing tools which:
1) Provide run-time error and memory leak detection;
2) Record the exact amount of time the application spends in any given block of code for the purpose of finding inefficient code bottlenecks; and
3) Pinpoint areas of the application that have and have not been executed.

Basis Path Testing

Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.

Flow Graph Notation

The flow graph depicts logical control flow using a diagrammatic notation. Each structured construct has a corresponding flow graph symbol.

Cyclomatic Complexity

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.

An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

Computing Cyclomatic Complexity

Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:

1. The number of regions of the flow graph corresponds to the Cyclomatic complexity.

2. Cyclomatic complexity, V(G), for a flow graph G is defined as

V(G) = E - N + 2

where E is the number of flow graph edges and N is the number of flow graph nodes.

3. Cyclomatic complexity, V(G), for a flow graph G is also defined as

V(G) = P + 1

where P is the number of predicate nodes contained in the flow graph G.
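As a quick check of all three methods on the same (hypothetical) graph: a flow graph with 11 edges, 9 nodes, and 3 predicate nodes gives V(G) = 11 - 9 + 2 = 4 by the second formula and V(G) = 3 + 1 = 4 by the third, and such a graph encloses 4 regions, so the basis set contains at most 4 independent paths.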

Graph Matrices

The procedure for deriving the flow graph and even determining a set of basis paths is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure called a graph matrix can be quite useful.

A Graph Matrix is a square matrix whose size is equal to the number of nodes on the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes.
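A minimal Python sketch of a graph matrix, assuming link weights of 1 (an entry simply marks a connection). Any row with more than one entry corresponds to a predicate node, which gives another route to V(G); the graph itself is hypothetical.

# Graph matrix for a hypothetical 4-node flow graph.
# matrix[i][j] = 1 means a control-flow edge from node i to node j.
matrix = [
    [0, 1, 1, 0],  # node 0 branches two ways: a predicate node
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],  # node 3 is the exit node
]

# V(G) = sum over rows of (connections - 1), plus 1.
v_g = sum(max(sum(row) - 1, 0) for row in matrix) + 1
print(v_g)  # 2: one predicate node, so two independent paths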

Control Structure Testing

Described below are some of the variations of Control Structure Testing.

Condition Testing

Condition testing is a test case design method that exercises the logical conditions contained in a program module.

Data Flow Testing

The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.

Loop Testing

Loop Testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: Simple loops, Concatenated loops, nested loops, and unstructured loops.

Simple Loops

The following sets of tests can be applied to simple loops, where ‘n’ is the maximum number of allowable passes through the loop.

1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. ‘m’ passes through the loop, where m < n.
5. n-1, n, n+1 passes through the loop.

For example, for a loop with a maximum of n = 10 passes, this means testing 0, 1, 2, a typical m such as 5, and 9, 10, and 11 passes.

Nested Loops

If we extend the test approach from simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases.

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.
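A short Python sketch of the dependent case, where the result of the first loop feeds the second (the function and inputs are illustrative):

def concatenated(values):
    # Loop 1: count the positive entries.
    count = 0
    for v in values:
        if v > 0:
            count += 1
    # Loop 2: its iteration bound depends on loop 1's counter,
    # so the two loops are NOT independent.
    total = 0
    for i in range(count):
        total += i
    return total

# Test the loops together, e.g. with inputs that drive count to
# 0, 1, 2, a typical value, and its maximum, rather than applying
# the simple-loop strategy to each loop in isolation.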

Unstructured Loops

Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.

Black Box Testing

Black box is a test design method. Black box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal structure. In other words, the test engineer need not know the internal workings of the “black box”.

It focuses on the functionality part of the module.

Some people like to call black box testing behavioral, functional, opaque-box, or closed-box testing. While the term black box is most popularly used, many people prefer the terms “behavioral” and “structural” for black box and white box respectively. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s still discouraged.

Personally, we feel that there is a trade-off between the approaches used to test a product using the white box and black box types.

There are some bugs that cannot be found using only black box or only white box testing. If the test cases are extensive and the test inputs are drawn from a large sample space, then it is usually possible to find the majority of the bugs through black box testing.

Tools used for Black Box testing:

Many tool vendors have been producing tools for automated black box and automated white box testing for several years. The basic functional or regression testing tools capture the results of black box tests in a script format. Once captured, these scripts can be executed against future builds of an application to verify that new functionality hasn't disabled previous functionality.

Advantages of Black Box Testing

- The tester can be non-technical.
- This testing is most likely to find the bugs the user would find.
- Testing helps to identify vagueness and contradictions in functional specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing

- Chances of repeating tests that are already done by the programmer.
- The test inputs need to come from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult, and there is a chance of leaving paths untested.

Graph Based Testing Methods

Software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.

Error Guessing

Error Guessing comes with experience with the technology and the project. Error Guessing is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: Either when reading the functional documents or when you are testing and find an error that you have not documented.

Boundary Value Analysis

Boundary Value Analysis (BVA) is a test data selection technique (Functional Testing technique) where the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values then it will work correctly for all values in between.

>> Extends equivalence partitioning
>> Test both sides of each boundary
>> Look at output boundaries for test cases too
>> Test min, min-1, max, max+1, typical values
>> BVA focuses on the boundary of the input space to identify test cases
>> Rationale is that errors tend to occur near the extreme values of an input variable

There are two ways to generalize the BVA techniques:

1. By the number of variables
> For n variables: BVA yields 4n + 1 test cases (see the sketch after this list).
2. By the kinds of ranges
> Generalizing ranges depends on the nature or type of variables
>> NextDate has a variable Month and the range could be defined as {Jan, Feb, …Dec}
>> Min = Jan, Min +1 = Feb, etc.
>> Triangle had a declared range of {1, 20,000}
>> Boolean variables have extreme values True and False but there is no clear choice for the remaining three values
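To make the 4n + 1 count concrete, here is a minimal Python sketch under the usual single-fault assumption: hold every variable at a nominal value and let each variable in turn take its min, min+1, max-1, and max. The function name and the (min, nominal, max) encoding are my own.

def bva_cases(ranges):
    """ranges: one (min, nominal, max) triple per input variable."""
    nominal = [nom for _, nom, _ in ranges]
    cases = [tuple(nominal)]  # the single all-nominal case
    for i, (lo, _, hi) in enumerate(ranges):
        for value in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = value   # vary one variable at a time
            cases.append(tuple(case))
    return cases

# Three variables with the Triangle's declared range of {1, 20000}:
print(len(bva_cases([(1, 10, 20000)] * 3)))  # 13 = 4 * 3 + 1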

Advantages of Boundary Value Analysis
1. Robustness Testing - Boundary Value Analysis plus values that go beyond the limits
2. Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling
4. For strongly typed languages robust testing results in run-time errors that abort normal execution

Limitations of Boundary Value Analysis
BVA works best when the program is a function of several independent variables that represent bounded physical quantities.

1. Independent Variables

> NextDate test cases derived from BVA would be inadequate: focusing on the boundary would place no emphasis on February or leap years
> Dependencies exist with NextDate's Day, Month and Year
> Test cases derived without consideration of the function

2. Physical Quantities

> An example of physical variables being tested, telephone numbers - what faults might be revealed by numbers of 000-0000, 000-0001, 555-5555, 999-9998, 999-9999?

Equivalence Partitioning

Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
EP can be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid classes are defined (see the worked example after this list).
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
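As a worked instance of guideline 1, consider a hypothetical field accepting whole numbers from 18 to 65 inclusive; one valid and two invalid classes result, each with an arbitrarily picked representative:

# One valid and two invalid equivalence classes for a range input.
classes = {
    "valid   (18 <= x <= 65)": 40,  # representative inside the range
    "invalid (x < 18)":        10,  # representative below the range
    "invalid (x > 65)":        70,  # representative above the range
}
for name, representative in classes.items():
    print(name, "->", representative)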

Comparison Testing

There are situations where independent versions of software must be developed for critical applications, even when only a single version will be used in the delivered computer-based system. It is these independent versions that form the basis of a black box testing technique called comparison testing or back-to-back testing.

Orthogonal Array Testing

The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions by deriving a suitable small set of test cases (from a large number of possibilities).

Verification Strategies

What is ‘Verification’?

Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

What is the importance of the Verification Phase?

The verification process helps in detecting defects early and preventing their leakage downstream. Thus, the higher cost of later detection and rework is eliminated.

Review

A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

The main goal of reviews is to find defects. Reviews are a good complement to testing to help assure quality. A few purposes of SQA reviews are as follows:

• Assure the quality of deliverables before the project moves to the next stage.

• Once a deliverable has been reviewed, revised as required, and approved, it can be used as a basis for the next stage in the life cycle.

What are the various types of reviews?

Types of reviews include Management Reviews, Technical Reviews, Inspections, Walkthroughs and Audits.

Management Reviews

Management reviews are performed by those directly responsible for the system in order to monitor progress, determine the status of plans and schedules, and confirm requirements and their system allocation.

Therefore the main objectives of Management Reviews can be categorized as follows:

• Validate from a management perspective that the project is making progress according to the project plan.
• Ensure that deliverables are ready for management approvals.
• Resolve issues that require management’s attention.
• Identify any project bottlenecks.
• Keep the project in control.

Decisions supported by such reviews include corrective actions, changes in the allocation of resources, or changes to the scope of the project.

In management reviews the following Software products are reviewed:
Audit Reports
Contingency plans
Installation plans
Risk management plans
Software Q/A

The participants of the review play the roles of Decision-Maker, Review Leader, Recorder, Management Staff, and Technical Staff.

Technical Reviews

Technical reviews confirm that the product conforms to specifications; adheres to regulations, standards, guidelines, and plans; that changes are properly implemented; and that changes affect only those system areas identified by the change specification.

The main objectives of Technical Reviews can be categorized as follows:

• Ensure that the software conforms to the organization’s standards.
• Ensure that any changes in the development procedures (design, coding, testing) are implemented per the organization’s pre-defined standards.

In technical reviews, the following Software products are reviewed:
• Software requirements specification
• Software design description
• Software test documentation
• Software user documentation
• Installation procedure
• Release notes

The participants of the review play the roles of Decision-maker, Review leader, Recorder, Technical staff.

What is Requirement Review?

A process or meeting during which the requirements for a system, hardware item, or software item are presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include system requirements review, software requirements review.

Who is involved in Requirement Review?

• Product management leads the Requirement Review. Members from every affected department participate in the review.

Input Criteria

Software requirement specification is the essential document for the review. A checklist can be used for the review.

Exit Criteria

Exit criteria include the filled and completed checklist with the reviewers’ comments and suggestions, and re-verification of whether these have been incorporated in the documents.

What is Design Review?

A process or meeting during which a system, hardware, or software design is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include critical design review, preliminary design review, and system design review.

Who is involved in Design Review?

• A QA team member leads the design review. Members from the development team and QA team participate in the review.

Input Criteria

Design document is the essential document for the review. A checklist can be used for the review.

Exit Criteria

Exit criteria include the filled and completed checklist with the reviewers’ comments and suggestions, and re-verification of whether these have been incorporated in the documents.

What is Code Review?

A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

Who is involved in Code Review?

• A QA team member leads the code review (in case the QA team is only involved in black box testing, the development team lead chairs the review team). Members from the development team and QA team participate in the review.

Input Criteria

The Coding Standards Document and the Source file are the essential documents for the review. A checklist can be used for the review.

Exit Criteria

Exit criteria include the filled and completed checklist with the reviewers’ comments and suggestions, and re-verification of whether these have been incorporated in the documents.


Walkthrough


A static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems.

The objectives of Walkthrough can be summarized as follows:
• Detect errors early.
• Ensure (re)established standards are followed.
• Train and exchange technical information among project teams which participate in the walkthrough.
• Increase the quality of the project, thereby improving morale of the team members.

The participants in Walkthroughs assume one or more of the following roles:
a) Walk-through leader
b) Recorder
c) Author
d) Team member

To consider a review as a systematic walk-through, a team of at least two members shall be assembled. Roles may be shared among the team members. The walk-through leader or the author may serve as the recorder. The walk-through leader may be the author.

Individuals holding management positions over any member of the walk-through team shall not participate in the walk-through.

Input to the walk-through shall include the following:
a) A statement of objectives for the walk-through
b) The software product being examined
c) Standards that are in effect for the acquisition, supply, development, operation, and/or maintenance of the software product
Input to the walk-through may also include the following:
d) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
e) Anomaly categories

The walk-through shall be considered complete when
a) The entire software product has been examined
b) Recommendations and required actions have been recorded
c) The walk-through output has been completed

Inspection

A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems. Types include code inspections, design inspections, architectural inspections, testware inspections, etc.

The participants in Inspections assume one or more of the following roles:
a) Inspection leader
b) Recorder
c) Reader
d) Author
e) Inspector

All participants in the review are inspectors. The author shall not act as inspection leader and should not act as reader or recorder. Other roles may be shared among the team members. Individual participants may act in more than one role.
Individuals holding management positions over any member of the inspection team shall not participate in the inspection.

Input to the inspection shall include the following:
a) A statement of objectives for the inspection
b) The software product to be inspected
c) Documented inspection procedure
d) Inspection reporting forms
e) Current anomalies or issues list
Input to the inspection may also include the following:
f) Inspection checklists
g) Any regulations, standards, guidelines, plans, and procedures against which the software product is to be inspected
h) Hardware product specifications
i) Hardware performance data
j) Anomaly categories

The individuals responsible for the software product may make additional reference material available when requested by the inspection leader.

The purpose of the exit criteria is to bring an unambiguous closure to the inspection meeting. The exit decision shall determine if the software product meets the inspection exit criteria and shall prescribe any appropriate rework and verification. Specifically, the inspection team shall identify the software product disposition as one of the following:
a) Accept with no or minor rework. The software product is accepted as is or with only minor rework. (For example, that would require no further verification).
b) Accept with rework verification. The software product is to be accepted after the inspection leader or a designated member of the inspection team (other than the author) verifies rework.
c) Re-inspect. Schedule a re-inspection to verify rework. At a minimum, a re-inspection shall examine the software product areas changed to resolve anomalies identified in the last inspection, as well as side effects of those changes.

When should testing occur?

Wrong Assumption

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed after programming is done for a product. Instead, testing should be performed at every development stage of the product. Test data sets must be derived, and their correctness and consistency should be monitored throughout the development process. If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather, testing should be involved throughout the SDLC in order to produce a quality product.

Testing Activities in Each Phase

The following testing activities should be performed during these phases:
• Requirements Analysis - (1) Determine correctness (2) Generate functional test data.
• Design - (1) Determine correctness and consistency (2)Generate structural and functional test data.
• Programming/Construction - (1) Determine correctness and consistency (2) Generate structural and functional test data (3) Apply test data (4) Refine test data.
• Operation and Maintenance - (1) Retest.

Now we consider these in detail.

Requirements Analysis

The following test activities should be performed during this stage.

• Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.

The requirements statement should record the following information and decisions:
1. Program function - What the program must do?
2. The form, format, data types and units for input.
3. The form, format, data types and units for output.
4. How exceptions, errors and deviations are to be handled.
5. For scientific computations, the numerical method or at least the required accuracy of the solution.
6. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.
• Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner, and for each class a representative element should be included in the test data. In addition, the following should also be included in the data set: (1) boundary values; (2) any non-extreme input values that would require special handling.
The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.

• The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.

Design

The design document aids in programming, communication, and error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution i.e. what the program will do and how it will be done.

The design document should contain:
• Principal data structures.
• Functions, algorithms, heuristics or special techniques used for processing.
• The program organization, how it will be modularized and categorized into external and internal interfaces.
• Any additional information.

Here the testing activities should consist of:
• Analysis of design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling, and data structures should especially be checked for inconsistencies.

• Analysis of design to check whether it satisfies the requirements - check whether both requirements and design document contain the same form, format, units used for input and output and also that all functions listed in the requirement document have been included in the design document. Selected test data which is generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

• Generation of test data based on the design - The tests generated should cover the structure as well as the internal functions of the design like the data structures, algorithm, functions, heuristics and general program structure etc. Standard extreme and special values should be included and expected output should be recorded in the test data.

• Reexamination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only by the designer/developer.

Programming/Construction

Here the main testing points are:

• Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.

• Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.

• Ask a colleague for assistance - Some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to the party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

• Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

• Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.

• Test one at a time - Pieces of code, individual modules, and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation – insertion of some code into the program solely to measure various program characteristics – can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.

• Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.
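
To make the difference between these three metrics concrete, here is a minimal Python sketch (the function and the inputs are invented purely for illustration):

```python
def classify(x, y):
    # Two independent decisions, so up to four logical paths.
    result = 0
    if x > 0:
        result += 1
    if y > 0:
        result += 2
    return result

# Statement coverage: classify(1, 1) alone executes every statement.
# Branch coverage: classify(1, 1) and classify(-1, -1) together take
#   both the true and false exits of each decision.
# Path coverage: all four combinations are needed.
for x, y in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print(x, y, classify(x, y))
```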

Operations and maintenance

Corrections, modifications and extensions are bound to occur even for small programs and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist. Modifications must be made to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.

Introduction to Software Testing - Part 2

Testing Techniques

We have seen different types of testing in Part 1. Now let us see the different techniques of testing.

Black Box testing

Black box testing (data driven or input/output driven) is not based on any knowledge of internal design or code. Tests are based on requirements and functionality. Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:

1. Incorrect or missing functions,
2. Interface errors,
3. Errors in data structures or external database access,
4. Performance errors, and
5. Initialization and termination errors.

Range Testing: Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.

Test Case Design for Equivalence Partitioning

1. A good test case reduces by more than one the number of other test cases that must be developed.
2. A good test case covers a large set of other possible cases.
3. Define classes of valid inputs.
4. Define classes of invalid inputs.
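
For example, here is a minimal sketch in Python; the function and the month-number range are invented purely to illustrate one valid and two invalid classes:

```python
def is_valid_month(m):
    # Valid class: integers 1..12; invalid classes: m < 1 and m > 12.
    return 1 <= m <= 12

# One representative test per equivalence class covers the whole domain.
assert is_valid_month(6) is True     # valid class: 1..12
assert is_valid_month(0) is False    # invalid class: below the range
assert is_valid_month(13) is False   # invalid class: above the range
```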

Boundary testing

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:

1. For input ranges bounded by a and b, test cases should include the values a and b and values just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.

Test Case Design for Boundary Value Analysis

Situations on, above, or below the edges of input, output, and condition classes have a high probability of uncovering errors.
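
Continuing the hypothetical month-number example from the previous section, boundary value analysis picks values on and just beyond the edges of the range [1, 12]:

```python
def is_valid_month(m):
    # Same invented function as in the equivalence partitioning sketch.
    return 1 <= m <= 12

# Values on the boundaries (a=1, b=12) and just outside them.
for value, expected in [(0, False), (1, True), (12, True), (13, False)]:
    assert is_valid_month(value) is expected
```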

Error Guessing

Error Guessing is the process of using intuition and past experience to fill in gaps in the test data set. There are no rules to follow. The tester must review the test records with an eye towards recognizing missing conditions. Two familiar examples of error-prone situations are division by zero and calculating the square root of a negative number. Either of these can result in system errors or garbled output.
Other cases where experience has demonstrated error proneness are the processing of variable length tables, calculation of median values for odd and even numbered populations, cyclic master file/data base updates (improper handling of duplicate keys, unmatched keys, etc.), overlapping storage areas, overwriting of buffers, forgetting to initialize buffer areas, and so forth. I am sure you can think of plenty of circumstances unique to your hardware/software environments and use of specific programming languages.

Error Guessing is as important as Equivalence partitioning and Boundary Analysis because it is intended to compensate for their inherent incompleteness. As Equivalence Partitioning and Boundary Analysis complement one another, Error Guessing complements both of these techniques.
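
The two familiar examples above translate directly into tests. A small sketch in Python (the function name is invented) checks that the classic troublemakers fail loudly rather than producing garbled output:

```python
import math

def ratio(a, b):
    # Hypothetical function under test: returns a divided by b.
    return a / b

# Error guessing: feed the classic troublemakers.
try:
    ratio(1, 0)
except ZeroDivisionError:
    print("division by zero raises a clean error")

try:
    math.sqrt(-1)
except ValueError:
    print("square root of a negative number raises a clean error")
```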

White Box testing

White box testing (logic driven) is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions. White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. Guarantee that all independent paths within a module have been exercised at least once
2. Exercise all logical decisions on their true and false sides,
3. Execute all loops at their boundaries and within their operational bounds, and
4. Exercise internal data structures to ensure their validity.

Path Testing

A path-coverage test allows us to exercise every transition between program statements (and so every statement and branch as well).
• First we construct a program graph.
• Then we enumerate all paths.
• Finally we devise the test cases.

Possible criteria:

1. Exercise every path from entry to exit;
2. Exercise each statement at least once;
3. Exercise each case in every branch/case.
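
As a minimal sketch, the invented function below has three entry-to-exit paths through its single multi-way decision; one test case per enumerated path satisfies the criteria above:

```python
def grade(score):
    # Program graph: one decision with three exits, so three paths.
    if score >= 75:
        return "distinction"
    elif score >= 40:
        return "pass"
    else:
        return "fail"

# One test case per enumerated path.
assert grade(80) == "distinction"   # path 1
assert grade(50) == "pass"          # path 2
assert grade(10) == "fail"          # path 3
```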

Condition testing

A condition test can use a combination of comparison operators and logical operators.
The comparison operators compare the values of variables, and this comparison produces a boolean result. The logical operators combine booleans to produce a single boolean result that is the result of the condition test.

e.g. (a == b) Result is true if the value of a is the same as the value of b.

Myers: take each branch out of a condition at least once.
White and Cohen: for each relational operator e1 < e2 test all combinations of e1, e2 orderings. For a Boolean condition, test all possible inputs (!).

Branch and relational operator testing - enumerate categories of operator values:

B1 || B2: test {B1=t, B2=t}, {t, f}, {f, t}
B1 || (e2 = e3): test {t, =}, {f, =}, {t, <}, {t, >}
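
A small sketch of the second case above, with invented values: the four category combinations for B1 || (e2 = e3) become four concrete test inputs:

```python
def condition(b1, e2, e3):
    # Compound condition under test: B1 || (e2 == e3)
    return b1 or (e2 == e3)

# The four combinations listed above: {t,=}, {f,=}, {t,<}, {t,>}.
print(condition(True, 5, 5))    # {t, =}
print(condition(False, 5, 5))   # {f, =}
print(condition(True, 4, 5))    # {t, <}
print(condition(True, 6, 5))    # {t, >}
```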

Loop Testing

For a single loop with a zero minimum, a maximum of N iterations, and no excluded values:

1. Try bypassing the loop entirely (zero iterations).
2. Try a negative value for the loop iteration variable.
3. One iteration through the loop.
4. Two iterations through the loop - some initialization problems can be uncovered only by two iterations.
5. A typical number of iterations.
6. One less than the maximum (N - 1).
7. The maximum (N).
8. Try one greater than the maximum (N + 1).
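
As a hedged illustration, here is the checklist applied to a simple summing loop in Python (the function and the maximum N are invented):

```python
def total(items):
    # A single simple loop over a list of numbers.
    s = 0
    for x in items:
        s += x
    return s

N = 5  # assumed maximum number of allowable passes
cases = [
    [],                   # bypass the loop entirely
    [1],                  # one iteration
    [1, 2],               # two iterations (catches bad initialization)
    [1, 2, 3],            # a typical number of iterations
    list(range(N - 1)),   # one less than the maximum
    list(range(N)),       # the maximum
    list(range(N + 1)),   # one greater than the maximum
]
for c in cases:
    print(len(c), "iterations ->", total(c))
```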

Data Flow Testing

Def-use chains:

1. Def = a definition of a variable.
2. Use = a use of that variable.
3. Def-use chains go across control boundaries.
4. Testing: exercise every def-use chain at least once.
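
A tiny illustrative sketch: the variable rate below has two definitions and one use, giving two def-use chains, one of which crosses the if boundary; one test per chain exercises both:

```python
def discount(price, is_member):
    rate = 0            # def 1 of 'rate'
    if is_member:
        rate = 10       # def 2 of 'rate', inside a branch
    return price - price * rate // 100   # use of 'rate'

# One test per def-use chain.
assert discount(100, False) == 100   # chain: def 1 -> use
assert discount(100, True) == 90     # chain: def 2 -> use
```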

Stubs for Testing

A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system.

Stubs for Top-Down Testing

Four basic types:
o Display a trace message
o Display parameter value(s)
o Return a value from a table
o Return a table value selected by parameter
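
A minimal Python sketch combining the four types, standing in for an unfinished tax-rate module (the names and table contents are invented):

```python
# Stub for a not-yet-written get_tax_rate module.
RATES = {"food": 0, "books": 5, "other": 18}   # canned table

def get_tax_rate(category):
    print("STUB: get_tax_rate called")    # type 1: trace message
    print("  parameter:", category)       # type 2: display parameter value
    # Types 3 and 4: return a value from a table, here selected
    # by the parameter (with a fixed fallback).
    return RATES.get(category, RATES["other"])

print(get_tax_rate("books"))   # callers can now be tested top-down
```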

Drivers for Testing

A test harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation.
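
A matching sketch of a driver: a few lines of code that feed inputs to a unit under test and report pass/fail, so the unit can be exercised before the rest of the system exists (the unit and the cases are invented):

```python
def unit_under_test(x):
    # Stand-in for the module being exercised in isolation.
    return x * 2

def driver():
    # The driver supplies the environment: inputs, expected
    # outputs, and a simple pass/fail report.
    for arg, expected in [(0, 0), (2, 4), (-3, -6)]:
        actual = unit_under_test(arg)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: unit_under_test({arg}) = {actual}")

driver()
```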

Introduction to Software Testing - Part 1

Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Life Cycle of Testing Process

The following are some of the steps to consider:
•Obtain requirements, functional design, and internal design specifications and other necessary documents
•Obtain schedule requirements
•Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
•Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
•Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
•Determine test environment requirements (hardware, software, communications, etc.)
•Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
•Determine test input data requirements
•Identify tasks, those responsible for tasks
•Set schedule estimates, timelines, milestones
•Determine input equivalence classes, boundary value analyses, error classes
•Prepare test plan document and have needed reviews/approvals
•Write test cases
•Have needed reviews/inspections/approvals of test cases
•Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
•Obtain and install software releases
•Perform tests
•Evaluate and report results
•Track problems/bugs and fixes
•Retest as needed
•Maintain and update test plans, test cases, test environment, and testware through life cycle

Levels of Testing

Following are the various levels of testing:

1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

Now let's see what each of the above levels means:

1. Unit Testing:

The most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

2. Integration Testing

Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration can be top-down or bottom-up:

• Top-down testing starts with main and successively replaces stubs with the real modules.
• Bottom-up testing builds larger module assemblies from primitive modules.
• Sandwich testing is mainly top-down with bottom-up integration and testing applied to certain widely used components.

3. System Testing

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.

4. Acceptance Testing

Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

User Acceptance Testing is often the final step before rolling out the application. Usually the end users who will be using the application test it before 'accepting' it.

This type of testing gives the end users confidence that the application being delivered to them meets their requirements. It also helps uncover bugs related to the usability of the application.

Types of Testing

There are various types of testing performed on software to ensure better quality. Below are some of the testing types:

Incremental integration testing

Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Sanity testing

Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Compatibility testing

Testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing

Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing

Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Comparison testing

Comparing software weaknesses and strengths to competing products.

Load testing

Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

System testing

Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

Functional testing

Black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

Volume testing

Volume testing involves testing a software or Web application using corner cases of "task size" or input data size. The exact volume tests performed depend on the application's functionality, its input and output mechanisms and the technologies used to build the application. Sample volume testing considerations include, but are not limited to:

If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file.

If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data.

If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request.

If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submit the form.
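
As a hedged sketch of the first consideration, a few lines of Python can generate the empty and huge input files (the file names and sizes are arbitrary):

```python
# Build volume-test fixtures: a zero-byte file and a very large one.
with open("empty.txt", "w") as f:
    pass   # empty input file

with open("huge.txt", "w") as f:
    line = "x" * 80 + "\n"
    for _ in range(2000000):   # roughly 160 MB of text
        f.write(line)
```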

Stress testing

Term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Sociability Testing

This means that you test an application in its normal environment, along with other standard applications, to make sure they all get along together; that is, that they don't corrupt each other's files, they don't crash, they don't hog system resources, they don't lock up the system, they can share the printer peacefully, etc.

Usability testing

Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Recovery testing

Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing

Testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Performance Testing

Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

End-to-end testing

Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Regression testing

Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Parallel testing

With parallel testing, users can easily choose to run batch tests or asynchronous tests depending on the needs of their test systems. Testing multiple units in parallel increases test throughput and lowers a manufacturer's cost of test.

Install/uninstall testing

Testing of full, partial, or upgrade install/uninstall processes.

Mutation testing

A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
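
A toy illustration of the idea, with invented code: a single operator is mutated, and the existing test data either detects ('kills') the mutant or reveals a gap in the tests:

```python
def max_of(a, b):            # original code
    return a if a >= b else b

def max_of_mutant(a, b):     # mutant: '>=' deliberately changed to '<='
    return a if a <= b else b

# The test data is adequate only if some test kills the mutant.
for a, b in [(1, 2), (2, 1), (3, 3)]:
    if max_of(a, b) != max_of_mutant(a, b):
        print(f"mutant killed by test ({a}, {b})")
```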

Alpha testing

Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing

Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.