Tuesday, October 28, 2008

Introduction to Software Testing - Part 1

Testing involves operating a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should. It is oriented to 'detection'.

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

Life Cycle of Testing Process

The following are some of the steps to consider:
•Obtain requirements, functional design, and internal design specifications and other necessary documents
•Obtain schedule requirements
•Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
•Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
•Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
•Determine test environment requirements (hardware, software, communications, etc.)
•Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
•Determine test input data requirements
•Identify tasks, those responsible for tasks
•Set schedule estimates, timelines, milestones
•Determine input equivalence classes, boundary value analyses, error classes
•Prepare test plan document and have needed reviews/approvals
•Write test cases
•Have needed reviews/inspections/approvals of test cases
•Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
•Obtain and install software releases
•Perform tests
•Evaluate and report results
•Track problems/bugs and fixes
•Retest as needed
•Maintain and update test plans, test cases, test environment, and testware through life cycle
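The step above on input equivalence classes and boundary value analysis can be made concrete with a small sketch. Everything below is invented for illustration: it assumes a hypothetical requirement that an age field accept integers from 18 to 65 inclusive.

```python
# Hypothetical validator: accepts ages 18-65 inclusive (an assumed requirement).
def is_valid_age(age):
    return isinstance(age, int) and 18 <= age <= 65

# Equivalence classes: one representative value per class is usually enough.
valid_class = [30]     # inside the valid range
invalid_low = [10]     # below the range
invalid_high = [80]    # above the range

# Boundary value analysis: test just below, on, and just above each boundary.
boundaries = [17, 18, 19, 64, 65, 66]

results = {age: is_valid_age(age)
           for age in valid_class + invalid_low + invalid_high + boundaries}
```

The point of the partitioning is economy: one value from each equivalence class, plus the boundary neighbors, gives good coverage without testing every possible input.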

Levels of Testing

Following are the various levels of testing:

1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

Now let's see what each of the above levels means:

1. Unit Testing:

The most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
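As a sketch, here is what a programmer-written unit test might look like using Python's built-in unittest framework; the word_count function is a toy module invented for the example, not something from a real codebase.

```python
import unittest

# A toy module under test: a hypothetical word_count helper.
def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

# The programmer tests the function's contract at the 'micro' scale:
# normal input, the empty string, and irregular whitespace.
class WordCountTest(unittest.TestCase):
    def test_normal_sentence(self):
        self.assertEqual(word_count("software testing is fun"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)
```

A suite like this would typically be run with `python -m unittest` as part of the build.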

2. Integration Testing

Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Integration can be top-down or bottom-up:

• Top-down testing starts with main and successively replaces stubs with the real modules.
• Bottom-up testing builds larger module assemblies from primitive modules.
• Sandwich testing is mainly top-down, with bottom-up integration and testing applied to certain widely used components.
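A minimal top-down sketch in Python: the high-level module is tested first against a hand-written stub, then the stub is replaced with the real module and the same test is re-run. All module and function names here are hypothetical.

```python
# Top-down integration: a stub stands in for the not-yet-integrated
# database layer while the high-level report module is tested.

class DatabaseStub:
    """Stub that returns canned data instead of hitting a real database."""
    def fetch_orders(self, customer_id):
        return [{"id": 1, "total": 25.0}, {"id": 2, "total": 75.0}]

class RealDatabase:
    """The real module that later replaces the stub."""
    def __init__(self, orders):
        self._orders = orders
    def fetch_orders(self, customer_id):
        return self._orders.get(customer_id, [])

def order_summary(db, customer_id):
    """High-level module under test: works against either implementation."""
    orders = db.fetch_orders(customer_id)
    return {"count": len(orders), "total": sum(o["total"] for o in orders)}

# Phase 1: exercise the top module against the stub.
stub_result = order_summary(DatabaseStub(), customer_id=42)

# Phase 2: swap in the real module and re-run the same check.
real_db = RealDatabase({42: [{"id": 1, "total": 25.0},
                             {"id": 2, "total": 75.0}]})
real_result = order_summary(real_db, customer_id=42)
```

Because both implementations share the same interface, the same test exercises the combined parts before and after the real module is integrated.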

3. System Testing

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s). Integration testing detects inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.

4. Acceptance Testing

Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

User Acceptance Testing is often the final step before rolling out the application. Usually the end users who will be using the application test it before 'accepting' it.

This type of testing gives the end users the confidence that the application being delivered to them meets their requirements. This testing also helps nail bugs related to usability of the application.

Types of Testing

There are various types of testing performed on software to ensure better quality. Below are some of the testing types:

Incremental integration testing

Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Sanity testing

Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
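A sanity (or smoke) check can often be automated as a short script that is run before the main test effort begins. The sketch below assumes a hypothetical application object with start(), is_responsive(), and stop() methods; a stand-in class is included so the example is self-contained.

```python
class App:
    """Stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def is_responsive(self):
        return self.running
    def stop(self):
        self.running = False

def sanity_check(app):
    """Return True only if the build is stable enough for real testing.

    Any crash during startup, or an unresponsive application, rejects
    the build before the major testing effort is spent on it.
    """
    try:
        app.start()
        return app.is_responsive()
    except Exception:
        return False
    finally:
        app.stop()

build_ok = sanity_check(App())
```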

Compatibility testing

Testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing

Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing

Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Comparison testing

Comparing software weaknesses and strengths to competing products.

Load testing

Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
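A load test can be sketched as a loop that ramps up concurrency and records response times at each step. The "endpoint" below is a local stub whose response time grows with load, standing in for a real web site; in practice the requests would go over the network.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(active_users):
    """Stub endpoint: response time degrades as concurrent load grows."""
    delay = 0.001 * active_users
    time.sleep(delay)
    return delay

def measure(load):
    """Send `load` simultaneous requests; return the worst response time."""
    with ThreadPoolExecutor(max_workers=load) as pool:
        times = list(pool.map(handle_request, [load] * load))
    return max(times)

# Ramp the load and observe where response time starts to degrade.
profile = {load: measure(load) for load in (1, 10, 50)}
```

Plotting the resulting profile (load versus worst-case response time) shows the point at which the system's response time degrades or the system fails.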

System testing

Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

Functional testing

Black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

Volume testing

Volume testing involves testing a software or Web application using corner cases of "task size" or input data size. The exact volume tests performed depend on the application's functionality, its input and output mechanisms, and the technologies used to build it. Sample volume testing considerations include, but are not limited to:

If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file.

If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data.

If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request.

If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submit the form.
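The first consideration above (empty versus huge text file) can be sketched directly. The count_lines function is a hypothetical unit under test; the file sizes are scaled down so the example runs quickly, where a real volume test would use hundreds of megabytes.

```python
import os
import tempfile

def count_lines(path):
    """Unit under test: counts lines without loading the file into memory."""
    count = 0
    with open(path, "r") as f:
        for _ in f:
            count += 1
    return count

def make_file(lines):
    """Create a temporary text file with the given number of lines."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w") as f:
        for i in range(lines):
            f.write(f"line {i}\n")
    return path

empty_file = make_file(0)        # corner case: empty input
big_file = make_file(100_000)    # scaled-down "huge" input

empty_count = count_lines(empty_file)
big_count = count_lines(big_file)

os.remove(empty_file)
os.remove(big_file)
```

The test passes only if the application handles both extremes without crashing or producing wrong counts.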

Stress testing

Term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Sociability Testing

This means that you test an application in its normal environment, along with other standard applications, to make sure they all get along together; that is, that they don't corrupt each other's files, they don't crash, they don't monopolize system resources, they don't lock up the system, they can share the printer peacefully, etc.

Usability testing

Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Recovery testing

Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing

Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

Performance Testing

Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

End-to-end testing

Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Regression testing

Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
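A common regression pattern: when a bug is fixed, a test for it is added to the automated suite and re-run after every subsequent change, so the defect is caught if it ever reappears. All names in this sketch are illustrative.

```python
def normalize_phone(raw):
    """Strip non-digit characters from a phone number.

    An earlier (hypothetical) version crashed on None input; the guard
    below is the fix, protected by a regression test.
    """
    if raw is None:   # the bug fix under regression protection
        return ""
    return "".join(ch for ch in raw if ch.isdigit())

# Each entry: (function, arguments, expected result). The None case was
# added to the suite at the time the bug was fixed.
regression_suite = [
    (normalize_phone, ("(555) 123-4567",), "5551234567"),
    (normalize_phone, (None,), ""),
]

failures = [(func.__name__, args)
            for func, args, expected in regression_suite
            if func(*args) != expected]
```

An empty failures list means no old defect has resurfaced; this is exactly the kind of repetitive re-run that automated testing tools handle well.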

Parallel testing

With parallel testing, users can choose to run batch tests or asynchronous tests depending on the needs of their test systems. Testing multiple units in parallel increases test throughput and lowers a manufacturer's cost of test.

Install/uninstall testing

Testing of full, partial, or upgrade install/uninstall processes.

Mutation testing

A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
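The idea can be sketched by planting a single "mutant" by hand and checking whether the existing test data detects it. Real mutation tools rewrite the code automatically and generate many mutants, which is where the large computational cost comes from; the functions and test data below are invented for the example.

```python
def maximum(a, b):
    """Original implementation under test."""
    return a if a >= b else b

def maximum_mutant(a, b):
    """Deliberately injected bug: the comparison operator is flipped."""
    return a if a <= b else b

# The existing test data whose usefulness we want to evaluate.
test_data = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def run_tests(func):
    """Return True if every test case passes for the given implementation."""
    return all(func(*args) == expected for args, expected in test_data)

original_passes = run_tests(maximum)
# Good test data "kills" the mutant, i.e. at least one case fails on it.
mutant_killed = not run_tests(maximum_mutant)
```

If a mutant survives (all tests still pass), the test data is too weak to detect that class of bug and should be strengthened.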

Alpha testing

Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing

Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
