Entry / Exit Criteria in Software Testing

Entry and exit criteria can be defined for testing just as they are defined for the other phases in the life cycle of a software product.
Entry and exit criteria are a must for the success of any project: if you do not know where to start and where to finish, your goals are not clear.
The entry criteria are specific, measurable conditions that must be met before the process can be started. Similarly, the exit criteria are specific, measurable conditions that must be met before the process can be completed.
[Source: The Certified Software Quality Engineer Handbook by Linda Westfall, Software Testing by S. Koirala, S. Sheikh]
Below are examples of entry and exit criteria.

Example 1:
- 100% statement, branch, or decision coverage by the executed tests. This ensures technical coverage and identifies the areas of the software that have not yet been executed.
- reaching a specific detection ratio for new defects per period of time; this ratio must be a function of the test effort and of the severity of the detected defects;
- the defects detected at this level correspond to the types of defects expected. This confirms the maturity of the design and test processes preceding the current phase, and also enables us to determine the maturity of the tested product.
- all tests planned for this test level have been successfully executed.
- all tests for "catastrophic" and "critical" (or even "marginal") integrity levels have been designed, implemented and executed successfully on the last version of the software or system.
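As a sketch, exit criteria like those above can be evaluated mechanically at the end of a test cycle. The function, thresholds, and field names below are illustrative assumptions, not values taken from any of the cited books:

```python
# Illustrative exit-criteria gate: the criteria mirror the list above
# (coverage, planned tests executed, critical defects closed), but the
# function name and thresholds are assumptions for this sketch.

def exit_criteria_met(stmt_coverage, branch_coverage, tests_passed,
                      tests_planned, open_critical_defects):
    """Return (met, reasons): met is True only if every criterion holds."""
    reasons = []
    if stmt_coverage < 1.0:            # 100% statement coverage required
        reasons.append("statement coverage below 100%")
    if branch_coverage < 1.0:          # 100% branch/decision coverage required
        reasons.append("branch coverage below 100%")
    if tests_passed < tests_planned:   # all planned tests executed successfully
        reasons.append("not all planned tests passed")
    if open_critical_defects > 0:      # no open catastrophic/critical defects
        reasons.append("critical defects still open")
    return (not reasons), reasons

met, why = exit_criteria_met(1.0, 0.95, 480, 500, 2)
# met is False; why lists each unmet criterion
```

A gate like this makes the criteria auditable: the release decision is a boolean plus a list of reasons, rather than a judgment call made under deadline pressure.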
Some exit criteria that should be avoided:
Stopping testing when the planned test termination date is reached.
Stopping testing when the planned test effort has been reached.
Stopping when all the test cases have been executed without finding new defects.
Potential exit criteria for a system test level as provided by Rex Black:
No modification (design, code or characteristics) during the last 3 weeks, except to deal with the defects identified during system tests;
No stopping, crash, or inexplicable end of processes on any software servers or systems during the last 3 weeks;
No customer systems have become unstable or unusable following an installation failure during system testing;
The testing team has run all tests planned on the delivery candidate version of the software;
The development team has solved all the “to be fixed” defects, as planned by sales, marketing or customer services;
[Source: Fundamentals of Software Testing by Bernard Homès]
Entry criterion: the customer should provide a requirements document or an acceptance test plan.
Exit criterion: the customer has successfully executed the acceptance test plan.
[Source: Software Testing by S. Koirala, S. Sheikh]
Performance tests require a stable product because of the complexity and accuracy they demand. Changes to the product affect performance numbers and may mean that the tests have to be repeated. It would be counter-productive to execute performance test cases before the product is stable or while changes are being made. Hence, performance test execution normally starts only after the product meets a set of criteria. These criteria are defined well in advance and documented as part of the performance test plan. Similarly, a set of exit criteria is defined for concluding the results of performance tests.
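One way to operationalize the stability requirement above is a simple entry gate checked before each performance run. The freeze window, pass-rate threshold, and function name below are hypothetical, standing in for whatever a real performance test plan would specify:

```python
# Hypothetical entry gate for a performance test cycle. The 14-day code
# freeze and 98% functional pass rate are illustrative values, not numbers
# from the cited source.
from datetime import date, timedelta

def ready_for_performance_tests(last_code_change, functional_pass_rate,
                                today, freeze_days=14, min_pass_rate=0.98):
    """Treat the product as stable enough for performance testing only if
    the code has been frozen for `freeze_days` and functional tests pass
    at a rate of at least `min_pass_rate`."""
    frozen = (today - last_code_change) >= timedelta(days=freeze_days)
    return frozen and functional_pass_rate >= min_pass_rate

# A build changed 3 days ago fails the gate even with a high pass rate:
print(ready_for_performance_tests(date(2024, 5, 10), 0.99,
                                  today=date(2024, 5, 13)))   # → False
```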
[Source: Software Testing: Principles and Practice by Srinivasan Desikan, Gopalaswamy Ramesh]
Some of the entry criteria for system test may be:
Definition of all test scenarios and test cases is completed.
All open queries have been clarified.
Availability of hardware and software for initiating testing is ensured.
All review comments and unit testing defects are closed.
Installers are available if an application needs installations.
Installation testing, if applicable, is successful.
Smoke testing/sanity testing is successful.
Some of the exit criteria may be defined as follows:
The number of defects found with respect to the targeted number of defects. If the target is achieved at the end of the testing iterations, one may declare that testing is completed successfully.
Coverage in terms of requirements, functionalities and features is achieved as defined in the test plan. Declaring adequate coverage requires completing the requirements traceability matrix with test results.
Test cases defined in the test suite are completed.
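The requirements traceability matrix mentioned above can be thought of as a mapping from requirement IDs to the results of the tests that cover them. A sketch of computing coverage from such a matrix (the data shape and IDs here are assumptions for illustration):

```python
# Sketch of computing requirements coverage from a traceability matrix.
# The matrix shape (requirement ID -> list of test outcomes) and the
# REQ-xxx identifiers are illustrative assumptions.

rtm = {
    "REQ-001": ["pass", "pass"],
    "REQ-002": ["pass", "fail"],
    "REQ-003": [],                 # no test mapped yet: a coverage gap
}

def rtm_coverage(matrix):
    """Fraction of requirements with at least one passing test."""
    covered = sum(1 for results in matrix.values() if "pass" in results)
    return covered / len(matrix)

def coverage_gaps(matrix):
    """Requirements with no mapped test, or no passing test."""
    return sorted(r for r, results in matrix.items() if "pass" not in results)

print(round(rtm_coverage(rtm), 2))   # → 0.67
print(coverage_gaps(rtm))            # → ['REQ-003']
```

An exit review can then compare `rtm_coverage` against the target defined in the test plan and use `coverage_gaps` to list exactly which requirements still block the exit criterion.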
[Source: Software Quality Assurance by Milind Limaye]
A few more examples of entry and exit criteria:
[Source: Achieving Software Quality Through Teamwork by Isabel Evans]
Are the necessary documentation, design and requirements information available that will allow testers to operate the system and judge correct behavior?
Is the system ready for delivery, in whatever form is appropriate for the test phase in question?
Are the supporting utilities, accessories, and prerequisites available in forms that testers can use?
Is the system at the appropriate level of quality? Such a question usually implies that some or all of a previous test phase has been successfully completed, although it could refer to the extent to which code review issues have been handled. Passing a smoke test is another frequent measure of sufficient quality to enter a test phase.
Is the test environment - lab, hardware and system administration support - ready?
One exit criterion might be that all the planned test cases and the regression tests have been run.
[Source: Managing the Testing Process: Practical Tools and Techniques for Managing by Rex Black]
Integration testing entry criteria:
Integration testing exit criteria:
System testing entry criteria:
System testing exit criteria:
[Source: Critical Testing Processes: Plan, Prepare, Perform, Perfect by Rex Black]