Reliability Testing
Reliability testing is done to evaluate the product's ability to perform its required functions under stated conditions for a specified period of time or for a large number of iterations. Examples of reliability testing include querying a database continuously for 48 hours and performing login operations 10,000 times. [Source: Software Testing: Principles and Practice by Srinivasan Desikan, Gopalaswamy Ramesh]

Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method for measuring software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be applied to that data to estimate present reliability and predict future reliability. Based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. The risk of using the software can also be assessed from reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software. [Source]
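As a minimal sketch of the iteration-based example above (repeating an operation many times and treating the failure-free proportion as a simple reliability estimate), the snippet below uses a hypothetical perform_login() stand-in for the operation under test; the function name, iteration count, and failure threshold are illustrative assumptions, not part of any specific tool.

```python
import random


def perform_login() -> bool:
    """Hypothetical operation under test; returns True on success.

    In a real test this would drive the application (e.g. via its API or UI).
    """
    return random.random() > 0.0005  # placeholder for an actual login attempt


def run_reliability_test(iterations: int = 10_000) -> float:
    """Repeat the operation many times and estimate reliability from failures."""
    failures = 0
    for _ in range(iterations):
        try:
            if not perform_login():
                failures += 1
        except Exception:
            # Unexpected exceptions also count as failures.
            failures += 1
    reliability = 1 - failures / iterations
    print(f"{failures} failures in {iterations} iterations; "
          f"estimated reliability = {reliability:.4f}")
    return reliability


if __name__ == "__main__":
    run_reliability_test()
```

A duration-based test (such as the 48-hour database query example) follows the same pattern, with the loop bounded by elapsed time instead of an iteration count.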
Testing for Reliability
Testing for reliability is about exercising an application so that failures are discovered and removed before the system is deployed. Because the number of possible combinations of alternate pathways through an application is so high, it is unlikely that you can find all potential failures in a complex application. However, you can test the most likely scenarios under normal usage conditions and validate that the application provides the expected service. As time permits, you can apply more complicated tests to reveal subtler defects.
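One way to focus effort on the most likely scenarios is to drive test selection from an operational profile, choosing scenarios at random in proportion to their expected frequency in normal usage. The sketch below assumes three hypothetical scenario functions and illustrative weights; they stand in for whatever operations and usage data apply to your application.

```python
import random


# Hypothetical usage scenarios; each would exercise one path through the application.
def browse_catalog():
    pass


def search_items():
    pass


def place_order():
    pass


# Operational profile: relative frequency of each scenario in normal usage (illustrative).
OPERATIONAL_PROFILE = [
    (browse_catalog, 0.6),
    (search_items, 0.3),
    (place_order, 0.1),
]


def run_profile_driven_tests(total_runs: int = 1_000) -> dict:
    """Execute scenarios in proportion to the profile and tally failures per scenario."""
    scenarios, weights = zip(*OPERATIONAL_PROFILE)
    failures = {s.__name__: 0 for s in scenarios}
    for _ in range(total_runs):
        scenario = random.choices(scenarios, weights=weights, k=1)[0]
        try:
            scenario()
        except Exception:
            failures[scenario.__name__] += 1
    return failures


if __name__ == "__main__":
    print(run_profile_driven_tests())
```

Failure counts gathered this way can feed the reliability estimation described earlier, since the sampling already mirrors expected usage.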
This post contains a selection of testing concepts and recommendations that are especially relevant to creating reliable applications.