Tuesday, July 10, 2007

Software Test Automation

Because of shrinking schedules and fewer resources, and to survive in this competitive software industry (do more in less time), software project managers and leads are left with no choice but to introduce software test automation tools in their projects.

Software test automation is the testing of a software program by another software program.

How to Choose Automation Software?

For choosing a tool ask yourself the following questions:

Does the tool you are going to buy support the particular technology you will be working with?

What is the learning curve of the tool? Does it fit your project schedule? Will all your people be able to learn the tool within the specified time?

Will automation with this particular tool actually save you money?

Does the tool have all the features you need, and how many bugs are there in the tool itself?

And there are many more things you need to look into depending on the kind of work you are automating.

If you ask which tests should be automated, consider automating tests of a repeatable nature: tests you have to run time and again with little or no modification, regression tests, performance tests, smoke tests, non-user-interface functions, and tests that have to be run on different operating systems. Automating even small portions of a test can also help speed up execution time.

Cost of Automation:

First and foremost is the cost of buying the automation software itself.

When we start automation there is always a need for extra effort to set up and maintain the automation framework.

In automation it is comparatively difficult to discover and recover from errors; in other words, error discovery and recovery are easier in manual testing than in automation.

Sometimes, instead of reducing cost, automation introduces more of it. It may reduce certain costs of operational processes, but at the same time it increases other costs, such as the cost of setting up and maintaining the automation process and the cost of managing automation failures.

For automation, companies need more skilled people (automation experts, we could say) to design and implement automated test cases and to develop the automation environment. If automation experts are not available in the market, companies need to train their own manual testers or developers in automation, which obviously increases the company's investment. Some people say these are fixed costs that can be recovered as time passes, but what happens if the process being automated has a limited lifetime?

There will also be costs like the cost of maintaining the automation as the product changes, and the cost of any new tasks directly or indirectly associated with the introduction of automation.

Points to be considered during automation:

We have to see which tests CAN be automated and which tests CANNOT. Most of the time, effort is spent automating tests that should not be automated. We need to understand that not everything can be automated, and we should not even try. The best candidates for automation are regression tests, any tests that have to be executed several times, tests that are very expensive to run manually, smoke tests, etc.

One of the most commonly seen problems is that testing-tool training is often given late in the SDLC. Training on the testing tool should be given early in the SDLC so that any issues with the tool can be brought to light early and resolved as soon as possible; otherwise it will be too late to take care of them at later stages of the development life cycle. And sometimes the learning curve of the testing tool is too long to extract any benefit from it if it is introduced at a later stage.

There should be a proper process or strategy in place that describes the steps involved in using the tool effectively and productively and that makes implementing the tool easy. Lacking this, you end up with test cases that are good for one run only (they cannot accommodate software changes). The project schedule then slips, the delays drive up costs, and, most importantly, the product loses its market. So proper analysis of the automation tool is essential, and time should be spent on its introduction to make it a long-term success. You can't build test suites that will survive and remain useful in the next release without clear and realistic planning.

There is a need to define clear and concise goals for test automation; without them there will be frustration and dissatisfaction. There should be agreement on the test requirements for automation. There are several reasons for automating: to speed up test execution, to improve test coverage, to reduce cost, to make testing more reliable. The development, management, and testing staff may have different objectives, and these objectives must be brought into some kind of agreement for automation to succeed. The basic idea is that we don't have to try to automate each and every part of the application; for example, automate the execution and do the verification manually (or vice versa), or focus on the most mundane tasks and set the requirements appropriately for those. Putting these requirements in writing forces the goals of the different parties to converge, leading towards a successful automation project.

There is a need to examine and evaluate the existing testing and development processes in the organization. This analysis is essential to determine whether the testing process fulfills the following criteria:

The objectives of testing have to be properly defined.

A testing approach/strategy has to be in place.

There should be proper communication and documentation of the testing process.

The process of testing should be measured and audited effectively.

If possible, the test team should be involved from the beginning of the SDLC.

Evaluating the organization's test process helps to clarify the goals and objectives of testing. The items above are the foundation for better test automation.

There are different phases of the software development life cycle, like analysis, design, and testing, and most of the time a different tool is used during each phase: one tool for design, a different tool for test management (like Extended Test Plan, Trac, QADirector, TestDirector), and a different tool for business analysis. There is always a need for the output of one tool to be fed as input to another, but since different tools belong to different vendors, they sometimes don't communicate, or we have to write a lot of code to make them compatible. This needs to be taken care of at the very beginning of the project.
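As a rough illustration of the glue code this can require, here is a minimal Python sketch that converts a hypothetical CSV export of test results from one tool into a hypothetical JSON import format for another. Both file formats and all field names are invented for the example; real tools will have their own schemas.

import csv
import io
import json

# Hypothetical CSV export produced by "tool A"; embedded here as a
# string so the sketch is self-contained and runnable.
TOOL_A_EXPORT = """\
test_id,test_name,result
TC-1,login works,PASS
TC-2,logout works,FAIL
"""

def convert(csv_text):
    """Reshape tool A's rows into the structure that a hypothetical
    'tool B' expects to import as JSON."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return {
        "results": [
            {"id": r["test_id"], "name": r["test_name"], "status": r["result"].lower()}
            for r in rows
        ]
    }

if __name__ == "__main__":
    print(json.dumps(convert(TOOL_A_EXPORT), indent=2))

Even a small converter like this has to be maintained whenever either vendor changes its format, which is why tool compatibility is worth checking before the project starts.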

Stress should also be laid on documenting the testing process so that it can be communicated to others. Obviously, an undocumented and uncommunicated test process cannot be implemented. Documenting the testing process also makes it possible to measure and improve it; otherwise, how could you even think of doing that?

Manual testing can adjust to changes and deal with complications more easily than automation can. While designing test cases we need to think about the future: test cases and test designs should be maintained and expanded so that they remain functional and relevant across new releases of the build. Consider the typical loop: the developer delivers the code; the testers create automated and manual tests; with the help of those tests they find bugs and report problems with the program's UI; the developer makes the necessary changes to the build; but now the test cases have to be revised and updated because the user interface has changed. This often becomes a loop, increasing the effort spent on testing, extending the testing period, and multiplying the cost of testing. Developers cannot be blamed for this or told that they should lock the external design before writing any code. Instead, the automation process should be effective enough to deal with late changes: we should design a series of tests that are easy to maintain. The best example of this is data-driven design, which reduces the maintenance effort:


(Many thanks to CEM KANER, http://www.kaner.com/, for allowing me to use the example below.)

A Calendar-Based Example of Software Architecture

Imagine testing a program that creates calendars like the ones you can buy in bookstores. Here are a few of the variables that the program would have to manage:


The calendar might be printed in portrait (page is taller than wide) or landscape orientation.

The calendar might have a large picture at the top or bottom or side of the page.

If there is a picture, it might be in TIF, PCX, JPEG, or other graphics format.

The picture might be located anywhere on the local drive or the network.

The calendar prints the month above, below, or beside the table that shows the 30 (or so) days of the month.

Which month?

What language (English?, French?, Japanese?) are we using to name the month and weekdays?

The month is printed in a typeface (any one available under Windows), a style (normal, italic, bold, etc.), a size (10 points? 72 points?).

The calendar might show 7 or 5 or some other number of days per week.

The first day of the week might be Sunday, Monday, or any other day.

The first day of the month might be any day of the week.

The days of the week might run from left to right or from top to bottom.

The day names are printed in a typeface, style, or size and in a language.

The days are shown in a table and are numbered (1, 2, 3, etc.). Each day is in one cell (box) in the table. The number might be anywhere (top, bottom, left, right, center) in the cell.

The cell for a day might contain a picture or text (“Mom’s birthday”) or both.

In a typical test case, we specify and print a calendar.

Suppose that we decided to create and automate a lot of calendar tests. How should we do it?

One way (the approach that seems most common in GUI regression testing) would be to code each test case independently (either by writing the code directly or via capture/replay, which writes the code for you). So, for each test, we would write code (scripts) to specify the paper orientation, the position, network location, and type of the picture, the month (font, language), and so on. If it takes 1000 lines to code up one test, it will take 100,000 lines for 100 tests and 1,000,000 lines for 1000 tests. If the user interface changes, the maintenance costs will be enormous.
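For contrast, here is a small hypothetical sketch of that hard-coded style in Python; the CalendarApp driver and all of its methods are invented stand-ins for a GUI automation layer. Every additional test case repeats nearly all of these lines with different values.

class CalendarApp:
    """Stand-in for a GUI automation driver; in a real project these
    methods would click through the application under test."""
    def set_orientation(self, value): print(f"orientation = {value}")
    def set_month(self, value): print(f"month = {value}")
    def set_picture(self, path): print(f"picture = {path}")
    def print_calendar(self): print("printing calendar")

# One hard-coded test case. Test two, test three, and so on would each
# duplicate this sequence with slightly different values, so any UI
# change has to be patched in every copy.
def test_october_landscape():
    app = CalendarApp()
    app.set_orientation("landscape")
    app.set_month("October")
    app.set_picture("pictures/costumes.jpg")
    app.print_calendar()

test_october_landscape()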

Here’s an alternative approach:

Start by creating a table. Every column covers a different calendar attribute (paper orientation, location of the graphic, month, etc.). Every row describes a single calendar. For example, the first row might specify a calendar for October, landscape orientation, with a picture of children in costumes, 7-day week, lettering in a special ghostly typeface, and so on.

For each column, write a routine that implements the choice in that column. Continuing the October example, the first column specifies the page orientation. The associated routine provides the steps necessary to set the orientation in the software under test. Another routine reads “October” in the table and sets the calendar’s month to October. Another sets the path to the picture. Perhaps the average variable requires 30 lines of code. If the program has 100 variables, there are 100 specialized routines and about 3000 lines of code.

Finally, write a control program that reads the calendar table one row at a time. For each row, it reads the cells one at a time and calls the appropriate routine for each cell.

In this setup, every row is a test case. There are perhaps 4000 lines of code counting the control program and the specialized routines. To add a test case, add a row. Once the structure is in place, additional test cases require 0 (zero) additional lines of code.
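To make this concrete, here is a minimal Python sketch of such a table-driven design, assuming a tiny three-column table. The column names, the set_* routines, and the print statements standing in for driving the software under test are all hypothetical.

import csv
import io

# Hypothetical column-specific routines: each knows how to apply one
# calendar attribute to the software under test. Here they just print;
# in a real harness they would drive the application's UI or API.
def set_month(value):
    print(f"  set month to {value}")

def set_orientation(value):
    print(f"  set page orientation to {value}")

def set_picture_path(value):
    print(f"  load picture from {value}")

# One entry per table column: column name -> routine.
ROUTINES = {
    "month": set_month,
    "orientation": set_orientation,
    "picture_path": set_picture_path,
}

# The test table: every row describes one calendar, i.e. one test case.
# Adding a test case means adding a row, not writing new code.
TEST_TABLE = """\
month,orientation,picture_path
October,landscape,pictures/costumes.jpg
January,portrait,pictures/snow.jpg
"""

def run_tests(table_text):
    """Control program: read the table one row at a time and, for each
    cell, call the routine registered for that column."""
    reader = csv.DictReader(io.StringIO(table_text))
    for number, row in enumerate(reader, start=1):
        print(f"--- test case {number} ---")
        for column, value in row.items():
            ROUTINES[column](value)
        # Here the harness would print the calendar and verify the output.

if __name__ == "__main__":
    run_tests(TEST_TABLE)

If the user interface changes, only the affected set_* routine changes; the table of test cases stays untouched.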

Note that the table that describes calendars (one calendar per row) is independent of the software under test. We could use these descriptions to test any calendar-making program. Changes in the software user interface won’t force maintenance of these tests.

The column-specific routines will change whenever the user interface changes. But the changes are limited. If the command sequence required for printing changes, for example, we change the routine that takes care of printing. We change it once and it applies to every test case.

Finally, the control program (the interpreter) ties table entries to column-specific routines. It is independent of the calendar designs and the design of the software under test. But if the scripting language changes or if we use a new spreadsheet to store the test cases, this is the main routine that will change.

In sum, this example reduces the amount of repetitious code and isolates different aspects of test descriptions in a way that minimizes the impact of changes in the software under test, the subject matter (calendars) of the tests, and the scripting language used to define the test cases. The design is optimized for maintainability.

           | Which month | Paper orientation | Location of graphic | Path of the image | Which language | ...
Calendar 1 |     ...     |        ...        |         ...         |        ...        |      ...       | ...
Calendar 2 |     ...     |        ...        |         ...         |        ...        |      ...       | ...
...        |     ...     |        ...        |         ...         |        ...        |      ...       | ...
Calendar n |     ...     |        ...        |         ...         |        ...        |      ...       | ...

If each variable (which month, paper orientation, language, and so on) requires about 30 lines of code and there are 100 variables (columns), the column-specific routines add up to about 3000 lines of code in total.


For now, this is more than enough to digest on software test automation. Below are the links (I will be adding more) where you can find tons of articles on software test automation.