Software Engineering

The various components associated with the process of software engineering have already been discussed at length. Also, in the previous unit, we came to know about some basic concepts of the software design process, such as architectural design, low-level design, pseudocode, flow charts, and coupling and cohesion measures. Apart from that, a few other topics relating to software measurement and metrics, which enable us to gain insight by providing a mechanism for objective evaluation, were also discussed at length. In this unit, we are introduced to the concept of software testing, a very important phase in any software development process, needed to ensure the successful production of a fully functional and error-free final product.

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation. The increasing visibility of software as a system element and the attendant "costs" associated with a software failure are motivating forces for well-planned, thorough testing. In this unit we discuss software testing fundamentals and techniques for software test case design. Software testing fundamentals define the overriding objectives for software testing.
Testing presents an interesting anomaly for the software engineer. During the earlier software engineering activities, the engineer attempts to build software from an abstract concept to a tangible product. Then comes the testing process where the engineer creates a series of test cases that are intended to "demolish" the software that has been built. In fact, testing is the one step in the software process that could be viewed as destructive rather than constructive. Software engineers are by their nature constructive people. Testing requires that the developer discard preconceived notions of the "correctness" of software just developed and overcome a conflict of interest that occurs when errors are uncovered. Testing is intended to show that a program does what it is intended to do and to discover program defects before it is put into use. When we test software, we execute a program using artificial data. We check the results of the test run for errors, anomalies, or information about the program's non-functional attributes.
The testing process has two distinct goals:
1. To demonstrate to the developer and the customer that the software meets its requirements. For custom software, this means that there should be at least one test for every requirement in the requirements document. For generic software products, it means that there should be tests for all of the system features, plus combinations of these features, that will be incorporated in the product release.

2. To discover situations in which the behavior of the software is incorrect, undesirable, or does not conform to its specification. These are a consequence of software defects. Defect testing is concerned with rooting out undesirable system behavior such as system crashes, unwanted interactions with other systems, incorrect computations, and data corruption.

These objectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort. If testing is conducted successfully, it will uncover errors in the software. As a secondary benefit, testing demonstrates that software functions appear to be working according to specification, and that behavioral and performance requirements appear to have been met. In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. But testing can show only that software errors and defects are present; it cannot show the absence of errors and defects. It is important to keep this statement in mind as testing is being conducted.
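The two goals can be made concrete with a small sketch. Here, a hypothetical function `word_count` (an assumption made purely for illustration, as is the requirement it satisfies) is exercised first by a validation test tied to a requirement, then by defect tests that probe for incorrect behavior on unusual but plausible inputs:

```python
# Hypothetical function under test: count the words in a line of text.
# The function and the requirement it satisfies are assumptions for illustration.
def word_count(line):
    return len(line.split())

# Goal 1 (validation testing): demonstrate that a stated requirement is met,
# e.g. "the system shall report the number of whitespace-separated words".
assert word_count("to be or not to be") == 6

# Goal 2 (defect testing): probe for incorrect or undesirable behavior
# using inputs a user might still plausibly supply.
assert word_count("") == 0                 # empty input must not crash
assert word_count("  spaced   out ") == 2  # runs of whitespace are collapsed
print("all tests passed")
```

Note that even when every assertion above passes, we have shown only that no defect was revealed by these particular inputs, not that the function is defect-free.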
Unit testing is the process of testing program components, such as methods or object classes. Individual functions or methods are the simplest type of component. Unit testing focuses verification effort on the smallest unit of software design: the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing.
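As a minimal sketch of testing the smallest unit in isolation, consider a hypothetical component `clamp` (the function and its contract are assumptions for illustration). Each test exercises the component only through its own interface, independent of the rest of the system:

```python
# Hypothetical component under test: clamp a value into the range [low, high].
# The function and its contract are assumptions made for illustration.
def clamp(value, low, high):
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Unit tests: each exercises the component only through its public interface.
def test_value_inside_range_is_unchanged():
    assert clamp(5, 0, 10) == 5

def test_value_outside_range_is_pulled_to_nearest_bound():
    assert clamp(-3, 0, 10) == 0
    assert clamp(99, 0, 10) == 10

def test_invalid_range_is_rejected():
    try:
        clamp(5, 10, 0)
    except ValueError:
        return  # expected error-handling path was taken
    raise AssertionError("expected ValueError for an inverted range")

# Run the unit tests for this single module in isolation.
for test in (test_value_inside_range_is_unchanged,
             test_value_outside_range_is_pulled_to_nearest_bound,
             test_invalid_range_is_rejected):
    test()
```

In practice such tests would be collected by a framework such as the standard `unittest` module, but the principle is the same: the module is verified alone, within its own boundary.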
UNIT TEST CONSIDERATIONS
The tests that occur as part of unit testing are illustrated schematically. The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the limits established to restrict processing. Finally, all error-handling paths are tested. Tests of data flow across a module interface are required before any other test is initiated; if data do not enter and exit properly, all other tests are doubtful. In addition, local data structures should be exercised and the local impact on global data should be ascertained during unit testing.

Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, improper comparisons, or improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors. Among the more common errors in computation are
1. misunderstood or incorrect arithmetic precedence,
2. mixed-mode operations,
3. incorrect initialization,
4. precision inaccuracy, and
5. incorrect symbolic representation of an expression.

Comparison and control flow are closely coupled to one another (i.e., a change of flow frequently occurs after a comparison). Test cases should uncover errors such as
1. comparison of different data types,
2. incorrect logical operators or precedence,
3. expectation of equality when precision error makes equality unlikely,
4. incorrect comparison of variables,
5. improper or nonexistent loop termination,
6. failure to exit when divergent iteration is encountered, and
7. improperly modified loop variables.

Good design dictates that error conditions be anticipated and error-handling paths set up to reroute or cleanly terminate processing when an error does occur. Unfortunately, there is a tendency to incorporate error handling into software and then never test it. Among the potential errors that should be tested when error handling is evaluated are:
1. Error description is unintelligible.
2. Error noted does not correspond to error encountered.
3. Error condition causes system intervention prior to error handling.
4. Exception-condition processing is incorrect.
5. Error description does not provide enough information to assist in locating the cause of the error.

Boundary testing is the last (and probably most important) task of the unit test step. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered. Test cases that exercise data structure, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.
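A short sketch shows how boundary test cases are chosen just below, at, and just above the limits a module enforces. The function `average` and its limit `MAX_ITEMS` are hypothetical names introduced only for this example:

```python
# Hypothetical module under test: average a non-empty list of at most
# MAX_ITEMS readings. The function and the limit are assumptions for
# illustration.
MAX_ITEMS = 100

def average(readings):
    if not 1 <= len(readings) <= MAX_ITEMS:
        raise ValueError("expected between 1 and %d readings" % MAX_ITEMS)
    return sum(readings) / len(readings)

def expect_error(items):
    """True if average() rejects the input via its error-handling path."""
    try:
        average(items)
    except ValueError:
        return True
    return False

# Boundary test cases: just below, at, and just above each limit.
assert expect_error([])                       # just below the minimum (0 items)
assert average([4.0]) == 4.0                  # at the minimum (1 item)
assert average([1.0] * MAX_ITEMS) == 1.0      # at the maximum (100 items)
assert expect_error([1.0] * (MAX_ITEMS + 1))  # just above the maximum (101 items)
```

Each assertion targets one side of a boundary; a module that passed only "typical" mid-range inputs could still fail every one of these cases.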