Documentation and Acceptance Testing
Documentation
Many IS organizations face the ongoing challenge of producing good product documentation. Documentation has classically been defined as having two components: user documentation and technical documentation. User documentation consists of the instructions users need to operate and maintain the system. Technical documentation, on the other hand, contains detailed information about the inner components of the product itself; it should be designed to give developers the ability to support and maintain the system from a programming and engineering perspective.
Once analysis and design are completed, user documentation can be developed in parallel with the rest of the product life cycle components. This means that the screens and reports from the design phase can be used to build the documentation of the system's inputs, queries, and output reports. If the software is a GUI product, the user documentation must also conform to the standard Help facility included in such products. Although analysts may be involved in providing information to the documentation team, it is not their responsibility to produce user documentation.
A major part of the technical documentation should be the product of the analyst’s work. All of the tools used by the analyst to formulate the logical equivalent must remain as the schematic or blueprint of the product. It is not advisable to try to produce additional documentation of the analysis: first, there is rarely enough time, and second, it should not be necessary. Remember, the concept of using modeling tools was compared to the creation and maintenance of an architect’s blueprint, in which the schematic had to be self-documenting. There are, however, other components of technical documentation that relate to the physical software development itself. Programming source code, product libraries, and version control are examples of technical product documentation that should be the responsibility of the programming team.
Acceptance Test Plans
Acceptance Test Plans can be defined as the set of tests that, if passed, will establish that the software can be used in production. Acceptance tests need to be established early in the product life cycle and should begin during the analysis phase. It is only logical, then, that the development of acceptance test plans should involve analysts. As with requirements development, the analyst must participate with the user community. Only users can make the final decision about the content and scope of the test plans. The design and development of acceptance test plans should not be confused with the testing phase of the software development life cycle. Testing should be defined as the carrying out or execution of the acceptance test plans themselves.
The analysis and design of acceptance test plans is often overlooked in many IS organizations. This is because it is viewed inappropriately as a testing method rather than as a way of developing better systems. The question then is: Why and how do acceptance test plans improve software quality?
Quality During Analysis
If acceptance test planning is conducted as a step in analysis, then the issue of how best to test the requirements becomes part of making decisions about overall system requirements. Specifically, if users want something that will be difficult to test and maintain, they may be forced to rethink the requirement and alter its focus. What better time to do this than when the requirement itself is being discussed? Remember, a strong part of quality software is how easy that software is to maintain.
How Much Can Be Tested?
One must work with the understanding that no new product will ever be fault-free. The permutations of testing everything would make the timetable for completion unacceptable and the costs prohibitive. The acceptance test plan is a strategy to get the most important components tested completely enough for production. The testing of software can be compared to the auditing of a company. Accounting firms that audit a public company must sign a statement that the books and records of their client are materially correct, meaning that there are no significant discrepancies in the stated numbers. Accounting firms know that they cannot test everything in their client’s books and records to be 100% confident that the numbers are correct. Therefore, auditors apply strategic methods such as statistical sampling in order to be “comfortable” that the risk of a significant discrepancy is low. Software verification is no different. Analysts and users must together decide on the minimum tests necessary to be comfortable going live with the system. It is unfair to leave this responsibility solely with the user, and having the analyst or programmer do it alone is equally unfair. The wise strategy is to have acceptance test plans developed by the analyst with input from the user and verification by programming, as follows:
1. As each part of a system is functionally decomposed using the various modeling tools, the analyst should develop generic test plans that are based on typical standard logic tests (e.g., Account_Number must be numeric); a minimal sketch of such a generic logic test appears after this list. The analyst and users should then meet to refine the test plans by focusing on how many permutations are required for each logical test. Users should also be encouraged to establish new tests that are missing from the plan. It is highly recommended that the analyst not wait for the entire analysis to be completed before beginning acceptance test plan generation. Tailoring an acceptance test plan specifically to the needs of a group of users is a good technique. This means that as the analyst completes the interviews with each user or user group, the acceptance test plans should be completed for the specifications developed for them. This approach is also good because it avoids the need to meet again and rehash old information.
2. The analyst develops the interpart tests necessary to ensure that each component links properly with the others.
3. Once the acceptance test plans are approved by users, development must design tests that focus on product-specific validation. These include operating system and program tests to ensure that the operating environment is working correctly. Users and analysts need not be involved with this step, as it represents testing from an engineering perspective. Quality assurance personnel who are part of the development team should be involved in these test designs, as these professionals are specifically trained in the intricate, mathematical aspects of finding errors. Once again, the role of quality assurance and testers should not be confused or combined with the role the analyst and users play in the design of acceptance test plans.
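To illustrate step 1, the following is a minimal sketch, in Python, of a generic field-level logic test of the kind described above. The field name Account_Number comes from the example in the text; the function name, the sample values, and the PASS/FAIL reporting are hypothetical.

def is_numeric(value: str) -> bool:
    """Generic logic test: the field must contain only digits."""
    return value.isdigit()

# Hypothetical generic test cases the analyst might draft before refining them with users.
generic_tests = [
    ("Account_Number accepts digits", "1234567", True),
    ("Account_Number rejects letters", "12A4567", False),
    ("Account_Number rejects blanks", "", False),
]

for description, sample, expected in generic_tests:
    actual = is_numeric(sample)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {description} (input={sample!r})")

The analyst and users would then decide together how many permutations of each such test are worth budgeting.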
More Efficient Development
Providing development personnel with the acceptance test plans is a somewhat controversial idea. Many IS professionals would object to doing so, arguing that the testing of software quality should not be carried out by programmers. However, programmers who are given the test plans are not being asked to do the testing but rather to understand what the user perceives as the most important operational qualities of the system. If the programmer is aware of the specific focus of the tests, then he/she should direct the development to ensure that such errors do not occur. In essence, we are trying to focus the programmer on the most important aspects of the system. To ask programmers to treat each component of a program in an equal manner is unfair, especially since they have no perspective on how to make the decision. If the programmer is especially focused on the tests, there should be fewer errors detected during the test review, thus supporting a more efficient development effort. Focusing on the tests does not suggest, however, that the programmer is free to ignore the other quality areas of the program.
Now that we have established the reasons and processes for developing acceptance test plans, the analyst must provide a format for their use. The format must be user-friendly so that users can participate. It must include a record of each iteration of the test, providing better documentation and an audit trail. The test plans should be in a machine-readable format so that changes can be made easily. Figure 12.1 contains a sample acceptance test plan.
The acceptance test plan in Figure 12.1 reflects a group of tests to be applied to the contact screen shown in Figure 12.2. This particular test plan assumes that no data is on the screen and that the operator will use only the enter key (as opposed to a mouse). Each condition to be tested and its expected result must be listed. The tester then executes each test number and fills in the results along with any comments. The test plan is then reviewed for completeness. If a test fails, it is performed again after the software is fixed. Each test iteration is part of the documentation of the testing process. Users and analysts should periodically review the test plans after the system goes live. This procedure allows the test plans to be “fine-tuned” should any critical errors occur once the system is in production.
Figure 12.1 Sample acceptance test plan.
Figure 12.2 Contact Management screen.
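As a minimal sketch of the machine-readable format discussed above, the Python record below captures a condition, its expected result, the tester's actual result and comments, and the iteration number. The class and field names are hypothetical and are not taken from Figure 12.1.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    number: int                        # test number executed by the tester
    condition: str                     # condition to be tested
    expected_result: str               # result the user expects
    actual_result: Optional[str] = None
    passed: Optional[bool] = None
    comments: str = ""

@dataclass
class TestIteration:
    iteration: int                     # each run is retained as part of the audit trail
    cases: List[TestCase] = field(default_factory=list)

# Hypothetical example: the first iteration of a plan for the contact screen.
plan = TestIteration(
    iteration=1,
    cases=[
        TestCase(1, "Press Enter on an empty Contact Name field",
                 "System displays a 'Contact Name is required' message"),
        TestCase(2, "Enter alphabetic characters in Account_Number",
                 "System rejects the entry and prompts for numeric input"),
    ],
)

Because the records are machine-readable, changes to the plan and the results of each iteration can be kept and compared over time.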
Designing acceptance test plans is not a trivial task for either the user or the analyst; however, good test plans can make the difference in the ultimate quality of the software. When GUI products are tested, the number of test iterations increases substantially. For example, the test plan in Figure 12.1 covers only data entry (as opposed to changing existing data) and only the case where the operator uses the enter key. Because this is a GUI product, operators can enter data using the enter key, a mouse, or the tab key. Therefore, the analyst will need to repeat the same test plan two more times: once for the mouse and once for the tab key.
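The following is a minimal sketch of how the same conditions might be expanded across the three input methods; the sample conditions are illustrative and not taken from Figure 12.1.

base_conditions = [
    "Press Enter on an empty Contact Name field",
    "Enter alphabetic characters in Account_Number",
]
input_methods = ["enter key", "mouse", "tab key"]

# The same condition is tested once per input method, tripling the test count.
gui_conditions = [
    f"{condition} (operator uses the {method})"
    for method in input_methods
    for condition in base_conditions
]

print(f"{len(base_conditions)} base tests become {len(gui_conditions)} GUI tests")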
Budget Process
We have continued to expand the role of the analyst in the life cycle of systems development. Analysts must develop the ability to view an upcoming project and submit an accurate budget for their time and effort. The best approach is to list the generic tasks typically performed by an analyst and then estimate the time required not only for each task, but also for each system component within the task. Below is a step-by-step process that can be followed.
Establish the Task List
The analyst should begin the budget by listing the three standard tasks:
• interviewing;
• modeling;
• acceptance test planning.
Each of these tasks will then need to be expanded depending on the scope of the project.
Interviewing
The analyst will need to assess the number of users on the project and whether there will be a need for JAD sessions. Once this schedule is put together, the analyst should arrange for pre-meetings to assess the user skill sets, as this will affect the strategy and time frame of the user interface process. Analysts should employ a weighted criterion when budgeting time. Although there is no exact science to doing this, Figure 12.3 contains a suggested template to follow.
This table does not include the specific budget for each session, as this would require the analyst to hold pre-meetings to assess the number of functions in each component and then budget the time necessary. The user interviewing tasks should take into consideration an estimate of the number of iterations; more detailed sessions can require three to four iterations before final approval is reached. The number of hours (or whatever time period is chosen) should be based on a novice user. Note that if the users are knowledgeable, there may actually be a reduction in the budgeted time, whereas amateurs (those who know a little) may substantially increase the time frame. The latter situation is due to the likelihood that amateur users will get off track and, in general, have more questions. The ultimate budget for JAD sessions must be based on a composite of the skill levels of the users involved in each session.
Figure 12.3 User interview budget worksheet.
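As a rough sketch of this weighted approach, the Python fragment below multiplies a baseline (novice) estimate by a skill-level weight and an iteration count. The weight values are illustrative assumptions, not the factors in Figure 12.3.

# Baseline hours are estimated for a novice user; skill-level weights adjust them.
skill_weights = {          # illustrative assumptions, not the worksheet's values
    "knowledgeable": 0.75, # may reduce the budgeted time
    "novice": 1.00,        # the baseline
    "amateur": 1.50,       # knows a little; tends to get off track and ask more questions
}

def interview_budget(baseline_hours: float, skill: str, iterations: int) -> float:
    """Budgeted hours = baseline hours x skill weight x expected iterations."""
    return baseline_hours * skill_weights[skill] * iterations

# Example: an 8-hour session with an amateur user, expected to take 3 iterations.
print(interview_budget(8, "amateur", 3))   # 36.0 hours

A JAD session with a mix of user levels would use a composite weight derived from the participants.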
Modeling
Once the interviewing budget has been determined and there is an established understanding of the scope, objectives, constraints and assumptions of the project, the analyst can begin to determine the time period for modeling the system requirements. Some guidelines are listed in Figure 12.4.
Analysts will need to get a sense during the pre-interviews of the likely number of processes (or data elements with respect to the repository) and how many diagrams might be included. Once this is established, then the budget can be developed. Note that the weight factors are based on the level of integrated automation. Integrated automation refers to the extent of computerization (e.g., a CASE tool) that will be used to develop the models. Non-integrated, which is the non-weighted factor, represents computerized diagramming without intelligent interfaces (which would exist with a CASE product).
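A minimal sketch of this calculation follows. The hours per diagram and the weight assumed for an integrated CASE tool are illustrative; the non-integrated factor is set to 1.00 as described above.

automation_weights = {
    "non-integrated": 1.00,   # computerized diagramming without intelligent interfaces
    "integrated CASE": 0.80,  # assumed adjustment for an integrated CASE tool
}

def modeling_budget(diagrams: int, hours_per_diagram: float, automation: str) -> float:
    """Budgeted hours = number of diagrams x hours per diagram x automation weight."""
    return diagrams * hours_per_diagram * automation_weights[automation]

# Example: 12 diagrams at 4 hours each, built with an integrated CASE tool.
print(modeling_budget(12, 4, "integrated CASE"))   # 38.4 hours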
Acceptance Test Plans
Acceptance test plan budgeting must take into consideration two key factors: the user interviews and the type of product. The user interview portion should be based on the original budget for user interviews, applying a factor derived from the modeling budget (see Figure 12.5).
Essentially, the above matrix reflects that the budgeted hours for acceptance test plans are approximately 10% of the original interview budget hours, factored by the estimated number of diagrams and the level of automation. The 10% represents the portion of the interview time that is expected to be spent discussing testing issues with the user. The final factor is for GUI products. Because of the event-driven nature of GUI screens, the number of tests to be performed increases dramatically. Many IS professionals would feel that a 2.00 factor is too low; indeed, many favor 3.00.
Figure 12.4 Modeling budget worksheet.
Figure 12.5 Acceptance test plan budget worksheet.
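A minimal sketch of the calculation described above: 10% of the interview budget hours, scaled by a modeling factor and the GUI factor. The sample numbers are illustrative assumptions.

def acceptance_test_budget(interview_hours: float,
                           modeling_factor: float,
                           gui_factor: float = 2.00) -> float:
    """10% of the interview hours, scaled by the modeling and GUI factors."""
    return interview_hours * 0.10 * modeling_factor * gui_factor

# Example: 120 interview hours, a modeling factor of 1.5, and the 2.00 GUI factor.
print(acceptance_test_budget(120, 1.5))        # 36.0 hours
print(acceptance_test_budget(120, 1.5, 3.00))  # 54.0 hours if the 3.00 factor is preferred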
Problems and Exercises
1. How does modeling provide the input for the system’s documentation?
2. What is an acceptance test plan?
3. Comment on the statement, “Cannot test 100% of everything.”
4. How does strategic testing relate to risk management?
5. What is meant by the concept of self-documentation and quality?
6. How do acceptance test plans facilitate productivity and quality of the programming phase?
7. Why is it important for the analyst to provide a budget for his/her tasks?
8. How can acceptance test plans strengthen the relationship with users?
9. Why does acceptance test planning assist the budget process?
10. When should acceptance test plans be developed and by whom?