Rick Clements
rick_clements@yahoo.com
Many new companies find themselves in the position our team was in. They have been developing a good product, but their success makes change necessary: more products must be developed in less time.
This paper looks at the process of moving from ad-hoc testing to a defined process. In an ad-hoc process, there isn't time to collect data for metrics. How can the process be improved without data or metrics? The good news is that when the problems are big, they are easy to identify.
Key words and phrases: process improvement, software / firmware testing, getting started, embedded real-time systems
Rick Clements is a Software Quality Assurance (SQA) Engineer with 17 years of experience in the software industry. He is currently an SQA engineer at FLIR Systems, Inc. He spent 7 years in SQA at the Color Printing and Imaging Division of Tektronix. In addition to SQA, he has worked in real-time systems development, device drivers and software to test hardware. He has a B.S. in Computer Engineering from The University of Michigan.
The initial process was an ad-hoc testing procedure. Testing continued for the duration of the project, in test cycles about two weeks long.
The first improvement was written test procedures in the form of a Test Item Specification (TIS). This made it possible to test the printer in a single week, leaving a week for the vendor to fix defects. It also made defects easier to recreate. It had the long-term advantage that some of the tests could be reused.
The next improvement was up-front planning, which came in two steps. The first was test plans. Test plans allowed planning of resources. Before tests were written, we could plan what areas would be tested, what areas would not be tested and the risks involved. This allowed management to accept the risks or provide additional resources.
The other up-front planning tool came later: the work breakdown structure (WBS). The WBS described each task, how long it would take, the resources required, the other tasks it depended on and how to tell when it was completed. This level of planning detail was important with multiple overlapping projects. It allowed the essential tasks to be completed on the first project, the resources to be shifted to the second project to complete its essential tasks, and the resources then to be moved back to the first project to improve coverage by completing additional tasks.
The better planning allowed better use of personnel. This was also an incremental improvement. First, test fixture operators were brought in to run tests. This freed engineers to do more planning and test design on following projects. As the process became better defined and the test operators became more experienced, they could take on additional tasks. Several of the operators became test technicians. These technicians became capable of analyzing test logs and test output, submitting some defects and flagging others for the engineer to analyze. They also began modifying some of the manual test procedures under an engineer's guidance.
There were mistakes made. One mistake was moving the writing and maintenance of the interface specification out of the QA group. Afterwards, the QA group spent more effort getting enough detail into the specification to make it testable and keeping the specification up to date. Fortunately, changes to the specification were recorded in faxes until the specification was updated. A written record like this is one benefit of working with an outside vendor.
This paper gives a chronological description of the process improvement to highlight the fact that the improvement was incremental, not a single step.
This division of Tektronix produces color printers for office, commercial art and engineering environments. Some of the print engines are developed by other companies. The print engine is the part of the system that puts marks on paper. The team discussed in this paper tests the software that controls the real-time processes in the print engine.
Each version of software arrives in the form of firmware burned into PROMs. The software is referred to as firmware because new software can't be shipped to the customer on a floppy if a problem is found after release.
This paper focuses on the team testing the print engines from OEM vendors. Their approach was somewhat different from that of the team testing internally developed print engines. For example, the team testing the internal print engine firmware could get access to sensors and could talk with the design engineers, but the specifications they got were less complete. A description of the internal print engine firmware team's approach can be found in "Mother2 and Moss: Automated Test Generation from Real-time Requirements" by Joe Maybee in the 1993 proceedings of the Pacific Northwest Software Quality Conference.
The printers were doing very well against the competition. This success brought two problems. First, selling more printers made it too expensive to send a technician to each customer to replace the PROMs containing firmware. Second, more products needed to be tested more quickly.
The process, at this time, was to push a print engine into an engineer's cubicle. Testing would continue in two-week cycles, each ending when a new firmware release was ready. Nine months later, there was a brand new printer product. This one engineer wrote the interface specification, interfaced with the vendor and did the testing. No time was allocated to write the tests before the printer arrived. This meant all the testing was done ad hoc.
Ad-hoc testing has a number of problems. One problem is that failures are sometimes hard to recreate. Without the testing sequence written down, it's easy to leave out a step that led to the failure. A second problem is that there is no test procedure to build from for the next printer. A third problem is that, with no preplanning, the testing strategy can't be optimized. All this made for a very long testing process.
This long testing cycle was something the manager very much wanted to change. The question was how. There was never time to improve the process. In fact, there wasn't an organized process. The team was mostly mechanical engineers responsible for evaluating potential print engines in different technologies. There was only one person on the team with software testing experience, and he was too busy testing to improve the process.
During a reorganization, the print engine and system software QA teams came under the same manager. He assigned an engineer from the QA team half-time to the engine firmware team to speed up the testing process. The engineers took the opportunity to design the tests before testing began.
The Test Item Specification (TIS) used in the software QA team was adopted. It had three main sections: what will be tested, a list of test cases and the test procedures. Each section could be reviewed before proceeding to the next. This allowed corrections to the earlier sections of the TIS before time was wasted on the more detailed later sections. Also, the earlier sections were shorter, making it easier to get the design engineers to take time from their schedules for a review. The TIS format is shown in Appendix 1.
The first section of the TIS described what was going to be tested. In the software QA team, this section was reviewed by the design engineer who wrote the code to be tested. Since the design engineer for the engine firmware was in a different company, this section was reviewed instead by the design engineer who was the customer for the engine firmware and by the other QA engineer.
The second section of the TIS was a list of the test cases. The test cases referenced requirements in the interface specification. The engine firmware QA team was fortunate in having one of the best specifications. This was partly because we were dealing with an external company. It was also due to the fact that we wrote the specification. This specification meant designing the test cases could go more quickly. It also meant we could tie the cases back to requirements in the specification, as sketched below.
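For illustration, this traceability can be kept as a simple mapping from case numbers to the specification sections they exercise. The case and section identifiers below follow the c#/s# convention of Appendix 1 but are invented for this sketch, not the actual project data.

    # Hypothetical sketch: tie test cases back to specification requirements.
    CASE_TO_SPEC = {
        "c1": ["s6.4.1", "s6.4.3.3.1"],  # command protocol, firmware version
        "c2": ["s6.4.1", "s6.4.3.3.1"],  # same command, with a parity error
        "c4": ["s6.4.2", "s6.4.4"],      # paper size command vs. tray contents
    }

    def uncovered(required_sections):
        """List specification sections no test case references yet."""
        covered = {s for refs in CASE_TO_SPEC.values() for s in refs}
        return sorted(set(required_sections) - covered)

    # A section in the required list but in no case flags a coverage hole.
    print(uncovered(["s6.4.1", "s6.4.2", "s6.4.4", "s6.5"]))  # -> ['s6.5']

A report like this makes it cheap to show reviewers which requirements still lack test cases before the detailed procedures are written.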
The third section of the TIS was the actual test procedures. The procedures were optimized to group tests with like configurations together. For example, detecting letter size paper, feeding the letter paper through the printer and printing on letter size paper in the correct location could all be tested with a single step. In contrast, detecting the absence and the presence of the second feeder was done in two different steps because all the single-feeder tests were grouped at the end of the procedure. The sketch below illustrates the grouping idea.
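A minimal sketch of that grouping, assuming each case is tagged with the printer configuration it needs (the cases and configuration names here are invented): sort the cases by configuration so the fixture is set up once per group rather than once per case.

    from itertools import groupby

    # Invented test cases, each tagged with the configuration it requires.
    cases = [
        ("detect letter paper",  "letter tray, second feeder installed"),
        ("feed letter paper",    "letter tray, second feeder installed"),
        ("print letter margins", "letter tray, second feeder installed"),
        ("detect feeder absent", "second feeder removed"),
        ("feed A4 paper",        "A4 tray, second feeder installed"),
    ]

    # Sort by configuration, then walk the groups; each group becomes a run
    # of procedure steps with no reconfiguration inside it.
    cases.sort(key=lambda case: case[1])
    for config, group in groupby(cases, key=lambda case: case[1]):
        print("Configure printer:", config)
        for name, _ in group:
            print("  run:", name)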
The first benefit was that the testing could be done in a single week. This gave the vendor a week to make changes before sending a new version. As the code became more stable and had fewer failures, the test cycle was reduced to two days.
A second benefit was that failure isolation was easier. Each procedure step was self-contained and written down. This meant a failure could almost always be duplicated by repeating the procedure. A few failures were timing related and hard to reproduce manually.
A third benefit came when we started the next project. We could reuse some of the work we had done on the last project. Even though the new printer was a different technology from a different company, a large part of the areas to be tested and the test cases could be used as a starting point for the new project.
The fourth benefit we hoped to get was freeing up more of the QA engineers' time by having test fixture operators run the tests. We exceeded our expectations. We designed the test procedures for a test fixture operator with a good attitude and minimal computer experience. However, we often found people who were much more capable. As the test operators became more experienced and became test technicians, the QA engineers were freed to work on additional tasks. While we had hoped that would give the QA engineers a chance to work on better tools, it more often gave the team a chance to take on more overlapping projects.
As the test operators became familiar with the tests, they could soon check the output and log files. Anything they weren't sure of or that needed investigation, they could flag for the engineer to look at. The next logical step was for them to submit the defects they found.
On the first project, the test planning had been done by the software QA team. This had one major problem: they did the test plan based on their own schedule. By the time they put the test plan together, the engine firmware QA team had finished test design and was already testing.
Having the test plan done before test development began gave us the chance to line up the necessary resources ahead of time. The biggest benefit was being able to show the need for a test operator. This would allow one of the two QA engineers to start on the next project earlier. The advantage management saw was that some of the work could be shifted to less skilled people. An example of a test plan is in Appendix 2.
The test plan described what areas would and wouldn't be tested. It listed what tests would or wouldn't be developed, compared to the TIS, which listed the individual test cases that would or wouldn't be run. (A test here is a test procedure or a set of test cases. A test case is a check which can either pass or fail.) The test plan also included the equipment and personnel required for the level of testing described. This allowed management to compare the coverage against the cost.
Another important section was the risks section. This described what could go wrong and how it would be dealt with. For example, the risk that another project's schedule would slip, causing a resource conflict, would be handled by putting the resources on the highest-priority project and delaying the other project.
The importance of up-front planning is that it allows holes in the process to be identified early enough to solve them. A section of the test plan lists personnel and the training they will need. This identified the need for documentation for the new test fixture operators to learn the process. This was a more effective and complete method than having the lead test technician tell the new operators everything he could think of when they arrived. Our test technicians were a logical choice to write that documentation. Having completed the testing on a project, they had the best idea of what the new operators needed to know. Also, the more tasks the QA engineers didn't have to do, the more they could concentrate on improving the process.
The TISs and test plans had eliminated enough problems that the effort to port the tools from one project to the next became the biggest stumbling block that management saw. The engineers saw the lack of test logs and the scripts' inability to handle unexpected events as the major problems.
The new test fixture is shown in Figure 1. The fixture was easier to move to new printers because the logic on the minimal image processor (MinIP) was in an LCA instead of discrete logic on a wire-wrapped board. Log files were added to the test fixture. The assembler-like format was retained in the scripting language. Branching, symbolic constants and macros were added.
Figure 1 - Test Fixture
The ease of modification will be a benefit on future projects. The additional features in the scripting language made it easier to handle complex conditions and errors. The tests were able to report failures to the log and then attempt to return the system to an expected state. This made things easier for the test operators. For example, they didn't need to run another script to eject paper from the printer. The sketch below illustrates this log-and-recover control flow.
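A minimal sketch of that control flow, assuming a host-side runner. The real fixture used its own assembler-like scripting language; this Python stand-in only illustrates the idea: log each step's result, and on failure run a recovery action so the next step starts from a known state.

    import logging

    logging.basicConfig(filename="test_run.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def run_step(name, step, recover):
        """Run one self-contained procedure step, recovering on failure."""
        try:
            step()
            logging.info("PASS %s", name)
        except Exception as err:
            logging.error("FAIL %s: %s", name, err)
            recover()  # e.g. eject any paper left in the print path
            logging.info("returned to idle state after %s", name)

    def eject_paper():
        pass  # hypothetical recovery action on the fixture

    run_step("feed letter paper", lambda: None, eject_paper)

Because every step logs and recovers on its own, an operator can leave a long run unattended instead of babysitting each failure.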
When this tool was being developed, the engineers wanted to be more aggressive and make bigger changes to the tool. Management wasn't sold that the benefits were worth the risk of that radical a change. At this point in the team's evolution, it would have been effective to present data quantifying the benefits. There was now sufficient process and data to support a more systematic approach. But out of habit, the team took what they could get. The incremental approach had served them well up to that point, and they knew they could add the MOSS tool after the test fixture was proven. As it turns out, the biggest reason for taking the more aggressive approach wouldn't have shown up in the data: the more aggressive approach would have made the tools used by the internal engine firmware QA team and the external engine QA team more common. It wasn't known at the start of the project that the two teams would be combined by the end of the project.
The MOSS tool takes requirements in the form of a state table, which allows the specification of the initial state, stimulus, response and terminal state. Because this is a real-time system, the specifications include timing information. For example, a complex response may list several statuses and the time interval for each. A complete description of the system can be found in "Mother2 and Moss: Automated Test Generation from Real-time Requirements" by Joe Maybee in the 1993 proceedings of the Pacific Northwest Software Quality Conference. A rough sketch of such a state-table entry follows.
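This is a rough sketch of what one state-table requirement might look like, based only on the description above; the field names and example values are invented, and the Maybee paper describes the real MOSS input format.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        initial_state: str
        stimulus: str
        responses: list   # ordered (status, max_seconds) pairs
        terminal_state: str

    # Invented example: feeding a page must walk through three statuses,
    # each arriving within its time limit.
    feed_req = Requirement(
        initial_state="idle",
        stimulus="feed letter",
        responses=[("picking", 0.5), ("feeding", 2.0), ("registered", 3.5)],
        terminal_state="page staged",
    )

    def conforms(observed, req):
        """Observed (status, elapsed) pairs must match in order and on time."""
        if len(observed) != len(req.responses):
            return False
        return all(status == want and elapsed <= limit
                   for (status, elapsed), (want, limit)
                   in zip(observed, req.responses))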
About the time the test plans were completed for two new projects, the team was moved from the engine group to the same group in which the internally developed engine firmware was tested. The new leader expected formal processes up front instead of letting them happen. He wanted a work breakdown structure (WBS) document, both because of its effectiveness in planning and because it gave him a way to learn how his new team worked. The processes and tools developed by his two teams were significantly different.
The first section of the WBS is the task taxonomy. This section describes the different phases of the project and lists the tasks in those phases. The remainder of the WBS describes each task, how long it will take, the resources required, the other tasks it depends on and how to tell when it is completed. An example of the WBS can be found in Appendix 3, and a sketch of a task record follows.
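As an illustration, each WBS task can be thought of as a record with those fields. The field names below mirror the section headings in Appendix 3, but the tasks and data are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        number: str
        definition: str
        effort_weeks: float
        depends_on: list = field(default_factory=list)
        exit_criteria: str = ""
        assigned_to: str = "unassigned"

    tasks = {
        "1101": Task("1101", "Engine Firmware QA Plan", 1.0,
                     exit_criteria="Final version of document"),
        "1111": Task("1111", "Preliminary Work Breakdown Structure", 1.0,
                     depends_on=["1101"]),
    }

    def ready(number, done):
        """A task can start once every task it depends on is complete."""
        return all(dep in done for dep in tasks[number].depends_on)

    print(ready("1111", done={"1101"}))  # True once the QA plan is finished

Recording dependencies explicitly is what makes it possible to shift people between overlapping projects and still know which tasks are ready to start.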
The WBS proved itself most valuable when trying to make tradeoffs between two simultaneous projects. The most important tasks on the highest-priority project could be identified and people applied to them. The important tasks on the other project could be worked on when resources freed up.
The people assigned to the tasks in the WBS estimated the time required. Because the project schedules shifted in relationship to each other, the people originally assigned to the tasks weren't always available. When other people were assigned to the tasks, the times had to be adjusted. For example, when an inexperienced engineer from another group was assigned to a task that was originally estimated for an experienced QA engineer, the task required more time. When one project was put on hold, the resources could be shifted back to other projects. They could then pick up the WBS for the other project and have a record of what tasks still needed to be done.
A lot of progress has been made. Tests, scheduling and processes are now documented. This allows tasks to be done by the people with the best skill set for each task. Test development and testing are taking less time. However, the process is still improving.
With test procedures documented, it required less of the QA engineers' time to develop tests for a new project because they could build on past projects. However, QA engineers are in short supply and projects aren't. Could some of the tests be modified by the test technicians? Yes. For simple changes to the manual tests, the engineer could provide a list of things that needed to change for the new project and review the tests when they were complete. Tests which required more changes needed additional direction during the work. Either way, tests were now being developed by an engineer and a technician where two or more engineers had been required in the past.
This freeing of the QA engineers by the test technicians gave the QA engineers time to start using a more data-driven approach. For example, the metric of which tests were finding the most bugs produced useful data. It pointed out that the functional tests find the most errors in the early testing cycles, while the user-based tests find more errors once the product becomes more stable. This allowed the amount of functional testing to be reduced, freeing test fixture operators to work for another team within the QA department. The sketch below shows how such a metric can be tallied.
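A small sketch of that metric, assuming each submitted defect record carries the test that found it and the cycle it was found in; the records here are invented, and a real report would come from the defect tracking system.

    from collections import Counter

    defects = [
        {"test": "functional", "cycle": 1}, {"test": "functional", "cycle": 1},
        {"test": "functional", "cycle": 2}, {"test": "user-based", "cycle": 4},
        {"test": "user-based", "cycle": 5}, {"test": "user-based", "cycle": 5},
    ]

    tally = Counter((d["cycle"], d["test"]) for d in defects)
    for (cycle, test), count in sorted(tally.items()):
        print("cycle %d: %-10s found %d defect(s)" % (cycle, test, count))
    # Early cycles are dominated by functional tests; later cycles by
    # user-based tests, which justified reducing functional testing.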
Appendix 1 - TIS Format
This is an example of a TIS. Each section provides a short sample from the TIS.
XXX
Engine Firmware
Test Item Specification
Printer Name
Test Suite (Test Set): Devices (Printer)
Author: XXX
Reviewers: XXX, XXX
Date of last revision: 28 August, 1996
For use during testing:
Tester's Name _______________________________________
Project ID __________________________________________
Engine ID __________________________________________
Code Version (PS/Eng) ______________ / ________________
Date: Test Run / Results Entered ____________ /____________
Time to: Setup / Run / Evaluate ________ /________ /________
Checkpoints: Total / Run / Failed _______ /________ /________
TEST ITEM OVERVIEW
This introductory section makes it easier to identify which TIS covers which features.
TEST CONDITIONS
The engine has a serial command and status protocol. Tests of the protocol include error conditions.
The engine commands are tested except for the executive size, black legal request set, black legal request reset, dummy print and dummy print reset. The excluded commands aren't used by the image processor.
Error status and the status commands are tested.
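As a hedged illustration of how host-side cases like c1 and c2 below might be driven, this sketch requests the firmware version over the serial port, once normally and once with the port deliberately set to the wrong parity (a crude stand-in for the single-byte parity error the TIS specifies). The command byte, port settings and response length are invented; the real values come from the EIS.

    import serial  # pyserial

    def request_version(parity=serial.PARITY_EVEN):
        # Invented settings; the EIS defines the real protocol parameters.
        with serial.Serial("COM1", 9600, parity=parity, timeout=2) as port:
            port.write(b"\x10")   # hypothetical "firmware version" command
            return port.read(4)   # hypothetical 4-byte version response

    good = request_version()                         # c1: normal protocol
    bad = request_version(parity=serial.PARITY_ODD)  # c2: the engine should
    # report a parity error rather than answer the request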
NOT TESTED
Most of the aaa and bbb work is now being done in the image processor instead of the engine. The images that must be created are being defined by the design team. These features will be tested as they are designed.
TEST CASES
Numbers of the form c# are case numbers. Numbers of the form p# are procedure step numbers. Numbers of the form s# are references to sections in the EIS containing the specified requirement being tested. [The EIS is an engineering specification that defines the product electrically and mechanically. It also defines the interface between software designed by Tektronix and software designed by the vendor.]
c1.(p1, s6.4.1, s6.4.3.3.1) To check command protocol, the firmware version is requested.
c2.(p1, s6.4.1, s6.4.3.3.1) To check command protocol, the firmware version is requested with a parity error on the first byte.
Reference | Command | In Tray
c3.(p2, s6.4.2, s6.4.4) | Letter Size | letter paper
c4.(p2, s6.4.2, s6.4.4) | Letter Size | A4 paper
c5.(p2, s6.4.2, s6.4.4) | A4 Size | letter paper
c6.(p2, s6.4.2, s6.4.4) | A4 Size | A4 paper
TEST RESOURCE REQUIREMENTS
The following equipment is required:
· 1 PC (AT or better) with a serial port
· 1 XXX printer with optional feeder and a MinIP
· 1 each type of media tray
· 1 empty toner cartridge of each color
· 1 mostly empty toner cartridge of each color
The person running this test needs to be familiar with DOS PC applications and with cabling between the PC and the printer. A second person is needed to move the printer on and off the optional feeder unit. A QA engineer needs to run the tests the first time because some tests will require the runner to determine the correct response and provide input to the EIS.
TEST PROCEDURES & CHECKPOINTS
Place a ✓ in each checkpoint that passes. Place an X in each checkpoint that fails and note near the checkpoint the condition that failed. (This can be a status message, a description of the print or another indication.) Generally, passing checkpoints will be indicated by a white background box saying "Match" while failed checkpoints will be indicated by a red background box saying "Mismatch". If a mismatch occurs, note the first 2 digits and the last 2 digits in the green background box that follows the red "Mismatch" box.
Note: Each procedure step is independent of other procedure steps. However, each procedure step should be treated as a unit.
p1.(c1) Run the script version in min_ip.
_______ No errors are reported.
_______ The version number matches the version on the
Appendix 2 - Test Plan Format
This is an example of a test plan. Each section provides a short sample from the test plan.
XXX (Phaser XXX):
Engine Firmware Testplan
Rick Clements
Test plan for release of the XXX Project.
Based on IEEE/ANSI Std 829-1983
This is a Configuration Management Item
1. Test Plan Identifier
Project ZZZ
2. Introduction
This test plan covers the ZZZ project for the Phaser XXX. It doesn't include the PostScript controller or network cards; they are covered in the XXX PostScript Test Plan.
2.1 Objectives
This software test plan is intended to support the following objectives:
· Detail the activities required to prepare for and conduct the software acceptance test.
· Describe the areas of functionality to be tested.
· Communicate the responsibilities, tasks and schedules to the concerned parties.
· Define the sources of information used to prepare this plan.
· Define the test tools and test environment needed to conduct the software test.
· Define the human resources needed to conduct the software test.
2.2. Background
This section describes the major features and changes for the project. It also lists the priorities from the test management document. This allows the QA engineers to make tradeoffs based on what's important on this project.
3. Test Items
The items to be tested are:
· List of TISs
4. Features To Be Tested
This section provides a brief overview of what printer features will be tested. See section 6 for more information.
5. Features Not To Be Tested
This section provides a brief overview of what printer features will not be tested. See section 6 for more details.
· The print engine itself (mechanical reliability, etc.) will not be specifically tested. The OEM engines team and ORT test the engine.
6. Approach
Each testing cycle will contain the following steps.
1. When the firmware is received, one copy of the firmware will be burned. The Smoke Test TIS will be run to verify we have a good build. Time: 1 hr target, 2 hr maximum
2. Additional copies of the firmware will be burned. The Low Level Engine Firmware TIS, High Level Engine Firmware TIS, Engine Diagnostics TIS, and Speed and Pipelining TIS will be run. Time: 2 days ramping down to 1 day.
3. The firmware is made available to be distributed to the design team and solutions QA.
6.1. Tests
· The Smoke Test TIS verifies the firmware is ready for general testing.
· The Low Level Engine Firmware TIS covers the signals and timing. These tests require the MinIP, a logic analyzer and an oscilloscope.
· The High Level Engine Firmware TIS covers the interface commands and status. These tests require the MinIP.
· The Speed and Pipelining TIS tests the increased speed and error handling in a pipelined environment. These tests require the MinIP.
· The Engine Diagnostics TIS tests the engine's ability to detect and indicate service-level information on various engine problems. These tests require the MinIP.
6.2. Tools
The following test tools are used by OEM Engine Firmware QA:
· The MinIP is used to send engine commands, read engine status, and provide stimulus for timing and signals.
· A logic analyzer and an oscilloscope are used to verify timing and signals.
7. Test Pass/Fail Criteria
Each test in the TIS passes if, and only if, there are no classification 1, 2 or 3 defects discovered (see Table 1).
Classification | Description
0 | Dangerous - Causes injury to a person or damage to equipment! (Example: not stopping moving parts when a cover is opened.)
1 | Critical - Catastrophic and unrecoverable! (Example: system crash or lost user data.)
2 | Severe - Severely broken and no workaround. (Example: can't use a major product function.)
3 | Moderate - A defect that needs to be fixed but there is a workaround. (Example: user data must be modified to work.)
4 | Minor - A defect that causes small impact. (Example: error messages aren't very clear.)
5 | Enhancements, suggestions and informational notes.
Table 1 - Failure Classifications
8. Suspension Criteria and Resumption Requirements
The testing of a test item will be suspended if:
· there is no valid specification for the item
· the item fails in such a way that further testing will provide little or no new information.
Testing will resume when the reason for suspension no longer applies. The QA Smoke Test is designed to find such catastrophic errors before each testing cycle begins.
9. Test Deliverables
OEM Engine Firmware QA will generate the following:
· A report per release that indicates the number of test checkpoints planned and run for each test item; the number of bugs submitted, resolved, and postponed; and the 5 worst bugs, according to the gut feel of the QA lead. (Submitted bugs will be categorized by priority.)
· Defect reports will be submitted into the DDSs tracking system under the project names XXX.engfw and XXX.mech. In each report, QA will attempt to describe the user-visible symptom, the technical problem, and the estimated customer impact. High-priority defects will be summarized by the OEM vendor communications liaison and then faxed to the supplier to be fixed.
10. Testing Tasks
This section lists the tasks required.
11. Equipment Needs
This section shows the equipment needs.
12. Assumptions, Risks, and Contingencies
· Documentation - QA's test preparation depends heavily on Section 6 of the XXX EIS. If it is finished late, the quality of our testing will decrease.
· No Impact From Other Projects - Rolling releases on YYY will cause testing on XXX to be late or incomplete. This is further impacted because XXX test development is now competing for resources with YYY.
Appendix 3 - WBS Format
This is an example of a WBS. Each section provides a short sample from the WBS.
XXX: Engine Firmware QA Work Breakdown Structure
Rick Clements, Lead XXX Engine Firmware QA Engineer
A concise description of Print Engine Firmware QA tasks for the XXX Project.
1.0 Introduction
This document is intended to specify Print Engine Firmware QA tasks for the XXX project in sufficient detail to allow planning with a reasonable degree of confidence.
This document outlines:
· Task Taxonomy - A listing of task groups in outline form.
· Detailed Task Specifications - A detailed description of the tasks, the resources required for each task, entry and exit criteria and the estimated time to accomplish the task.
This document is not placed under configuration management; it is under the control of the Print Engines QA group.
2.0 Task Taxonomy
The outline below does not imply an order to the tasks.
1000 XXX QA project planning
1100 Engine Firmware QA Inception Documents
1101 Engine Firmware QA Plan
1110 Engine Firmware QA Work Breakdown Structure
1111 Preliminary Work Breakdown Structure
1112 Review Work Breakdown Structure
1113 Final Work Breakdown Structure
2000 Definition Phase
2100 Engine Firmware Requirements Specification
2101 Review existing requirements
2102 Analyze for missing requirements
2110 Modify requirements database
2111 Fix existing requirements
2112 Create new requirements for next generation
3.0 Detailed Task Specifications
Task: 1101 Engine Firmware QA Plan
Task Definition: Describe the documents that go into the XXX Engine Firmware QA schedule.
Deliverables: "XXX Engine Firmware QA Plan".
Exit Criteria: Final version of document.
Resources Req'd: Purpose and names of documents.
Assumptions: None.
Estimated Effort: 1 week.
Constraints: None.
Assigned To: XXX
Task: 2101 Review Existing Requirements
Task Definition: Verify that the existing requirements are correct for XXX.
Deliverables: List of requirements that need correction.
Exit Criteria: List of all requirements that need to be corrected.
Resources Req'd: Requirements.
Assumptions: None.
Estimated Effort: 1 week.
Constraints: None.
Assigned To: XXX