Software Testing 101

Rick Clements
cle@cypress.com
Cypress Semiconductors
9125 SW Gemini Dr, Suite 200
Beaverton, OR 97008

Abstract

This paper covers the basics of software validation.  It's aimed at someone who is new to software validation regardless of his or her experience developing software.  It covers knowing what you are testing (requirements and configuration control) and how you will test it (selecting the test cases and test procedures). It covers the basics of a number of areas.  It isn't a detailed workshop in any of the areas.

The validation of the software is testing it against the system requirements and against requirements derived from the system requirements.  Validation of the software occurs after the software has been debugged, unit tested and integrated.

This paper covers what you need before you can design your tests. Unfortunately, since we live in the real world, it will also talk about some of the things you can do when you don't get everything you need.

This paper doesn't cover QA, verification or test automation.  Some QA topics (requirement management and configuration control) are discussed only as they are needed to do effective validation.  Verification of software is a cost-effective method for finding defects before testing begins and reducing the overall development cycle.  Test automation can decrease the number of people required to run tests.  These are all important topics, but they are outside the scope of this paper.

Biography

Rick Clements is a senior software quality assurance engineer at Cypress Semiconductor where his duties include the common release of software from different divisions.  He has 20 years of software experience including built-in test software, embedded software design and testing, and process improvement.

Glossary

Integration testing or
Interface testing

The testing of pre-tested modules and hardware to determine that they work together.

Black box testing

Testing by looking at the requirements to develop test cases.  System level testing is often black box testing.

Quality assurance (QA)

The management of quality.  The encouragement of best practices.  QA is about preventing defects.  (The phrase "QA the software" is a misuse of the term QA to mean simply test the software.)

System testing

The testing of the entire product to see that it meets its requirements.  This occurs after the product has been integrated.

Quality control (QC)

Validation and verification of the product. This includes reviews, inspections and testing.

Test cases

Specific data used to test the product.

Test plan (or validation plan)

The plan describing the general approach, the people required, the equipment required and the schedule for testing and other validation activities.

Test procedures

The procedures to follow in testing the product.  These may be manual or automated.

Unit testing

The process of running small parts of the software.  The design teams often handle unit testing.

Validation

The process of evaluating software at the end of the software development process to ensure compliance with software requirements.

Verification

The process of determining whether or not the products of a given phase of the software development cycle fulfill the requirements established during the previous phase.

White box testing

Testing by looking at the structure of the program to develop test cases.  White box testing often occurs in unit testing.

Documentation for Software Testing

The level of documentation depends on the scope of the software, the type of software, possibly government regulations and possibly customer requirements.  A large project with mission critical software requires more documentation than small changes to software in office equipment.  The amount of documentation needs to fit the task at hand.

The minimum requirements for the documentation are that it be sufficient for project planning, allow planning of what cases will be used to test the product features, be clear enough for the people doing the testing, and provide a record of what was tested and what the results were.

The appendices at the end of this paper provide templates that can be used. The templates provide a checklist of what needs to be included in the document.  They also eliminate the need to create a new format and allow people to focus on producing good tests for the product.  The IEEE has collected a set of ANSI/IEEE standards that make good templates.  These can be found in Software Engineering Standards.  Any of these templates should be tailored to fit the needs of your company.

Requirements

A complete coverage of requirements can fill an entire course.  This section covers requirements as they are important to testing.

Why are requirements important to testing?  If you don't know what the requirements are, how do you know what to test?  How do you know if the product is working correctly?  The system level validation tests are based on the requirements.

How are the requirements documented?  The requirements may be in different levels of detail and take different forms. Companies use a requirement specification, functional specification, a requirements database and user's guide to document requirements. If the software is being developed under contract, that contract should spell out the system requirements.

Requirement documents will often start off being general and become more defined as the project develops.  The requirements start out at the level of a data sheet.  This allows the company a chance to decide if the project is worth spending the time to develop real requirements.  There may be more detailed requirements that define specific requirements for modules or the requirements the software must meet to support the system requirements.  However, it's the system level requirements that are important for the validation of the system.

The requirements may take the form of a requirement specification, a functional specification, a database or be managed by a requirements tool.  Each form has its own benefits.  However, the most important things about requirements are:

Assuming you have requirements in some form, they need to be unambiguous and testable.  A requirements document can be produced and everyone begins developing software and tests.  It's not until testing begins that it's discovered that not everyone had the same understanding of what the requirements meant.  If you are reviewing a requirements document, look at each requirement and ask, "How will I test it?"  If you can't answer that question, the requirement isn't testable.

It's important to have all the requirements from all of the stakeholders included.  For example, the need for manufacturability and testability may result in a remote procedure call interface (a method for external software to access internal features in the software under test).  This interface needs to be designed into the product up front.  It directly affects the level of testing required and the level of testing that's possible.  If only the final customer is focused on, these requirements will probably be missed.

Requirements will change during the product cycle.  Everyone - including testing - needs to know when there is a proposed change to the requirements.  If management knows the full cost of the change in time and resources, they can make an informed choice about whether to make the change or not.  Otherwise, they may ask a developer and be told it's a 30-minute change.  They may not find out until the change is made that it requires two more man-weeks of test preparation and one more week of testing time.  Once the change is made, everyone needs to know about it.  This may be a requirements database, or an e-mail telling people that a change was made and to check "The Spec" on the network.  "The Spec" should have a version or date to help keep everyone current.

What if you don't have any requirements?  How do the developers know what they are building?  If they "just know", it's time to be concerned.  If you can't convince anyone that the requirements need to be recorded, you have no choice but to find them yourself.  Find out what the manager, the developers, the marketing department (or other customer representative) and any domain experts you have in the company think the requirements are.  Once you find out what they are, write them down in some fashion. 

Let people know that is what you are testing to.  It's better to have the discussion now than after you have written your tests, made the first pass through them, and are arguing about what's a bug three weeks before the trade show where the product will be unveiled.  By this time, fixing the software (and possibly the hardware) is difficult and expensive.  With a large number of changes to make and a deadline approaching, there is a strong temptation to put a Band-Aid on a major problem and ship it.  Many projects fail because you can't test quality in at the end of the project.

If you are creating a requirements document in an environment where none has existed before, start simple, build support and experience. As you show the benefit of requirements, additional detail and tools can be added. It will allow you to determine which features are most beneficial in your environment. There are tools that help track requirements and requirement changes. However, they won't provide any benefit if the practices don't exist to make use of them.

Configuration Control

Configuration control is a part of quality assurance that directly affects testing.  Good configuration control will prevent the problems described below.

What is the minimum level of configuration control you need?  Before you start testing, the software should be checked into the revision control system.  Then, it should be checked out into a clean directory (a clean system is better) and built.  That final candidate needs to have a unique version number that can be displayed by the system in an initial splash screen or an about box.  The software needs to be stored in a secure location.  If, after testing, that final candidate is released, that exact binary needs to be what is released.  If your software will be run in the Windows environment, the install procedures are complex enough that someone also needs to create and control the install scripts.

If the software is rebuilt after testing, there is a chance that new bugs will be introduced.  A new file may have been checked into the revision control system and accidentally included.  Some people want to change the version number after the software has been tested.  Just changing the version number (specifically making it longer) has introduced errors.  Use an extra digit for the build or an additional build number, but don't change the software once it has been tested.

A simple system needs to exist for creating the version and build number.  Some systems use "majorNumber.minorNumber.buildNumber" or "majorNumber.minorNumber build buildNumber".  Other systems use the build date as the version number.  It's preferable to have a single number.  However, marketing may require their own number for advertising reasons.  The number just needs to uniquely identify each tested build.
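
As a minimal sketch, assuming a "majorNumber.minorNumber build buildNumber" scheme like the one described above (the class and field names are illustrative, not taken from any particular build or configuration control tool):

    # Minimal sketch of a "major.minor build buildNumber" identifier.
    # The names here are illustrative assumptions, not a requirement
    # from any particular build or configuration control system.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Version:
        major: int   # marketing-visible release number
        minor: int   # feature release within a major version
        build: int   # incremented for every build and never reused

        def __str__(self) -> str:
            return f"{self.major}.{self.minor} build {self.build}"

    # The tested candidate keeps this exact identifier; if the software
    # is rebuilt, only the build number changes.
    candidate = Version(major=6, minor=0, build=3)
    print(candidate)   # "6.0 build 3" - displayed in the about box

A single, never-reused build number is what ties the tested binary, the released binary and the defect reports to each other.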

The application code is under configuration control, but what about the test procedures, test scripts and test programs?  If the tests are changed for a new feature, then that feature is found too buggy to ship on schedule, can the earlier tests be recovered?  If a fix needs to be added to an older version of software (or a feature added to old software for a big customer), can the earlier tests be recovered and updated?  The fact that this will never happen is of no consequence when it does.  In addition to placing the tests under revision control, mark them with the same label as that version of software.

Test Plan

The test plan needs to cover what areas you will test and any areas you won't test.  This gives the developers and manager a chance to assess the level of testing.  The developers' feedback is important because a feature that appears to be a minor variation of a feature that's being tested thoroughly may turn out to be a completely different algorithm.  The manager needs to know where the testing will be focused so s/he can manage the risks.

Different features require different approaches.  Automated tests require more resources up front to create the tests.  Manual tests require more people at the end of the project to run the tests.  Can all the tests be functional tests?  Does performance need to be measured also?  Does there need to be some form of stress testing?  Once the types of tests are known, the types of people needed and when they are needed can be scheduled.

It's important to identify any areas that won't be tested.  The project manager needs to know what the risks are of not doing the testing.  For example, the test plan may say, "the temperature tests are not being run because there aren't any changes to the temperature sensing code, so there is little risk in not running them, and they require several days of time in a thermal chamber."  The design engineers may know of a change that affects that area.  Continuing the example, the review of the plan may turn up the fact that a change to the non-uniformity correction algorithm will make it dependent on the ambient temperature.

If the tests require hooks into the software or hardware, identifying them in the planning stage allows the hooks to be designed in.  A hook that is easy to provide when the product is being designed may be difficult to provide later. This is the time to balance test coverage against the cost of providing the required hooks.

Schedule planning needs to take into account the following needs: equipment that needs to be built, products to be purchased and tools that need to be created.

Test Cases

Bugs have been compared to land mines.  Testing clears a path through those land mines.  Test cases need to be chosen well so the user will be likely to stay on the path that has been cleared for them.  It's not possible to test everything.

What will the users do most often?  Testing needs to focus on what the user will use.  "If there's a bug in the code and no one finds it, is it a bug?"  If we can't test everything, we want to cover what the user will use most often.  I once heard someone who worked on a tape backup system say they focused most of their testing on writing the tapes because that's what the customer does most often.  I think about that statement every time I can't get a file restored.  While we heavily test the areas the user will use most often, we can't ignore the lesser-used areas.

Thinking about common system usage will help you catch an entire class of errors.  For example, a printer might correctly detect an empty paper tray when it's empty.  It may correctly detect the tray has been refilled and continue printing.  What happens if a non-empty tray is removed?  Why would someone ever do that?  The user may start a long print job then check the paper tray before going off to lunch.

What is most serious if it fails?  If something is done incorrectly, could it cause a major financial impact or hurt someone?  Multiplying the subtotal by the tax rate and then using that value as the total, instead of adding it to the subtotal, will have a big impact on the bottom line.  If the product has moving parts, does the software stop all the motion when the cover is opened?

Test boundary conditions.  If you have the requirement to display a temperature, what are the minimum and maximum temperatures?  Select an invalid value one below the minimum, the minimum value, minus one, zero, one, a typical value, the maximum value and an invalid value one above the maximum.  These cases check the handling of valid and invalid values.  They check both the boundaries created by the requirements and the boundaries that require the software to do something different.
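
For example, assuming the displayed temperature is required to run from -40 to 120 (hypothetical limits chosen only for illustration), the boundary cases might be listed like this:

    # Boundary-value cases for a hypothetical temperature display.
    # The -40 to 120 range is an assumed requirement for illustration.
    MIN_TEMP = -40
    MAX_TEMP = 120

    test_values = [
        MIN_TEMP - 1,   # one below the minimum (invalid)
        MIN_TEMP,       # the minimum value
        -1,             # minus one
        0,              # zero
        1,              # one
        25,             # a typical value
        MAX_TEMP,       # the maximum value
        MAX_TEMP + 1,   # one above the maximum (invalid)
    ]

    def in_range(temperature):
        """Expected behavior: values outside the requirement are rejected."""
        return MIN_TEMP <= temperature <= MAX_TEMP

    for t in test_values:
        print(t, "valid" if in_range(t) else "invalid")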

What are the system interfaces?  Anywhere there is an interface in the system, data has to be transferred correctly and unexpected values must be handled.  It may be between the user and the product, your product and a different product or components within your own product.

To the user, the user interface is the product.  It is how the user interacts with the product. Because errors in the user interface will be noticed every time the user uses the product, they need to be tested.

The user interface's error handling needs to be well tested for two reasons.  First, a keyboard has been referred to as a device for entering errors into the computer.  Second, error handling is often missed by developers.  The developer is focused on making the system work.  They often miss the handling of unexpected input.

If your program has to exchange data with another program, there is a possibility for error.  If your product receives data from another program, what is the correct response if the data doesn't conform to the standard?  Unfortunately, if the other program is popular enough with users, your product may need to work with it anyway.  This may result in a set of tests to see that your product works with popular programs, not just that it works with proper input.
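
As a sketch of that kind of test (the record format and the parse_record function below are hypothetical, not taken from any real product), conforming input is paired with the non-conforming variants another program might actually send:

    # Hypothetical parser for a "name,quantity" record, used to show
    # testing with both conforming and non-conforming input.
    def parse_record(line):
        name, quantity = line.split(",")
        return name.strip(), int(quantity)

    cases = [
        "widget,3",       # conforms to the assumed format
        "widget;3",       # wrong delimiter
        "widget,three",   # non-numeric quantity
        "",               # empty record
    ]

    for line in cases:
        try:
            print(repr(line), "->", parse_record(line))
        except ValueError:
            print(repr(line), "-> rejected cleanly (error reported, no crash)")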

The Mars Climate Orbiter spacecraft was lost because of interface errors between different modules.  One module was using English units while the other was using metric units.  The interfaces between different modules need to be verified.  This is even more important if products can be configured with different modules using a common interface.

Where have other errors been found?  There are likely to be errors that have occurred in the past.  If there are errors of a specific type, there are likely to be more errors of the same type.

If you do what you've always done, you will get what you've always gotten.  Unless the design team has done some sort of root cause analysis, they are likely to continue with the same procedures that produced the original errors.  Root cause analysis is outside the scope of this paper.  It's a good idea to prepare tests for the problems you see often.  For example, if one of the text files on the UNIX disk has PC carriage returns and one of the text files on the Macintosh disk has UNIX carriage returns, check the PC disk for the proper carriage returns as well.
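
A small check like the sketch below (the file paths are hypothetical) can be rerun on every release to catch the carriage-return problem the next time it appears:

    # Sketch of a carriage-return check for release media.  The paths
    # are hypothetical; the point is to retest errors seen in the past.
    import os

    def line_ending_style(path):
        data = open(path, "rb").read()
        if b"\r\n" in data:
            return "PC (CR/LF)"
        if b"\r" in data:
            return "Macintosh (CR)"
        return "UNIX (LF)"

    for path in ["pc_disk/readme.txt", "unix_disk/readme.txt", "mac_disk/readme.txt"]:
        if os.path.exists(path):                     # hypothetical release files
            print(path, "->", line_ending_style(path))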

Developers often don't enter defects into the bug tracking system. A useful way of determining which modules have the most errors is to look at the configuration control system. These systems track the number of versions that have been checked in and the number of changes in each version.

Complex modules will tend to have more errors. Talk to the developers. Find out what modules were most challenging. Another indication of complexity is size. The longer modules tend to be more complex.

Usability - The testers are the first people after the developers to see the software. They are in the best position to advocate for better usability.

The user interface may highlight features the developers are proud of.  This may make the product harder to use.  For example, when inserting a picture into a popular word processor, the picture floats on top of the text by default.  Advanced placement features can be used to flow the text around the picture.  However, most users don't use these features.  If the picture were placed simply on the page, it would be easier for most users.

The interface may be easy to use if you know the internal structure and possibly the internal state of the software. For example, the original caller ID feature had a method to toggle sending of the caller ID. It had no way to tell what the current state was.

The test engineer can provide valuable feedback on the usability. However, it does require that you develop a feeling for the typical user of your system. In many cases, you won't be a typical user.

This is also an area where you may find the greatest resistance from the design engineers. It is an interface that makes sense to them. You may need to enlist the aid of your marketing or customer representative. They are the people who should have the final word about how the typical user will react to the interface.

What is unique to your product's environment?  Each environment has a unique set of problems that must be dealt with.

With a web-based application, you have to deal with browsers from different companies, browsers on different platforms and different versions of those browsers.  The servers supporting a page and its links may be running different operating systems.  The combinations can explode very quickly.  You need to determine what is important to the users of your product.

If you are dealing with a database application, you likely have an existing database the program needs to be compatible with.  Getting a recent copy of the database before you go live can prevent a number of unexpected errors.

If you are dealing with embedded software, the system needs to be stress tested.  Does the control loop software compensate for the hardware being in extreme environmental conditions?  The system will be more sluggish at -30°C.  Does the system correctly handle stimulus from all of the inputs while the system is busy doing something else?

Does the software correctly handle a failure of the system hardware?  Depending on your environment, the software may shut the system down to prevent further damage or it may warn the user but operate to complete system failure.  For this type of testing it's desirable to be able to simulate input values. 

Figure (1) shows that a fault may be simulated by replacing the sensor, by a hardware switch that allows simulating the value, or by a remote procedure call that instructs the driver to return a simulated value.  If you need to simulate values, these are requirements you need to get into the system early in design.

Figure 1 - Simulating Fault Conditions
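
A sketch of the last approach in Figure 1, a driver that can be told to return a simulated value (the class and method names are assumptions made for illustration, not an existing API):

    # Sketch of a sensor driver with a test hook, as in Figure 1.  A
    # remote procedure call from the test system could drive simulate().
    class TemperatureDriver:
        def __init__(self):
            self._simulated = None

        def simulate(self, value):
            """Test hook: make read() return this value instead of hardware."""
            self._simulated = value

        def clear_simulation(self):
            self._simulated = None

        def read(self):
            if self._simulated is not None:
                return self._simulated      # fault injected by the test
            return self._read_hardware()    # normal path

        def _read_hardware(self):
            return 25.0                     # placeholder for real hardware access

    driver = TemperatureDriver()
    driver.simulate(-30.0)      # pretend the sensor reports -30 degrees C
    print(driver.read())        # the control software now sees the fault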

If the product is running on a general-purpose computer instead of an embedded system, there are different failures that need to be tested.  Does the software correctly handle a shortage of disk space or memory?  Does it correctly handle an inaccessible server?  If you need tools to create these conditions, the resources need to be considered in the decision to test those cases.  If you need to buy or rent equipment, your manager needs to know the risks if this testing isn't done and the cost of the equipment.

Why document the test cases?  The test cases are what will be tested.  It's beneficial to review the test cases before starting on the test.  Reviewing the test cases first allows the focus to be adjusted before the work of creating the test procedures and test scripts.  It's useful to include at least one other test engineer and one design engineer on the review team.  The test engineer will tend to catch common errors and standard patterns that are missed by the test cases.  The design engineer will tend to catch internal limits and conditions that are missed by the test cases.

Test Procedures

How the test procedures are documented depends on the environment.  The type of software, the experience of the testers, the tools being used, customer requirements and government requirements all affect the way tests are documented.

Performance tests will go into detail on how tests are set up and measurements taken.  Functional tests may be a list of test cases.  Tests for combinations of features may give instructions for choosing the combination to test and provide a location for recording what was tested.  On the next test pass, a different combination of features may be tested to increase coverage.

The contract or government regulations may call out the detail needed to document tests.

The amount of detail that's required for a beginning test operator will frustrate an experienced tester or someone with domain experience.  Too little detail will cause the inexperienced tester to continually ask you questions.  Experienced people don't need a lot of detail on how to set up the test; they need to know what to test.

Can the testing be automated?  Test automation is often used in testing.  It's useful for tests that will be run often.  Automated tests excel at checking detail that is mind-numbing for a person.

If automation is being used, the test tools and test scripts need to be solid before the testing begins.  If there isn't any confidence in the tests, the task of isolating failures is magnified.  If test software and hardware need to be created, this is a development task of its own.  It needs to be planned, managed and tested just like the application software. It needs to be well designed for the same reasons the product software does. It needs to be given adequate time and resources. If there isn't time to do it right, it won't be complete or reliable.

If the tests will be run in the Windows, UNIX or Internet environments, there are a number of tools available.  If the tests are for the embedded environment, there are few tools and they need to be customized to work with the system. 

Regardless of the environment, start simple. Simple may mean doing it by hand the first time. As you start to automate tests, you will gain experience at what is important. For example, automated tests can generate a lot of data. You need an automated way of checking and reporting the results.
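
As a sketch of what that checking might look like, assuming the automation writes one "test name, expected, actual" line per case (a log format invented here for illustration):

    # Sketch of automated result checking.  The one-line-per-case log
    # format is an assumption made for illustration.
    def check_results(lines):
        failures = []
        for line in lines:
            name, expected, actual = [field.strip() for field in line.split(",")]
            if expected != actual:
                failures.append((name, expected, actual))
        return failures

    log = [
        "warning_icon, yellow, yellow",
        "warning_text, Temperature Warning, Temperature Warning",
        "over_temp_shutdown, on, off",
    ]

    failures = check_results(log)
    for name, expected, actual in failures:
        print(f"FAIL {name}: expected {expected}, got {actual}")
    print(f"{len(failures)} of {len(log)} cases failed")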

Bug Tracking and the Ship Decision

Bug tracking is best done with a bug tracking program.  Programs don't forget about bugs.  They allow you to sort the bugs by assigned engineer, importance and other useful fields.  There are a number of bug tracking programs.  These programs will need to be customized for your company.

Figure 2 - Defect States

Figure (2) shows the states in a typical bug tracking system.  When a bug is first found, the person who found it creates a ticket.  This could be a test engineer, a design engineer, a manufacturing engineer using a preliminary version to write build procedures or a marketing representative.  The record usually includes a one-line title and a description.  The ticket is sent to a reviewer.  This is often the lead design engineer.  If more information is needed, the ticket is sent back to the submitter.  Otherwise, it's given an initial priority and assigned to a design or test engineer.  The assignment of who will do the isolation will depend on who has the best tools and the current workloads.
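
A sketch of the state machine behind Figure 2 (the state names below are illustrative; real tracking tools use their own vocabularies):

    # Sketch of defect states and allowed transitions, as in Figure 2.
    ALLOWED = {
        "New":        {"Needs Info", "Assigned"},  # reviewer triages the ticket
        "Needs Info": {"New"},                     # submitter supplies more detail
        "Assigned":   {"Fixed"},                   # engineer isolates and fixes
        "Fixed":      {"Closed", "Assigned"},      # test engineer validates the fix
        "Closed":     set(),
    }

    def move(current, target):
        if target not in ALLOWED[current]:
            raise ValueError(f"cannot move a ticket from {current} to {target}")
        return target

    state = "New"
    for step in ["Assigned", "Fixed", "Assigned", "Fixed", "Closed"]:
        state = move(state, step)   # one fix fails validation before closing
        print(state)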

A well-written bug report is important to provide smooth communication between the test and design engineers. Defects in different areas of the software require different amounts of information. It's important to understand what information is needed to be reported with the different defects. The description needs to include the environment, the configuration of the system, any symptoms, what was being done at the time of the failure and if the defect is repeatable.

Figure (3) shows an example defect report.  Some of the details are included in specific fields.  The rest of the information is in the description field.

Title: Page fault when closing the about box

Description: 3 of 4 times program Y crashed with a page fault when launched from program X when the about box was closed. The program crashed 0 of 3 times when launched from the start menu.

Sequence:
Launch from program X
Open the about box from the "Help" menu
Close the about box

Impact: If this happens after the user has been editing, the user will lose edits.

Severity: Critical

Version: 6.0 build 3

System: All Windows

State: New

Figure 3 - Example Defect Report

On larger teams, it's useful to include a work log.  This allows the work on the bug to be documented even if several people work on it.  A test engineer may do some initial isolation.  It may then be assigned to a software design engineer to fix.  It may be determined that the problem is actually an electrical problem and assigned to an electrical design engineer to fix.  It may be given back to the software engineer to work around the bug in the hardware.

After the bug has been "fixed", it's given to a test engineer to validate.  If it is fixed, the bug is closed out.  If it isn't, it's sent back to the design engineer to be worked on again.

Can we ship it yet?  At the end of the project, the lead test engineer, the lead design engineer, the project manager, and the marketing or customer representative need to go through the open bugs.  There needs to be a decision as to which bugs will be fixed and which ones are minor or enhancements.  The bugs should have an initial priority already.  It's useful to sort the list by priority so the most important bugs are at the top of the list.  This way the important bugs get the most attention.

Figure 4 - Open Defects Over Time

Tom DeMarco said that if you can only collect one metric, it should be defect count.  With not much more work, the count of open defects can be graphed against time.  This is simply the bugs that have been found but not fixed over the life of the project, as shown in Figure (4).  Two lines can be plotted: the show stoppers and the non-show stoppers.  It's the show stopper line that will be used to make decisions.  However, if there are too many non-show stoppers, there is reason to question the readiness for release.  When the average slope of the show stopper line is positive, you aren't approaching the point where the software is ready to ship.
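
A sketch of the metric behind Figure 4 (the record layout is an assumption for illustration): count the bugs that are found but not yet fixed, split into show stoppers and non-show stoppers, and record the two counts at regular intervals.

    # Sketch of the open-defect counts plotted in Figure 4.
    defects = [
        {"id": 1, "show_stopper": True,  "open": True},
        {"id": 2, "show_stopper": False, "open": True},
        {"id": 3, "show_stopper": True,  "open": False},   # already fixed
        {"id": 4, "show_stopper": False, "open": True},
    ]

    open_show_stoppers = sum(1 for d in defects if d["open"] and d["show_stopper"])
    open_others = sum(1 for d in defects if d["open"] and not d["show_stopper"])

    # Recorded weekly, these two numbers give the two lines on the chart.
    print("open show stoppers:", open_show_stoppers)
    print("open non-show stoppers:", open_others)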

Is customer service ready for the new release?  Companies have different ways of preparing their customer service and help desks for a new release.  As a test engineer, you have some very valuable information for them.  You have a list of known bugs.  The list should be summarized.  They don't need all of the symptoms and guesses that were added to the bug as it was isolated.  A list of possible workarounds is useful.  A good "Bugs and Workarounds" list will give them the benefit of the weeks or months the test engineers have spent learning the way the software works.

When assessing the risk of not fixing a defect, you need to know how likely the user is to come across it and what the effect is when they see it.  If you can express the chance of failure and the cost of failure, then Risk = ChanceOfFailure * CostOfFailure.  Many times the chance of failure and cost of failure are only measured in relative terms.  A defect that can cause a loss of data, silently report incorrect results or create a dangerous condition has a high risk even though it might not be seen often.  A defect that requires the user to access a feature through a menu instead of a shortcut would only have a large risk if users will experience it frequently.
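
A sketch of that scoring, using relative 1-to-5 scales (the scales and the example defects are assumptions for illustration):

    # Sketch of relative risk scoring: Risk = ChanceOfFailure * CostOfFailure.
    defects = [
        ("silent loss of data",              1, 5),  # rarely seen, very costly
        ("feature reachable only by menu",   4, 1),  # seen often, cheap to live with
        ("page fault closing the about box", 3, 4),  # fairly common, loses edits
    ]

    for name, chance, cost in defects:
        print(f"{name}: risk = {chance * cost}")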

Test Reports

The test report is structured as a technical report. It starts with an introduction to provide a description of what the report covers. The next section describes recommendations and high level results. The rest of the report provides the supporting data for the recommendations and more detail regarding the results.

The first section details what has been tested, what the results were and whether any part of the test plan couldn't be carried out for some reason.  The report should include how many cases have major defects, how many have minor defects, any cases that couldn't be run and how many total cases there are.  There may also be a section for improvements to the tests.

The amount of detail depends on the audience for the test report.  If the customer will see a copy of the test report, it shouldn't contain information that describes proprietary algorithms.  If a government agency will receive the test report, there may be requirements for specific levels of detail.

A test report should cover one subject for a single audience.  For example, automated functional tests may be reported separately from manual tests based on common user sequences.  If a government agency is only interested in one aspect of the product, other areas of testing shouldn't be included in the same report.

Appendices

These appendices include an example set of templates for common documents.  Many of these documents started from the IEEE standards and have been tailored one or more times.  If you don't have a standard template for these documents, they can serve as a starting point.

The example text in the templates is italicized.

Testing Strategy Worksheet

This document is used for initially scoping the testing effort.  It is a one page document that's done before the test plan.  This document is modified from a worksheet presented by Randall Rice of Rice Consulting at the PNSQC workshop on "Effective Web-Based Testing Techniques".


Application Name:
Gimbal Software

Author: Bob Smith
Date: 1 Jan. 2000

Type of Software:
Object oriented C++ code running on a CPU
C code running on a DSP

Development Methodology:
Waterfall

Scope of Testing:
Selection of features
Stability of gimbal

Critical Success Factors:
Correct control of payloads
Image stability when subjected to vibration
Correct movement under temperature extremes

Tradeoffs:
Schedule - yes
Scope - no
Cost - yes
Performance - no
Quality - no

Testers:
Image stability - designers
Control of payloads - testers
Environmental extremes - testers
Flight Test - testers, designers

Timelines:
Requirements - Jan. 31
Software design - Feb. 28
Test cases - Feb. 28
Coding complete - Mar. 31
Hardware available - Mar. 31
Debug complete - Apr. 30
Test procedures - Apr. 30
Trade Show - May 1
Testing complete - May 30

Risks:
Prototype shown at trade show before testing is complete.

The type of hand controller may affect the user's ability to control the system.

Constraints:
Lack of customer input to provide feedback for feel of the controls.  Hardware won't be available for integration testing before system tests begin.

Assumptions:
Type of hand controller won't affect system performance.

Test Approach:
Debugging and unit testing will occur before formal testing.  Formal testing will include both integration testing and system testing.

Tools:
Automated scripts to run shaker profiles.
Automated scripts to simulate the hand controller.

Deliverables:
Test plan
Test cases & procedures
Bug list
Test Reports

Test Plan

1. Identifier

XYZ_TPL

2. Introduction

2.1 Purpose

This plan covers the software and system testing done on the XYZ project.

2.2 Scope

This plan covers the software and system functionality.  It doesn't cover performance testing or flight testing of the system.

3. Test Items

4. Features to be Tested

5. Features Not to be Tested

6. Approach

6.1 Design and Code Reviews

6.2 Functional Testing

6.3 Exception Testing

6.4 Regression Testing

6.5 Qualification Testing

7. Item pass/fail criteria

Completion of the Spectrum Core CEU validation process as outlined in this test plan, acceptance of the documented testing issues, and completion of the release notes entries constitute the conditions for the Spectrum Core CEU to pass.

8. Suspension criteria and Resumption requirements

Suspension of testing may be the best use of personnel if test results lead to a decision to make significant software corrections, and those corrections (with a subsequent recompilation of the software) would require retesting of features still to be tested.

When a new release of software is available and resumption of testing is indicated, the appropriate testing approach(s) will be used.

9. Test Deliverables

10. Testing Tasks

List of tasks and who will be conducting the tests.

11. Equipment Needs

12. Schedule

What resources are needed and when they are needed.

13. Risks / Contingencies
 

Risk

Contingency

Because the memory access for the processor is controlled by the FPGA, it's possible to starve the processor if the FPGA becomes busy.

Load testing will be conducted where both the FPGA and processor are heavily loaded.

Test Cases and Procedures

This template combines the test cases and procedures.  This keeps them together and makes them easier to update.  The test cases are reviewed before completing the later sections.

1. Introduction

1.1 Scope

This document covers the user interface.

1.2 Purpose

Verify the proper function of the buttons and icons.

1.3 Overview

1.4 Applicable Documents

1.5 Definitions & Acronyms

TBD - Too Bloody Difficult
TLA - Three Letter Acronym

2. Features and Test Cases

Temperature Icons

3. Features Not To Be Tested

Identify specific system features that will not be exercised in this test. Identify the reasons the features will not be tested and any potential risks or impacts this might have on the release.   This section is more detailed than the test plan.  The test plan will list entire requirements that aren't tested.  This section lists cases that won't be tested for requirements that are partially tested.

4. Dependencies

Identify all dependencies that are required for the tests within this specification to begin.  The test plan will be the union of all the test case specifications it references.

5. Testing Instructions/Approach

Describe the general process for testing.  Include instructions for recording test results and reporting test failures.   This section is focused toward what the test operator needs to know as opposed to the test plan that is more of a test architecture.

6. Testing Procedures  
 

Procedure

Expected Results

Actual Results

Run script to simulate temperature.  Set the temperature to 60.1°C.

Yellow thermometer on the bottom of the screen.  "Temperature Warning" displayed beside the thermometer.

 

Test Report

1. Introduction

1.1 Purpose and Scope

The Test Results Report summarizes the software testing performed during the development of a release.  It also points to the storage location of the test log files, input data, scripts and pertinent documents used to design and conduct the testing.  It explains where the actual testing deviated from the planned activity, why this was necessary and the risk involved.

1.2 Applicable Documents

1.3 Definitions & Acronyms

BOB - Best of the best
WOW - Worst of the worst

2. Executive Summary

This section contains a recommendation for the ship decision.  If the recommendation is to not ship, include the list of open high severity problems, or problems identified by the project team as high priority for future attention.

3. Testing Synopsis

Describe the history of testing for this release.

4. New System Software Problems

Bug number, severity and single line description of the defect sorted by severity.

5. System Hardware Discrepancies

Include any hardware problems discovered during testing. Even if the scope of the testing doesn't include hardware, there may have been hardware issues that impacted testing.

6. Test Cases Not Run

List which test cases were not run (if any), the reason the test case was not run and the risk assessment.

7. Test Case Issues

7.1 Errors in Existing Cases

List any issues with the test cases discovered during testing. Some examples of test case issues are the test case turns out not to be meaningful, or a typo in the spec shows the expected outputs as X, Y when Y, X are returned, etc. Refer to the test case identifier in which the problem exists.

7.2 New Test Cases Needed

List areas of functionality that were found to be either not tested or inadequately tested in the current Test Case Specification. Be sure to include the items from Section 4, New System Software Problems.

8. Beta Testing

This is a section that may or may not be needed.  If a formal beta testing period is defined and exercised before release of the product, it may be included.  If the release is for the purpose of an extended beta test in the field, then a separate document to record the results better serves the purpose.