Thursday, April 24, 2003
Heuristic evaluation
Summary
Heuristic evaluation is a form of usability inspection where usability
specialists judge whether each element of a user interface follows a
list of established usability heuristics. Expert evaluation
is similar, but does not use specific heuristics.
Usually two to three analysts evaluate the system with reference to
established guidelines or principles, noting down their observations
and often ranking them in order of severity. The analysts are usually
experts in human factors or HCI, but less experienced evaluators have
also been shown to report valid problems.
A heuristic or expert evaluation can be conducted at various stages
of the development lifecycle, although it is preferable to have already
performed some form of context analysis to help the experts focus on
the circumstances of actual or intended product usage.
Benefits
The method provides quick and relatively cheap feedback to designers.
The results generate good ideas for improving the user interface. The
development team will also receive a good estimate of how much the user
interface can be improved.
There is a general acceptance that the design feedback provided by the
method is valid and useful. It can also be obtained early on in the
design process, whilst checking conformity to established guidelines
helps to promote compatibility with similar systems.
It is beneficial to carry out a heuristic evaluation on early prototypes
before actual users are brought in to help with further testing.
Usability problems found are normally restricted to aspects of the interface
that are reasonably easy to demonstrate: use of colours, layout and
information structuring, consistency of terminology, and consistency
of interaction mechanisms. It is generally agreed that problems
found by inspection methods and by performance measures overlap to some
degree, although both approaches will find problems not found by the
other.
The method can seem overly critical, since designers may only receive
feedback on the problematic aspects of the interface; the method is
normally not used to identify the ‘good’ aspects.
Method
The method identifies usability problems based on established human
factors principles and provides recommendations for design
improvements. However, as it relies on experts, the output will
naturally emphasise interface functionality and design rather than the
properties of the interaction between an actual user and the product.
Planning
The panel of experts must be established in good time for the evaluation.
The material and the equipment for the demonstration should also be
in place. All analysts need to have sufficient time to become familiar
with the product in question along with intended task scenarios. They
should all work to an agreed set of evaluative criteria.
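For illustration, Nielsen's ten heuristics are one widely used set of
such criteria. A minimal sketch of how the agreed checklist might be
recorded for the session; the Python representation is an assumption,
though the heuristics themselves are Nielsen's:

    # Nielsen's ten usability heuristics, one common choice for the
    # agreed set of evaluative criteria (the structure is illustrative).
    NIELSEN_HEURISTICS = [
        "Visibility of system status",
        "Match between system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition rather than recall",
        "Flexibility and efficiency of use",
        "Aesthetic and minimalist design",
        "Help users recognize, diagnose, and recover from errors",
        "Help and documentation",
    ]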
Running
The experts should be aware of any relevant contextual information relating
to the intended user group, tasks and usage of the product. A heuristics
briefing can be held to ensure agreement on a relevant set of criteria
for the evaluation although this might be omitted if the experts are
familiar with the method and already work to a known set of criteria.
The experts then work with the system, preferably using mock tasks, and
record their observations as a list of problems. If two or more experts
are assessing the system, they should not communicate with one another
until the assessment is complete. After the assessment period, the analysts
can collate the problem lists and the individual items can be rated
for severity and/or safety criticality.
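As a sketch of this collation step, each expert's observations can be
merged into a single list, with duplicate findings folded together, and
then rated. The record fields and the 0-4 severity values (after
Nielsen's commonly used scale, where 0 means not a problem and 4 a
usability catastrophe) are illustrative assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class Problem:
        """One observed usability problem (fields are illustrative)."""
        description: str                            # what was observed
        heuristic: str                              # criterion it violates
        severity: int = 0                           # 0 (none) .. 4 (catastrophe)
        found_by: set = field(default_factory=set)  # experts who reported it

    def collate(lists_by_expert: dict) -> list:
        """Merge the experts' independent problem lists, folding duplicates."""
        merged = {}
        for expert, problems in lists_by_expert.items():
            for p in problems:
                key = p.description.lower()
                if key not in merged:
                    merged[key] = p
                merged[key].found_by.add(expert)
        # Most severe problems first, once severities have been assigned.
        return sorted(merged.values(), key=lambda p: p.severity, reverse=True)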
Reporting
A list of identified problems, which may be prioritised with regard
to severity and/or safety criticality, is produced.
In terms of summative output, the number of problems found, the
estimated proportion of these relative to the theoretical total, and
the estimated number of new problems expected from adding a specified
number of new experts to the evaluation can also be provided.
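These estimates usually rest on the model in Nielsen and Landauer
(1993), listed under Background Reading below, in which the expected
number of problems found by i evaluators is Found(i) = N(1 - (1 - λ)^i),
where N is the total number of problems in the interface and λ is the
probability that a single evaluator detects any given problem (about
0.31 on average in their data). A minimal sketch of the calculation:

    def found(n_total, lam, i):
        """Nielsen & Landauer (1993): expected number of problems
        found by i evaluators, given n_total problems in the interface
        and per-evaluator detection probability lam."""
        return n_total * (1.0 - (1.0 - lam) ** i)

    # Example with 40 problems in total and lam = 0.31: three evaluators
    # are expected to find about 27 of them, five about 34, so adding two
    # more experts yields roughly 7 new problems -- diminishing returns.
    three = found(40, 0.31, 3)   # ~26.9
    five = found(40, 0.31, 5)    # ~33.7
    new_problems = five - three  # ~6.9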
A report detailing the identified problems is written and fed back
to the development team. The report should clearly define the ranking
scheme used if the problem lists have been prioritised.
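Continuing the collation sketch from the Running section, the
prioritised problem list could be turned into report entries directly;
again, this is purely illustrative:

    # lists_by_expert as in the collation sketch above.
    n_experts = len(lists_by_expert)
    for p in collate(lists_by_expert):
        print(f"[severity {p.severity}] {p.description} "
              f"(violates: {p.heuristic}; "
              f"reported by {len(p.found_by)} of {n_experts} experts)")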
More Information
Nielsen, Jakob. How to Conduct a Heuristic Evaluation
Variations
Three to five experts are recommended for a thorough evaluation. A quick
review by one expert (often without reference to specific heuristics)
is usual before a user-based evaluation to identify potential problems.
If usability experts are not available, other project members can be
trained to use the method, which is useful in sensitising project members
to usability issues.
Background Reading
Bias, R. G. & Mayhew, D. J. (Eds.) (1994). Cost-Justifying Usability.
Academic Press, pp. 251-254.
Nielsen, J. (1992). Finding usability problems through heuristic evaluation.
Proc. ACM CHI'92 (Monterey, CA, 3-7 May), pp. 373-380.
Nielsen, J. & Landauer, T. K. (1993). A mathematical model of the finding
of usability problems. Proc. INTERCHI'93 (Amsterdam, The Netherlands,
24-29 April), pp. 206-213.
posted by Cristina | 10:30 AM |