How to Conduct a Heuristic Evaluation

January 12, 2013

Heuristic evaluation (Nielsen and Molich, 1990; Nielsen, 1994) is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process. As mentioned in a previous post, heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”).
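
In practice, each evaluator works through the interface and records every spot where it violates one of the heuristics. As a purely illustrative sketch (the record structure here is just one way to do it; the ten heuristics and the 0-4 severity scale are Nielsen's), one finding might be captured like this:

```python
from dataclasses import dataclass

# Nielsen's (1994) ten usability heuristics.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    """One usability problem noted by one evaluator."""
    evaluator: str
    location: str     # where in the interface the problem occurs
    heuristic: str    # which heuristic it violates
    description: str
    severity: int     # Nielsen's scale: 0 (not a problem) to 4 (catastrophe)

# Example record from a single evaluator's pass through the interface.
finding = Finding(
    evaluator="evaluator-1",
    location="checkout form",
    heuristic=HEURISTICS[4],  # Error prevention
    description="No confirmation step before the order is irreversibly placed.",
    severity=3,
)
print(finding)
```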

It’s unlikely that a single evaluator working alone will ever find all the usability problems in an interface. Research has shown that different people tend to find different usability problems, and that just a few evaluators can together identify most of them. The figure below shows an example from a case study of heuristic evaluation in which 19 evaluators were asked to find 16 usability problems in a voice response system giving customers access to their bank accounts (Nielsen, 1992). Each row represents one of the 19 evaluators and each column represents one of the 16 usability problems; a black square indicates that a given evaluator found a given problem. The rows are sorted so that the most successful evaluators are at the bottom and the least successful at the top, and the columns are sorted so that the easiest-to-find problems are on the right and the hardest-to-find problems are on the left.

The figure clearly shows that there is substantial non-overlap between the sets of usability problems found by different evaluators. While some usability problems are so easy to find that they are found by almost everybody, others are found by very few evaluators. Furthermore, one cannot just identify the best evaluator and rely solely on that person’s findings: the same person will not always find the most problems, or the most severe ones, when evaluator performance is compared across different instances of heuristic evaluation. In addition, as you can see in the figure, some of the hardest-to-find usability problems (the leftmost columns) are found by evaluators who do not otherwise find many problems. Thus it is necessary to involve multiple evaluators in any heuristic evaluation. The recommendation is normally to use between three and five evaluators, and we’ll see why in a later post on that topic.

[Figure: Black squares show which of the 16 usability problems in the banking system were found by each of the 19 evaluators (Nielsen, 1992). Rows are sorted by evaluator success, columns by how easy each problem was to find.]
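
To make the structure of the figure concrete, here is a small sketch that generates an illustrative findings matrix (random data standing in for Nielsen’s actual results), sorts its rows and columns the same way, and then shows how coverage grows as evaluators’ findings are pooled:

```python
import random

random.seed(1)
n_evaluators, n_problems = 19, 16

# Illustrative stand-in for the data behind the figure: found[e][p] is True
# if evaluator e found problem p. Each problem gets its own detection
# probability, so some problems are easy to find and some are hard.
ease = [random.uniform(0.1, 0.9) for _ in range(n_problems)]
found = [[random.random() < ease[p] for p in range(n_problems)]
         for _ in range(n_evaluators)]

# Rows: least successful evaluators at the top, most successful at the
# bottom. Columns: hardest-to-find problems left, easiest right.
rows = sorted(found, key=sum)
cols = sorted(range(n_problems), key=lambda p: sum(r[p] for r in rows))
for row in rows:
    print("".join("#" if row[p] else "." for p in cols))

# Diminishing returns: how many distinct problems the first k evaluators
# find between them.
covered = set()
for k, row in enumerate(found, start=1):
    covered |= {p for p in range(n_problems) if row[p]}
    print(f"{k:2d} evaluator(s): {len(covered)}/{n_problems} problems found")
```

Under the simple model of Nielsen and Landauer (1993), in which each evaluator independently finds each problem with probability λ (about 0.31 on average in their data), the expected proportion of problems found by i evaluators is 1 − (1 − λ)^i: roughly 67% with three evaluators and 84% with five.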

The best way to conduct the evaluation is to have the evaluators work independently so that they do not bias each other, and to aggregate the results only after everyone has completed his or her evaluation.

Heuristic evaluation probably works best when the evaluator has a good understanding of the domain to which the user interface applies. But a usability expert is typically an expert in human factors and usability, and may not know much about the application domain of the software being evaluated. Is the usability expert competent to make the evaluation in that situation? This is a hard question, and I don’t believe that sufficient research has been carried out to answer it definitively. One approach that has been recommended in this situation is to use an “observer” who is familiar with the domain. The observer can assist with note-taking (recording observations about usability problems) and can explain aspects of the domain that are unclear to the expert and that may influence the detection or interpretation of a usability problem. A possible problem here is that the observer may bias the evaluation: domain experts typically overlook usability problems that they have learned to overcome through training or experience. Thus a heuristic evaluation that uses teams of usability experts and domain experts (observers) is likely to underestimate the number and severity of usability problems.
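
The “work independently, aggregate only afterwards” discipline described above is easy to enforce mechanically. A minimal sketch (the problem labels and data are invented for illustration):

```python
from collections import defaultdict

# Each evaluator's session produces an independent list of problems,
# keyed here by a short label. The lists are merged only after every
# evaluator has finished, so no one sees anyone else's notes.
session_notes = {
    "evaluator-1": ["no-undo-on-checkout", "jargon-in-error-message"],
    "evaluator-2": ["jargon-in-error-message", "hidden-search-field"],
    "evaluator-3": ["no-undo-on-checkout"],
}

# Aggregate: for each distinct problem, record who found it. The count is
# a rough proxy for how easy the problem is to find (the column totals in
# the figure above).
found_by = defaultdict(set)
for evaluator, problems in session_notes.items():
    for problem in problems:
        found_by[problem].add(evaluator)

for problem, evaluators in sorted(found_by.items(),
                                  key=lambda kv: -len(kv[1])):
    print(f"{problem}: found by {len(evaluators)} of {len(session_notes)}")
```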

It’s a complex situation, however. In specific domains like air traffic control or nuclear power plant operation, operators are trained to use the equipment, and there is often inherent complexity in the task that no amount of careful user interface design can completely remove. In these cases a desirable approach is to make complex relationships in the work domain visible as simple visual relationships in the user interface. This has led to a branch of interface design known as ecological interface design, which uses the methodology of cognitive work analysis to create specialized interface artifacts and widgets that reflect the complex properties of the work domain while still making sense to the operators. Our view is that even in this more theoretically motivated approach, usability engineering should still be carried out. So while ecological interface design may motivate a particular design approach, once the designs are formulated they should be subjected to usability engineering and user-centred design within the iterative design process, just as with any other user interface under development.


An interesting question is who should carry out usability engineering and heuristic evaluation for user interfaces in complex work domains. One approach is to use usability experts; another is to use domain experts who receive some training in how to carry out usability engineering. I once faced this issue when writing a book about expert systems. My co-author and I agreed at the time that it made more sense to train domain experts in how to do knowledge elicitation for expert systems development than to train knowledge engineers (experts in artificial intelligence and expert systems) about the domain. The reasoning is quite simple: while the core skills of knowledge elicitation and usability engineering can be taught in an intensive short course, it typically takes years to become an expert in a particular domain. Thus it makes a lot of sense to repurpose domain experts as knowledge engineers, and it would also seem to make sense to repurpose domain experts as usability engineers in complex work domains.

References

Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces. Proceedings ACM CHI’90 Conference (Seattle, WA, April 1-5), 249-256.

Nielsen, J. (1992). Finding usability problems through heuristic evaluation. Proceedings ACM CHI’92 Conference (Monterey, CA, May 3-7), 373-380.

Nielsen, J., and Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings ACM INTERCHI’93 Conference (Amsterdam, The Netherlands, April 24-29), 206-213.

Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R. L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York.
