Part 2: Week 1 Discussion
Topic 1: First Look at UX/UI Design
This week, you are learning about the fundamental principles of user interface design, exploring how the user experience is driven by the interface design, and looking at the different types of user interfaces.
Discussion: Select a user interface to critique. You may choose the type of interface to critique (e.g., website, app, AR, VR, tablet, smartphone, smart TV).
Question 1: Include a few screenshots/images of the selected user interface to give your peers a look at it. Briefly describe the purpose of the user interface.
Question 2: Run a quick heuristic evaluation on the interface. Discuss your initial thoughts on the design and experience using the interface.
*These recommendations and overall experience are based on your opinions. Later on, we will back the recommendations with research.
READINGS
https://www.nngroup.com/articles/usability-101-introduction-to-usability/
https://learning.oreilly.com/library/view/usability-engineering/9780125184069/chapter-49.html
https://www.nngroup.com/articles/ten-usability-heuristics/
https://www.nngroup.com/articles/ux-mapping-cheat-sheet/
https://www.nngroup.com/articles/ux-research-cheat-sheet/
1/13/22, 3:01 PM Module Introduction
https://learn.umgc.edu/d2l/le/content/645074/viewContent/24924322/View 1/4
Lesson 2: Evaluating UI/UX
User involvement is one of the major factors in designing UI for software. However, as much as we would like the users to be involved, user
devotion to a project is never a free or unlimited resource. To maximize the benefit of their involvement, the design should be as free as possible of
trivial bugs so that the users do not have to waste time encountering and overcoming these issues during the evaluation. [Task-Centered User
Interface Design – A Practical Introduction, C. Lewis, J. Rieman, 1994]
Also, performing an evaluation with only user participants will not reveal all types of issues. For example, when an interface used by thousands of individuals is tested with only a few users, the testing will not uncover problems that those users and the tasks they perform don't happen to encounter. It also won't uncover problems that users might have after they gain more experience with the system.
Heuristic Evaluation
Heuristic evaluation is the most popular form of expert evaluation. Nielsen has demonstrated that a relatively small number of evaluators can find many of the usability problems.
A heuristic is a guideline or "rule of thumb" that can be used to critique a design. The general idea of heuristic evaluation is that several experts independently identify usability problems using a set of usability heuristics. The process of heuristic evaluation is:
1. A set of heuristics is developed for the UI domain. I use the word domain to emphasize that the heuristics should be general usability guidelines, not specific to the UI. The heuristics are developed by the usability expert.
2. The usability expert evaluates the UI. The expert will require a prototype and a list of goals and task sequences so that the evaluator can understand the UI. The evaluator makes two passes through the UI.
3. The first pass familiarizes the expert with the UI.
4. During the second pass, the expert concentrates on the heuristics and makes notes of any design flaw or failure to pass a heuristic.
5. The evaluator writes a summary of the evaluation from the notes.
The goal of the heuristic evaluation is to generate a list of usability problems that violate the usability principles in the heuristic list. The items in the list of usability problems should be specific examples in the evaluated UI, for example:
1. On the "open form," the function of the right arrow in the upper left is not clear. This violates the visibility principle.
2. On the "confirmation screen," the save and delete buttons are adjacent to each other. What if the user hits the delete button by mistake? This violates the error-prevention principle.
3. There is no cancel button on the form for saving. This violates the user-control principle, and so on.
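In practice, evaluators often capture such findings in a structured form so that the summary can group them by the violated principle. The sketch below uses the example problems above as hypothetical data; the screen names and the extra fourth finding are illustrative only:

```python
from collections import Counter

# Each finding from the second evaluation pass: where it occurred,
# what the problem is, and which heuristic it violates.
# (Hypothetical example data based on the list above.)
findings = [
    ("open form", "function of the right arrow is not clear", "visibility"),
    ("confirmation screen", "save and delete buttons are adjacent", "error prevention"),
    ("save form", "no cancel button", "user control"),
    ("open form", "no feedback after submitting", "visibility"),
]

# Group the problems by violated heuristic so the summary can point
# at recurring causes rather than isolated symptoms.
by_heuristic = Counter(heuristic for _, _, heuristic in findings)
for heuristic, count in by_heuristic.most_common():
    print(f"{heuristic}: {count} problem(s)")
```

Grouping this way directly supports the summary step of the process: a principle violated repeatedly is a stronger signal than any single problem.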
The results of the heuristic evaluation are similar to those of a cognitive walkthrough: specific usability problems. General concerns about the user interface are not the immediate goal of heuristic evaluation. A review of the usability problems may reveal that many of the problems violate the same usability principle; the summary can then state and illuminate the common causes.
“Frustrated user = unhappy user. Design your UI to meet your users' needs and expectations.”
— J. Lucas Lucchetta
Nielsen and Molich have proposed a number of usability heuristic sets. One general-purpose heuristic list they have proposed:
Visibility of system status: Is appropriate feedback about the user's actions provided within a reasonable time?
Match between system and real world: Is the language used by the interface simple and familiar to the user?
User control and freedom: Are there ways for users to escape from a mistake?
Consistency and standards: Do similar actions work the same way?
Error prevention: Is it hard for users to make mistakes?
Recognition rather than recall: Are objects, actions, and options always visible?
Flexibility and efficiency of use: Are there shortcuts?
Aesthetic and minimalist design: Is the design free of irrelevant information?
Help users recognize, diagnose, and recover from errors: Are error messages useful?
Help and documentation: Is help information provided that is easily searched?
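A list like this can serve as a per-screen checklist during the second evaluation pass. A minimal sketch, where the pass/fail answers are invented for one hypothetical screen:

```python
# The ten Nielsen-Molich heuristics as (name, diagnostic question) pairs.
HEURISTICS = [
    ("Visibility of system status", "Is appropriate feedback provided in reasonable time?"),
    ("Match between system and real world", "Is the language simple and familiar?"),
    ("User control and freedom", "Can users escape from a mistake?"),
    ("Consistency and standards", "Do similar actions work the same way?"),
    ("Error prevention", "Is it hard to make mistakes?"),
    ("Recognition rather than recall", "Are objects, actions and options visible?"),
    ("Flexibility and efficiency of use", "Are there shortcuts?"),
    ("Aesthetic and minimalist design", "Is irrelevant information avoided?"),
    ("Help users recognize, diagnose and recover from errors", "Are error messages useful?"),
    ("Help and documentation", "Is help easily searched?"),
]

# One evaluator's answers for one imaginary screen (hypothetical data):
# True means the screen passes the heuristic, False flags a problem.
answers = [True, True, False, True, False, True, True, True, True, True]

# Report the heuristics this screen fails.
failures = [name for (name, _), ok in zip(HEURISTICS, answers) if not ok]
print("Heuristics violated:", failures)
```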
Nielsen has also developed a heuristic list for websites. He suggests remembering the acronym HOME RUN:
High quality content
Often updated
Minimal download time
Ease of use
Relevant to user’s need
Unique to the online medium
Net centric corporate culture
Barnum, in Usability Testing and Research, provides several more heuristic usability lists.
The thoroughness of a walkthrough evaluation grows with the number of evaluators. Fortunately, Nielsen demonstrated that as few as five evaluators found 75% of the usability problems, while a single expert found only 25%.
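Nielsen and Landauer model this diminishing-returns effect with the problem-discovery formula found(n) = 1 − (1 − λ)^n, where λ is the probability that a single evaluator finds a given problem. The sketch below assumes the commonly cited average of λ ≈ 0.31; the actual value varies by interface and evaluator expertise:

```python
# Nielsen & Landauer's problem-discovery model:
#   found(n) = 1 - (1 - lam)^n
# lam is the chance one evaluator finds a given problem; 0.31 is a
# commonly cited average, used here as an assumption only.

def proportion_found(n_evaluators: int, lam: float = 0.31) -> float:
    """Expected proportion of usability problems found by n evaluators."""
    return 1 - (1 - lam) ** n_evaluators

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n} evaluator(s): {proportion_found(n):.0%}")
```

The curve rises steeply at first and then flattens, which is why small panels of three to five evaluators are usually considered cost-effective.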
Heuristic evaluation is a process where experts use rules of thumb to measure the usability of user interfaces in independent walkthroughs and
report issues. Evaluators use established heuristics (e.g., Nielsen-Molich’s) and reveal insights that can help design teams enhance product usability
from early in development.
Heuristic Evaluation: Ten Commandments
In 1990, web usability pioneers Jakob Nielsen and Rolf Molich published the landmark article “Improving a Human-Computer Dialogue”. It
contained a set of principles—or heuristics—which industry specialists soon began to adopt to assess interfaces in human-computer
interaction. A heuristic is a fast and practical way to solve problems or make decisions. In user experience (UX) design, professional
evaluators use heuristic evaluation to systematically determine a design’s/product’s usability. As experts, they go through a checklist of
criteria to find flaws that design teams overlooked. The Nielsen-Molich heuristics state that a system should:
1. Keep users informed about its status appropriately and promptly.
2. Show information in ways users understand from how the real world operates, and in the users’ language.
3. Offer users control and let them undo errors easily.
4. Be consistent so users aren’t confused over what different words, icons, etc. mean.
5. Prevent errors – a system should either avoid conditions where errors arise or warn users before they take risky actions (e.g., “Are you
sure you want to do this?” messages).
6. Have visible information, instructions, etc. to let users recognize options, actions, etc. instead of forcing them to rely on memory.
7. Be flexible so experienced users find faster ways to attain goals.
8. Have no clutter, containing only relevant information for current tasks.
9. Provide plain-language help regarding errors and solutions.
10. List concise steps in lean, searchable documentation for overcoming problems.
How to Conduct a Heuristic Evaluation
To conduct a heuristic evaluation, you can follow these steps:
1. Know what to test and how – Whether it's the entire product or one procedure, clearly define the parameters of what to test and the objective.
2. Know your users and have clear definitions of the target audience's goals, contexts, etc. User personas can help evaluators see things from the users' perspectives.
3. Select 3–5 evaluators, ensuring their expertise in usability and the relevant industry.
4. Define the heuristics (around 5–10) – This will depend on the nature of the system/product/design. Consider adopting/adapting the Nielsen-Molich heuristics and/or using/defining others.
5. Brief evaluators on what to cover in a selection of tasks, suggesting a scale of severity codes (e.g., critical) to flag issues.
6. 1st Walkthrough – Have evaluators use the product freely so they can identify elements to analyze.
7. 2nd Walkthrough – Evaluators scrutinize individual elements according to the heuristics. They also examine how these fit into the overall design, clearly recording all issues encountered.
8. Debrief evaluators in a session so they can collate results for analysis and suggest fixes.
Cognitive Walkthrough
In addition to the more general design evaluation using human factors, another way to evaluate the UI is by performing a cognitive walkthrough. The cognitive walkthrough is a technique for evaluating the design of a user interface, with special attention to how well the interface supports "exploratory learning," i.e., first-time use without formal training [Usability Evaluation with the Cognitive Walkthrough, John Rieman, Marita Franzke, and David Redmiles]. Walkthroughs should be done when the interface begins to grow and when components begin to interact with each other.
Lewis and Rieman suggested that the following information is needed for a walkthrough:
A description or a prototype of the interface.
A task description.
A complete list of actions needed to complete the task.
An idea of who the users will be.
During a walkthrough, the evaluator performs the task using the given prototype. This is a good way of imagining the user's thoughts and actions when using the interface.
Usability
Usability can be broken down into:
effective to use (effectiveness)
Effectiveness is a very general goal and refers to how good a product is at doing what it is supposed to do.
efficient to use (efficiency)
Efficiency refers to the way a product supports users in carrying out their tasks.
safe to use (safety)
Safety involves protecting the user from dangerous conditions and undesirable situations.
having good utility (utility)
Utility refers to the extent to which the product provides the right kind of functionality so that users can do what they need or want to do.
easy to learn (learnability)
Learnability refers to how easy a system is to learn to use.
easy to remember how to use (memorability)
Memorability refers to how easy a product is to remember how to use, once learned.
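These attributes are often operationalized as simple metrics in a usability test. The sketch below is one common convention, not the only one: effectiveness as a task success rate and efficiency as mean completion time. All task data here are hypothetical:

```python
# Hypothetical usability-test results: each tuple is
# (task_completed, seconds_taken) for one participant attempt.
results = [
    (True, 42.0),
    (True, 35.5),
    (False, 90.0),
    (True, 50.5),
]

# Effectiveness: how good the product is at letting users do the task,
# measured here as the proportion of successful attempts.
effectiveness = sum(ok for ok, _ in results) / len(results)

# Efficiency: how well the product supports carrying out the task,
# measured here as mean completion time over successful attempts.
successful_times = [t for ok, t in results if ok]
efficiency = sum(successful_times) / len(successful_times)

print(f"effectiveness: {effectiveness:.0%}")
print(f"mean time on success: {efficiency:.1f}s")
```

Learnability and memorability would be measured similarly, by comparing these metrics across a user's first, repeated, and delayed sessions.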
Summary
A good non-user evaluation, or expert review, using established usability assessment principles can catch problems that an evaluation with only a few users may not reveal. If some key evaluation and design guidelines are followed, the critical problems can be detected and resolved.
Of course, performing just an evaluation without users won’t uncover all the problems either. Once the evaluation without users is complete, and
appropriate design changes are made, the next step will be to get the users to participate.