9
Usability: Evaluating Documents and Websites
Chapter Attribution: Cassandra Race
I will never forget a Christmas Eve many years ago, when the kids were finally asleep and Mr. and Mrs. Santa Claus began assembling the much-desired “brand name” dollhouse. Out came the tools, out came a hundred or so tiny plastic parts, and out came an instruction sheet written by someone clearly from a land far away. After several hours of attempting to decipher some of the worst instructions ever written, we recruited a neighbor’s 12-year-old, a seasoned veteran in the world of dream houses, and the assembly was completed in time for Christmas morning.
Whenever usability is mentioned, this incident comes to mind. Usability, a term that refers to how easily and effectively a person can use a document, website, or product to achieve a purpose, is an integral element of workplace and technical writing and must not be overlooked at any level. On the web, it’s critical for survival…if users can’t figure out how to purchase that awesome table lamp, they will quickly go elsewhere on the web to shop. The vendor loses money. If users can’t find the information they need, they will move on…there is plenty else out there that will meet their needs. And someone loses money. In the office, if employees spend large amounts of time figuring out unclear documents or deciphering poorly written instructions, the company loses money.
The concept of testing usability is relatively new…in the 1960s, the rise of the computer industry brought about a need for user manuals, and engineers realized it would be important to know how users interacted with the materials and the technology. When personal computers became available in the 1980s, and the 1990s brought the World Wide Web into households and businesses, engineers and designers…and technical writers…recognized that research into how people used and interacted with computers and documents was essential to the development not just of programs and software but of instructional materials as well (Jameson, 2013, p. 399).
As a technical writer in the 21st century, you must incorporate some level of usability testing or evaluation into the documents you create. Think back to Chapter 1, The Nature of Sexy Technical Writing, and to the standards that determine whether your document will be effective. Without some level of testing, you won’t know if you have done the job…or if your reader is annoyed or frustrated by writing that is not accurate or comprehensible, a design that is not accessible, information that is missing, or even links and design features that simply don’t work.
Characteristics of Usable Documents
According to Jakob Nielsen (2012), a usable document or website must have several key elements.
- It’s easy to learn so that the person can quickly accomplish the desired tasks
- It’s efficient, enabling the person to accomplish the task in a timely manner
- It’s easy to remember the process needed to use the document or website to accomplish this task
- It’s free from errors, enabling the user to complete the task without mistakes
- It’s satisfying to use…the user will find it pleasant or enjoyable to use this design
In addition, a usable document or website combines utility…it has the functions the user needs…with usability…how pleasant or easy it is to use. Does the document or website do what the user needs it to do? If it meets the criteria above, then it is useful, and usefulness is essential to effective technical writing and design.
Usability Evaluation
The best way to guarantee that your site or document is usable and useful is to evaluate it or test it. How does this work? The methods you choose will largely depend on the size and significance of the project and can range from the simple to the complex.
At the first level, careful proofreading or evaluation of the document using a checklist may reveal areas that need development or clarification. Ask someone to review your draft or prototype and offer suggestions that will improve the design of the document. Most types of usability evaluations involve three groups of individuals: users, the primary audience for the document; subject matter experts (SMEs), who are knowledgeable about the topics of the document or website; and usability experts, who are trained to determine what questions to ask about the draft or prototype and how best to acquire the answers that will be most useful (Markell, 2015). Usability evaluations also come in different forms and may include interviewing users, using a questionnaire or survey, conducting focus groups, and observing users.
Usability Testing for User-Centered Design
Dr. Carol Barnum (2002) identifies the following characteristics of usability testing:
- The goal is to improve the usability of a product
- The participants represent real users
- The participants do real tasks
- The researchers observe actions and record what the participants say
- The researchers analyze the findings, diagnose problems, and recommend changes
The important thing to notice here is the inclusion of paid participants, or users, who are representative of the target audience, and a research protocol that the testing follows. There are a number of testing models, including lab testing, testing without a lab, and field testing.
In the usability lab (the most expensive and time-consuming approach), a number of users come into a controlled environment and are given a task to complete in a specific time frame. Observers may watch from behind two-way mirrors and record what they see and hear, or use a television monitor to observe and listen to the participants. Typically a lab requires dedicated space and a good deal of equipment, including video or audio recording devices.
Testing without a lab requires a space, like an office or conference room, where the participants and observer will not be disturbed. The observer may sit next to the participant and record, manually or with a recorder, what the participant does, or have the participant “think aloud” during a process. Modern technology, like computers or phones with cameras and microphones, makes this form of testing easily available and economically feasible, but according to Jakob Nielsen (2012), a notepad and pen are the only equipment you will need.
Field testing means that the observer goes to the user and “tests” in the actual environment in which the document or device will be used; as an added bonus, the observer can watch users in their natural environment, with its supports and distractions.
What if I Skip This Process Altogether?
Yes, usability testing can be expensive and time consuming, but in most cases it will be worth the time and expense. The costs of not testing a product or program show up in the amount of additional training needed to support the users, the competitive advantage of the product or program, the image and reputation of the organization, and the efficient use of employee and client time (Barnum, 2002, p. 23).
Start by Making a Plan
If you are going to conduct a usability test, you have to start with a plan. That’s how you will document what you’re going to do, how you’re going to do it, how many participants you need to recruit, and what you will have them do. In this case, you will be the usability specialist.
For your plan, you need to identify the scope and purpose of the testing, decide when and where you will do the testing, identify the equipment you will need, determine how many sessions you will conduct and how long each will be, and estimate how many participants you will need. You must also determine what tasks you will be testing and develop the metrics for evaluation (a metric is a standard of measurement used for evaluation). For example, subjective metrics include the questions you’ll ask the participants about ease and pleasure, while quantitative metrics specify what data you will collect about errors, completion rate, or time to complete a task. You may need to identify your staff and what role other members of the team will play (usability.gov, Planning a Usability Test).
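As a rough illustration of the quantitative side, here is a minimal sketch (in Python, with invented participant records, task names, and field names) of how completion rate, error counts, and time on task might be tallied once sessions have been run:

```python
from statistics import mean

# Hypothetical session records: one dict per participant per task.
# Field names are invented for illustration; a real plan defines its own.
sessions = [
    {"participant": "P1", "task": "find pricing page", "completed": True,  "errors": 0, "seconds": 42},
    {"participant": "P2", "task": "find pricing page", "completed": True,  "errors": 2, "seconds": 95},
    {"participant": "P3", "task": "find pricing page", "completed": False, "errors": 3, "seconds": 180},
]

completed = [s for s in sessions if s["completed"]]

# Completion rate: share of attempts that finished the task.
completion_rate = len(completed) / len(sessions)

# Time on task: averaged here over successful attempts only (one common convention).
avg_time = mean(s["seconds"] for s in completed)

# Errors: total mistakes observed across all attempts.
total_errors = sum(s["errors"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Average time on task (successful attempts): {avg_time:.0f} s")
print(f"Total errors observed: {total_errors}")
```

Whether time on task is averaged over all attempts or only successful ones is a choice your plan should state explicitly, so the numbers can be interpreted consistently later.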
Recruit the Participants
Once you have a plan, you will recruit your participants. Try to find people who are as close to your target audience as possible; you may have multiple user groups. It’s okay to use your own colleagues for testing during piloting stages, but not during actual testing. If you are seeking insights, Jakob Nielsen states that five users will give you as much information as you need. For quantitative data collection, seeking statistics, you will need at least 20 users. If you are going to conduct iterative testing over the course of developing a document or site, you should have a different group of participants for each test. Lastly, since participants are usually compensated, you will need to decide how you will pay them. Keep in mind that you cannot pay federal employees.
Run the Test!
A typical usability test might look like this:
The facilitator welcomes the participant, explains the test session, and asks any demographic questions. The facilitator then explains what the participant will do and presents the task scenario. The participant begins working on the scenario and may think aloud during the process while the observer or facilitator takes notes on what is said and on the participant’s actions. The session ends when the tasks are complete or the allotted time is up, and the facilitator either interviews the participant with end-of-session subjective questions or thanks the participant, offers the compensation, and escorts the participant from the testing area.
Jen Bergstrom (2013) observes that choosing the best moderation technique for the session depends on the goals of the session. A concurrent think aloud (CTA) is useful for understanding participants’ thoughts as they work through the task. The retrospective think aloud (RTA) has the participants retrace their steps when the session is complete. Concurrent probing (CP) requires that the facilitator ask follow-up questions whenever the participant makes a comment or does something out of the ordinary. Retrospective probing (RP) waits until the end of the session and then asks questions about the participants’ thoughts and actions as a follow-up. Each method has its pros and cons, and none of them contributes to collecting quantitative metrics data.
Interpret and Record the Data
After you finish conducting your tests, it will be necessary to turn all that data into information that you can use to improve the document or site. Essentially, you will sort the quantitative data, like performance measures, and the subjective data, like attitude. You will analyze it carefully, looking for problems. Lastly, you will present your research in a report. Here’s an example of a usability report for a study conducted on The Purdue OWL.
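To make that sorting-and-analyzing step concrete, here is a short sketch (again in Python, with invented task names and an assumed benchmark rather than any universal standard) that separates the per-task quantitative results and flags tasks whose completion rate falls below a target, so the report can call them out as potential problem areas:

```python
# Assumed benchmark for this sketch; a real study sets its own targets.
target_completion_rate = 0.80

# Invented per-task summaries of the quantitative data.
task_results = {
    "find pricing page":   {"completion_rate": 0.60, "avg_seconds": 95},
    "download user guide": {"completion_rate": 0.90, "avg_seconds": 40},
}

# Flag any task that misses the target so it gets a closer look in the report.
problem_tasks = [
    task for task, result in task_results.items()
    if result["completion_rate"] < target_completion_rate
]

for task in problem_tasks:
    rate = task_results[task]["completion_rate"]
    print(f"Potential problem area: '{task}' ({rate:.0%} completion)")
```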
Don’t Forget Accessibility
Typically, usability testing does not consider the user with a disability. As a technical communicator, you have a responsibility, both legal and ethical, to produce documents and sites that are compliant with Section 508 of the Rehabilitation Act. A site that is accessible presents information through multiple channels, allowing users with disabilities to access the same information as users without disabilities.
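Accessibility review cannot be fully automated, but small pieces of it can be checked programmatically. The sketch below (Python, using the standard library’s html.parser and a made-up page snippet) flags images that lack alt text, one common barrier for users of screen readers:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of any <img> tag that has no alt text."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(unknown source)"))

# A made-up page fragment for illustration.
sample_page = """
<h1>Assembly Instructions</h1>
<img src="step1.png" alt="Attach the left wall to the base">
<img src="step2.png">
"""

checker = AltTextChecker()
checker.feed(sample_page)
for src in checker.missing:
    print(f"Image missing alt text: {src}")
```

A check like this catches only one narrow issue; full Section 508 conformance still requires human review of structure, color contrast, keyboard navigation, and more.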
Chapter References
Barnum, C. M. (2002). Usability testing and research. New York: Pearson Education, Inc.
Bergstrom, J. R. (2013). Moderating usability tests. Retrieved from http://www.usability.gov/how-to-and-tools/methods/running-usability-test.html
Jameson, D. A. (2013). New options for usability testing projects in business communication courses. Business Communication Quarterly, 76(4), 397-411. doi:10.1177/1080569913493460
Nielsen, J. (2012). Usability 101: Introduction to usability. Retrieved from https://www.nngroup.com/articles/usability-101-introduction-to-usability/
U.S. Department of Health and Human Services. (2013). Planning a usability test. Retrieved from http://www.usability.gov/how-to-and-tools/methods/planning/usability.html
U.S. Department of Health and Human Services. (2013). Usability evaluation basics. Retrieved from http://www.usability.gov/what-and-why-usability-evaluation.html