The Insighter June 2007
Neo Insight's e-newsletter on usability topics and techniques. We invite you to subscribe to our monthly e-newsletter.
Agile usability testing

Most people agree that it would be good to conduct usability testing of the customer experience associated with their application or website, but many feel it will take too long or be too expensive. Over the years we have developed techniques which fit well with the more agile development methods being applied within many companies. Our techniques generally involve a small number of users and can be completed in 2 to 3 weeks or even less. The added value is that we are often able to test up to twice as many tasks in a given time period as can be done using more conventional techniques.

Method

In essence, we conduct a series of mini usability tests, each with its own design, test, analysis and design-recommendation components. These short iterations may take from half a day to a week to complete. When these techniques are combined with a rapid prototyping development environment, we are able to test, make recommendations, have the changes implemented, and validate the design changes within a couple of iterations, generally in less than a week. This provides a quick way to measure and assess most major usability problems, both those anticipated or hypothesized and the unexpected issues which often surface during testing. Typically, the process involves seven steps:
Then we iterate this 7-step process 2 or 3 times... The emphasis is on identifying and fixing usability problems as quickly as possible, verifying that each fix has been successful, and moving on to find new problems which may have been masked by other usability issues until those were fixed (e.g. you won't know whether someone can use an input form if they can't find the form). Invariably, we encounter usability issues which were not anticipated in the expert review. These are usually more prevalent with specialized applications or user groups, but are easily addressed using the same technique.

The important point is to have some clear, overriding goals for the customer experience. Working with marketing, product management, and/or development groups, we establish a jointly agreed set of performance objectives for key tasks: for example, X% of people should be able to complete the task unaided within a specified amount of time, number of page views, or number of mouse clicks. Having these hypotheses ahead of time helps to eliminate individual biases in interpreting user performance results and permits us to use some simple statistical techniques for quickly identifying significant usability issues. This is quite different from the "waterfall" method of product development, where the process is more linear and problems are often not found until the very end, when fixes are costly and time-consuming.

How we design agile usability tests

WARNING: This article makes reference to non-parametric statistical techniques.

We develop a set of high-priority tasks to be tested, typically 2 to 3 times as many as we can actually test with any one participant. This provides us with a pool of tasks from which we can select substitutes once we've determined that a problem exists. For example, Table 1 shows how substitution allows us to cover 10 tasks even though only 5 tasks can be tested per session. Note that the tasks are numbered 1 through 5 only for this example; in practice we use a Latin Square technique to randomize the ordering of tasks in order to minimize any order effects (a sketch of this scheduling appears after the discussion of Table 1 below).

Table 1: Task substitution over sessions
The savings accrue from being able to quickly identify usability issues associated with certain tasks and substitute new tasks for continued testing. Following a round of testing similar to that shown in Table 1 (6 test sessions), we would consider the best solutions to address the issues observed and recommend changes before the next round of testing, so that certain tasks could be tested again with the revised user interface. In some cases we would have to gather more data to be certain whether an issue was significant enough to worry about; in other cases, the changes required might be too complex to manage between testing rounds. In those cases, we may conduct other types of tests with paper prototypes to explore the options we are considering for a more major or holistic redesign.

You'll notice some tasks are swapped out after only 2 or 3 sessions, and you may wonder why. One of the things we commonly encountered using traditional usability techniques was that we'd expect something to be a problem, we'd observe it occurring for participant after participant, and yet we'd keep testing that same task through to the end of the study. This was very wasteful of resources and went well past the point of diminishing returns.
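For illustration, here is a minimal Python sketch of how a rotated (Latin Square style) task ordering with mid-study substitution might be generated. The task names, session count, and the swap_after schedule are hypothetical examples, not values from an actual study; in a real study the swap decisions come from observed failures, not a predefined table.

```python
def session_order(active, session):
    """Rotate the active task list by the session index, so across
    sessions every task appears in every serial position (a cyclic
    Latin square while the active set stays stable)."""
    n = len(active)
    return [active[(session + i) % n] for i in range(n)]

# Hypothetical pool: 10 high-priority tasks, 5 tested per session.
pool = [f"Task {i}" for i in range(1, 11)]
active = pool[:5]        # tasks currently under test
substitutes = pool[5:]   # replacements for tasks with confirmed issues

# Hypothetical substitution decisions: task -> session after which it
# is swapped out because a usability issue was confirmed.
swap_after = {"Task 2": 2, "Task 4": 3}

for session in range(6):
    print(f"Session {session + 1}: {session_order(active, session)}")
    for task, last in list(swap_after.items()):
        if session + 1 == last and task in active and substitutes:
            active[active.index(task)] = substitutes.pop(0)
```

Running this prints six session orderings in which each confirmed problem task drops out early and a fresh task from the pool takes its slot, mirroring the substitution pattern Table 1 describes.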
In these studies, we are not trying to predict whether a politician will get 50% versus 54% of the vote. We are simply trying to confirm or reject a simple hypothesis based on the binomial distribution. For example, let's say we conservatively hypothesize that 9 out of 10 (90%) of the people tested should be able to complete a given task successfully. How likely is it, then, that we would observe 2 or 3 people in a row who are unable to do so? It turns out to be not very likely at all. In fact, observing 2 failures in 4 people (as shown in Table 2) is still a significant result at the 0.05 level: there is less than a 5% chance of observing this result simply by chance. We can therefore feel quite confident that the usability issue we are observing is significant and should be fixed.
More often, product managers will not be satisfied with 10% of the user population having a problem. They will prefer a more stringent test and assume the failure rate should be less than 1 in 20 people, or 5%. In this case (see Table 3), the probability of 2 or 3 people in a small sample having difficulty is even lower, often below 1 in 100 (1%).
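To make the arithmetic concrete, here is a minimal Python sketch of the underlying binomial calculation, using only the standard library. The hypothesized success rates (90% and 95%) mirror the examples above; the exact layouts of Tables 2 and 3 are not reproduced here, and the sample sizes in the last two lines are illustrative.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k failures among n participants, failure rate p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_tail(k, n, p):
    """P(k or more failures among n participants, failure rate p)."""
    return sum(binom_pmf(j, n, p) for j in range(k, n + 1))

# Hypothesis: 90% of users should succeed, i.e. a 10% failure rate.
print(binom_pmf(2, 4, 0.10))    # ~0.049: exactly 2 failures in 4 people
print(binom_tail(2, 4, 0.10))   # ~0.052: 2 or more failures in 4 people

# Stricter hypothesis: 95% success, i.e. a 5% failure rate.
print(binom_tail(2, 4, 0.05))   # ~0.014: well under 5%
print(binom_tail(3, 5, 0.05))   # ~0.001: roughly 1 in 1000
```

The point is how quickly the probabilities shrink: under either hypothesized failure rate, two or three consecutive failures are already strong evidence that the problem is real, not chance.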
The end result is that most critical or major usability issues can be discovered and confirmed with only 3 or 4 people, resulting in considerable savings in time and money.

Benefits of agile usability testing
Critical success factors
Summary

Agile usability testing overcomes some common problems of usability testing:
Our experience has shown this type of agile usability testing produces informed decisions and solutions in the shortest amount of time. We'd love to hear your comments on agile usability testing.
What usability testing issues would you like us to write more about?

Our experience with remote usability testing

Recently, we've been conducting more and more of our usability test sessions remotely, using TechSmith's UserVue product. We thought you might be interested in our experience to date.
Advantages of remote usability testing

Although we encountered a few teething pains early on in our testing, we've found remote usability testing provides a number of distinct advantages, including:
Disadvantages of remote usability testing

There are a number of disadvantages, which may affect the choice for some studies, but we have found most of them easy to deal with, or minimal in impact, when testing web-based applications or Internet/intranet websites.
Remote test environment

The typical characteristics of the remote test environment are:
Our experience

To us, the main advantage is being able to test in a more realistic, contextually relevant environment: people are in their homes or offices, using familiar equipment and interacting with the software applications they use every day. Although there is some loss of control, we feel it is more than made up for by being able to observe people using the web as they would normally, subject to phone calls, pop-ups, email arrivals, and so on.

Seeing people's personal computer environments can sometimes be enlightening. Compare the amount of screen real estate for these two participants (see image below) and notice how differences in browser configuration and resolution can drastically affect the amount of content visible to the participant.

Lack of video, which typically shows the participant's face and body language, was a concern at first, but we have found most non-verbal cues are readily discernible in the participant's voice. In conjunction with our use of a "think aloud" protocol, which asks participants to verbalize what they are thinking or experiencing, we can generally tell when someone is getting tired, frustrated or confused. Compare these two usability highlight videos, viewing the version without the participant video first, and see if you think much is added by having the participant video in the second version. No video - Large WMV (1.8 MB) or Small WMV (631 KB) or Small MOV (13.8 MB)

Recruiting has been easier because people do not have to travel to find the test location. Because the time commitment is often half or less of that required for conventional testing, they are much more willing to take an hour out of their workday. And because we are not constrained by geographic location, one facilitator has been able to test people in Vancouver, Calgary, Toronto, and Ottawa during the same day.

We've also been able to test with blind participants using screen readers. Normally this would be difficult in a lab environment, as blind individuals often customize their computing environment extensively; with UserVue we did not have to make any special arrangements.

Our invitation email provides a simple link which lets participants test their ability to connect ahead of time and then connects them to a screen-sharing session at the assigned time. Most people had no difficulty connecting the first time. Only a handful required any instruction, and this was typically given in the first few minutes of the test session.

Being able to invite observers at any time is a big plus for stimulating interest in the usability results and for letting stakeholders share in the experiences of their customers. Once they've been exposed to usability testing, they have a much better appreciation of how it can contribute to more successful websites or web applications.

Technical glitches were no more prevalent than we have typically experienced in face-to-face testing. We've encountered no firewall issues and only a couple of instances of a participant's computer running slowly or failing to get a good connection speed. In general, participants were very tolerant of any network issues, such as during the early teething period when we had to re-establish a session a few times after our connection was lost.

We have been very pleased with the quality of the results from our remote usability testing. In our experience, they are on par with or superior to the results we've obtained previously using face-to-face techniques.
Where remote results have been superior, it is because we gain a better understanding of contextual issues - for example, type of browser, screen resolution, window sizing, multi-tasking, and interactions with other software. A comparison study by Bolt | Peters in 2005 found "no significant differences in the quality and quantity of usability findings between remote and in-lab approaches." However, consistent with our experience, the study showed key advantages of remote testing in the areas of time, recruiting, and the ability to test geographically diverse audiences. Remote testing is most appropriate for informational websites, web applications, intranets, and ecommerce sites. You can read more on this topic by visiting the Wiki for Remote Usability. We'd love to hear your comments on remote usability testing.

UserVue updates

TechSmith is continually evolving UserVue's capabilities. In the last few months they have made the following improvements, which continue to make this one of our tools of choice.
Try it out by visiting the UserVue home page and clicking on the "Try it for free" link on the right. Or contact us for more information about how we have been applying UserVue to conduct remote usability testing across Canada and in Europe: call 613 271-3001 or email us about UserVue.

Quotes of the month

"If your goal is to sell usability in your organization, then I believe 3-4 users will be sufficient. Much more important than the number of users is the sensible involvement of your project team in the test process and proper consensus-building after the test." Rolf Molich, 2002

"Interior pages accounted for 60 percent of the initial page views. Recognize this and support it. Don't try to force users to enter on the homepage." Jakob Nielsen and Hoa Loranger, Prioritizing Web Usability, 2006

Save $100 on our next two one-day workshops!

September 21 is the deadline for early registration in our one-day workshop Usability challenges of new Web technologies, which takes place on Thursday, October 4. We will review many live Web 2.0 examples and explore how to adapt traditional usability techniques to design and evaluate the new generation of web user interfaces. Early registrants save $100. Save even more for group bookings. Come join us.

For our Thursday, October 11 workshop Designing usable Web-based applications, sign up before September 28 for the early registration discount of $100. Web applications are becoming as powerful as the ones on our desktops. Join this workshop to explore the challenges of designing web applications, and come away with tips, techniques and current best practices for providing high-value services that enable your users to fulfill their goals effectively and efficiently.

To take advantage of further discounts, call us to run either workshop at your location for five or more people: (613) 271-3001.

If you have any comments on The Insighter, or ideas on usability topics you'd like to hear about, send us an email with your comments. We invite everyone to subscribe to The Insighter, our monthly e-newsletter. If you wish to unsubscribe, just send us an unsubscribe email.