The Insighter

June 2007


Neo Insight's e-newsletter on Usability topics and techniques.
We invite you to subscribe to our monthly e-newsletter.

Our next workshops

Oct 4, 2007: Usability challenges of new Web technologies – one-day workshop.
Save $100 if you register before September 21.

Oct 11, 2007: Designing usable Web-based applications – one-day workshop.
Save $100 if you register before September 28.

Upcoming events

Nov 5-8, 2007 User Interface 12 Conference, Cambridge, MA, USA.
Nov 12-15, 2007 UX Intensive, Vancouver, BC, Canada.
Dec 2-7, 2007 User Experience 2007, Las Vegas, NV, USA.

In this issue

  • Agile usability testing
  • Our experience with remote usability testing
  • UserVue updates
  • Quotes of the month
  • Save $100 on our next two one-day workshops!

Agile usability testing

Most people agree that it would be good to conduct usability testing of the customer experience associated with their application or website, but many feel it will take too long or cost too much. Over the years we have developed usability testing techniques that fit well with the more agile development practices now being applied within many companies. Our techniques generally involve a small number of users and can be completed in two to three weeks or less. The added value is that we are often able to test up to twice as many tasks within a given time period as more conventional techniques allow.

Method

In essence, we conduct a series of mini usability tests, each with its own design, test, analysis and design-recommendation components. These short iterations may take from half a day to a week to complete. When these techniques are combined with a rapid prototyping development environment, we are able to test, make recommendations, have the changes implemented, and validate the design changes within a couple of iterations, generally in less than a week.

This provides a quick way to measure and assess most major usability problems - both the ones anticipated or hypothesized and those unexpected issues which often surface during testing. Typically, the process involves seven steps:

  1. Identify the priority user tasks (most frequent or critical) which must be supported.
  2. Conduct an expert usability review to identify the obvious usability issues and have them fixed.
  3. Develop hypotheses regarding other potential problem areas which need to be tested.
  4. Conduct usability testing focused on those areas of uncertainty.
  5. Analyze the results and produce design recommendations to address the usability issues.
  6. Have the problems fixed in conjunction with the development team.
  7. Conduct the next iteration of testing and validate that the previous usability issues have been resolved.

We then iterate this seven-step process two or three times.

The emphasis is on identifying and fixing usability problems as quickly as possible, verifying that each fix has been successful, and moving on to find new problems that may have been masked by earlier issues (for example, you won't know whether someone can use an input form if they can't find the form in the first place).

Invariably, we encounter other usability issues which were not anticipated from the expert review. These are usually more prevalent with specialized applications or user groups, but are easily addressed using the same technique. The important point is to have some clear, overriding goals for the customer experience.

Working with marketing, product management, and/or development groups, we establish a jointly agreed set of performance objectives for key tasks - for example, that X% of people should be able to complete the task unaided within a specified amount of time, number of page views, or number of mouse clicks. Having these hypotheses ahead of time helps to eliminate individual biases in interpreting user performance results and lets us use some simple statistical techniques to quickly identify significant usability issues. This is quite different from the "waterfall" method of product development, where the process is more linear and problems are often not found until the very end, when fixes are costly and time-consuming.
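As an illustration only - a minimal Python sketch in which the task name, thresholds, and field names are hypothetical rather than drawn from any particular study - objectives of this kind can be captured in a simple structure and each participant's attempt checked against it:

    # Minimal sketch of agreed task performance objectives (hypothetical values).
    from dataclasses import dataclass

    @dataclass
    class TaskObjective:
        task: str
        min_success_rate: float  # e.g. 0.90 = 90% should complete unaided (applies across participants)
        max_seconds: int         # time allowed to complete the task
        max_page_views: int      # page views allowed
        max_clicks: int          # mouse clicks allowed

    @dataclass
    class ObservedResult:
        completed: bool
        seconds: int
        page_views: int
        clicks: int

    def meets_objective(obj: TaskObjective, result: ObservedResult) -> bool:
        """True if a single participant's attempt satisfies the agreed criteria."""
        return (result.completed
                and result.seconds <= obj.max_seconds
                and result.page_views <= obj.max_page_views
                and result.clicks <= obj.max_clicks)

    # Hypothetical example: find a product page unaided within 2 minutes,
    # 6 page views, and 10 mouse clicks.
    objective = TaskObjective("Find product page", 0.90, 120, 6, 10)
    print(meets_objective(objective, ObservedResult(True, 95, 4, 7)))  # True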

How we design agile usability tests

WARNING: This article makes reference to non-parametric statistical techniques.
Reader discretion is advised.

We develop a set of high-priority tasks to be tested, typically 2 to 3 times as many as we can actually test with any one participant. This gives us a pool of tasks from which we can select substitutes once we've determined that a problem exists. For example, Table 1 shows how substitution lets us test 10 tasks even though only 5 tasks can be covered in any one session. Note that the tasks are ordered 1 through 5 only for this example; in reality, we use a Latin square technique to randomize the ordering of tasks and minimize order effects (a simple sketch follows Table 1).

Table 1: Task substitution over sessions
Example of task substitution over sessions, based on obtaining significant results over 2 to 4 sessions. This permits doubling the number of tasks which can be tested.

[Image: table showing how tasks can be substituted over sessions once a usability issue has been identified]
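As a rough sketch of the counterbalancing idea mentioned above (an illustration only, not the exact procedure used in our studies), a simple cyclic Latin square assigns each task to each serial position exactly once across a set of sessions:

    # Minimal sketch: cyclic Latin square for counterbalancing task order.
    # Each row is the task order for one session; every task appears once
    # in every serial position across the square, spreading out order effects.
    def latin_square_orders(tasks):
        n = len(tasks)
        return [[tasks[(row + col) % n] for col in range(n)] for row in range(n)]

    for session, order in enumerate(latin_square_orders(["T1", "T2", "T3", "T4", "T5"]), start=1):
        print("Session", session, order)
    # Session 1 ['T1', 'T2', 'T3', 'T4', 'T5']
    # Session 2 ['T2', 'T3', 'T4', 'T5', 'T1']
    # ... a fully balanced Latin square would additionally control
    # immediate carry-over (which-task-precedes-which) effects.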

The savings accrue from being able to quickly identify usability issues associated with certain tasks and substitute new tasks for continued testing. Following a round of testing similar to that shown in Table 1 (6 test sessions), we would consider the best solutions to address the issues observed and recommend that some changes be made before the next round of testing so that certain tasks could be tested again with the revised user interface. In some cases we would have to gather more data to be certain whether the issue was significant enough to worry about. And, in other cases, the changes required might be too complex to manage between testing rounds. In these cases, we may conduct other types of tests with paper prototypes to explore the options we are considering for a more major or holistic redesign.

You'll notice some tasks are swapped out after only 2 or 3 sessions, and you may wonder why. One of the things we commonly encountered using traditional usability techniques was that we would expect something to be a problem, observe it occurring for participant after participant, and yet keep testing that same task through to the end of the study. This was very wasteful of resources and went well past the point of diminishing returns.

The focus should be less about the number of users and
more about increasing the number of tasks tested.

In these studies, we are not trying to predict whether a politician will get 50 versus 54% of the votes. We are simply trying to prove or disprove a simple hypothesis based on the binomial distribution. For example, let's say we conservatively hypothesize that 9 out of 10 people (90%) should be able to complete a given task successfully. How likely is it, then, to observe 2 or 3 people in a row who are unable to do so? It turns out to be not very likely at all. In fact, observing 2 failures in 4 people (as shown in Table 2) is still a significant result at the 0.05 level. That is, there is less than a 5% chance of observing this result simply by chance. We can therefore feel quite confident that the usability issue we are observing is significant and should be fixed.
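To make that arithmetic concrete, here is a minimal Python sketch of the calculation; the 10% and 5% hypothesized failure rates correspond to Tables 2 and 3, and the function name is ours:

    # Minimal sketch: probability of seeing k or more task failures among n
    # participants purely by chance, given a hypothesized failure rate.
    from math import comb

    def p_at_least_k_failures(n, k, fail_rate):
        """One-tailed binomial probability P(failures >= k)."""
        return sum(comb(n, i) * fail_rate**i * (1 - fail_rate)**(n - i)
                   for i in range(k, n + 1))

    # Hypothesized failure rate of 10% (Table 2): 2 failures in 3 participants.
    print(round(p_at_least_k_failures(3, 2, 0.10), 3))  # 0.028 -> significant at the 0.05 level
    # Hypothesized failure rate of 5% (Table 3): 2 failures in 4 participants.
    print(round(p_at_least_k_failures(4, 2, 0.05), 3))  # 0.014 -> significant at the 0.05 level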

Table 2: Participants required to identify usability issues
Number of test participants exhibiting a specific usability issue required to obtain a significant result. Assumes the failure rate should be less than 1 in 10 people (10%).

# of participants attempting task | # of participants not completing the task | Significance (probability of occurring by chance)
2 | 2 | Yes (p < 0.05)
3 | 2 | Yes (p < 0.05)
3 | 3 | Yes (p < 0.01)
4 | 2 | Yes (p < 0.05)
4 | 3 | Yes (p < 0.01)
4 | 4 | Yes (p < 0.001)

More often, product managers will not be satisfied with 10% of the user population having a problem. They will prefer a more stringent test and assume the failure rate should be less than 1 in 20 people, or 5%. In this case (see Table 3), the probability of 2 or 3 people in a small sample having difficulties is even lower, often yielding probabilities of less than 1 in 100 (1%).

Table 3: Participants required to identify usability issues
Number of test participants exhibiting a specific usability issue required to obtain a significant result. Assumes the failure rate should be less than 1 in 20 people (5%).

# of participants attempting task | # of participants not completing the task | Significance (probability of occurring by chance)
2 | 2 | Yes (p < 0.01)
3 | 2 | Yes (p < 0.01)
3 | 3 | Yes (p < 0.001)
4 | 2 | Yes (p < 0.05)
4 | 3 | Yes (p < 0.001)
4 | 4 | Yes (p < 0.001)

The end result is that most critical or major usability issues can be discovered and confirmed with only 3 or 4 people, resulting in considerable savings in time and money.

Benefits of agile usability testing

  • Good focus on critical and high-priority items but still open to unexpected discoveries.
  • Well suited to testing in conjunction with rapid prototyping environments.
  • More problems found more quickly and at less expense.
  • More problems fixed and the fixes validated.
  • Fixing problems early helps uncover other problems which might have gone unnoticed.
  • Tightly integrated with design and development process.

Critical success factors

  • Use of expert review to focus subsequent user testing on likely problem areas.
  • Use of rapid prototyping tools to make quick changes.
  • Culture focused on the user and the overall customer experience.
  • Involvement of all stakeholders in the testing and resolution of usability issues.
  • Empowerment of the team to make quick decisions and design changes.
  • Agreement on key tasks and performance criteria.
  • Access to members of the target audience.
  • Ability to rapidly analyze test results and substitute new tasks.

Summary

Agile usability testing overcomes some common problems of usability testing:

  • Not bringing the team together to get buy-in to usability problems and solutions.
  • Not testing enough tasks to uncover the majority of serious usability issues.
  • Not iterating to test the effectiveness of recommended solutions.

Our experience has shown this type of agile usability testing produces informed decisions and solutions in the shortest amount of time.

We'd love to hear your comments on agile usability testing. Mail Neo Insight with your comments.

What usability testing issues would you like us to write more about?







Our experience with remote usability testing

Recently, we've been conducting more and more of our usability test sessions remotely, using TechSmith's UserVue product. We thought you might be interested in our experience to date.

UserVue has allowed us to provide faster, more realistic,
and less costly usability evaluations for our clients.

Advantages of remote usability testing

Although we encountered a few teething pains early on in our testing, we've found remote usability testing provides us with a number of distinct advantages, including:

  • Savings in cost and time
    • No travel required for facilitator or participant
    • No test facility or usability lab required
    • Usability tests can be run in parallel with additional facilitators
    • Less incentive required
    • Fewer "No shows"
  • Easier to recruit
    • Larger and more diverse pool available
    • Specialists don't waste time travelling
  • More consistent and flexible testing
    • More realistic environment - in the context of the other software and applications the participant uses
    • Participants feel more at ease in their own home or office
    • Potential to have the participant control proprietary computers and software
    • Multiple locations can be tested around the world the same day
    • More uniform method and hints from same facilitators
  • Remote observation more effective
    • Does not interfere with test session
    • Allows people anywhere in the world to observe - great for widely-dispersed stakeholders
    • Observation space is not limited - could have hundreds observing if in groups
    • Observers can see more clearly what is happening on screen
    • Observers can easily pass notes or comments to facilitator

Disadvantages of remote usability testing

Although there are a number of disadvantages that may affect the choice for some studies, we have found most of them easy to deal with, or of minimal impact, when testing web-based applications or Internet/intranet websites.

  • Lack of physical presence
    • Cannot see the participant's face or body language
    • Confirming identification of participants is more difficult - no image or photo ID
    • Non-disclosure agreements have to be done electronically
    • Distributing incentives can be more difficult - electronic coupons or cheques versus cash
  • Test environment not as controlled
    • Open office configurations make it difficult for the participant to follow the "think aloud" protocol
    • Risk that participant's computer may not function properly or could be harmed
    • Limited to compatible software and hardware platforms
    • Phone calls, interruptions, noise, etc.
  • Logistics can be more difficult
    • Different time zones may make scheduling more difficult for facilitators
    • Potential for network delays to cause synchronization problems
    • Limited to testing on high-speed connections

Remote test environment

The typical characteristics of the remote test environment are:

  • User is in their natural home/office environment
  • Facilitator and observers view screen and mouse movements of participant
  • Facilitator and participant interact via a telephone connection and chat
  • Observers hear audio and view screen interactions via computer and internet connection
  • Participant screen, audio, and keyboard/mouse activity are recorded for subsequent analysis

Our experience

To us, the main advantage is being able to test in a more realistic, contextually relevant environment — people are in their homes or offices, using familiar equipment and interacting with the software applications they use every day. Although there is some loss of control, we feel it is more than made up for by being able to observe people using the web as they would normally, subject to phone calls, pop-ups, email arrivals, and so on.

Seeing people's personal computer environments can sometimes be enlightening. Compare the amount of screen real estate for these two participants (see image below) and notice how differences in browser configuration and resolution can drastically affect the amount of content visible to the participant.

[Image: comparison of the drastically different amounts of visible content for two participants with different screen resolutions]

Lack of video, which typically shows the participant's face and body language, was a concern at first, but we have found most non-verbal cues are readily discernible in the participant's voice. In conjunction with our use of a "think aloud" protocol, which asks the user to verbalize what they are thinking or experiencing, we can generally tell when someone is getting tired, frustrated or confused. Compare these two usability highlight videos, viewing the version without the participant video first, and see if you think much is added by having the participant video in the second version.

No video - Large WMV (1.8 MB) or Small WMV (631 KB) or Small MOV (13.8 MB)
With video - Large WMV (2.0 MB) or Small WMV (764 KB) or Small MOV (14.2 MB)

Recruiting has been easier because people do not have to travel and try to find the test location. Because the time commitment is often one half or less of that required for conventional testing, they are much more willing to take an hour out of their workday. Because we are not constrained by geographical location, one facilitator has been able to test people in Vancouver, Calgary, Toronto, and Ottawa during the same day. We've also been able to test with blind participants using screen readers. Normally, this would be difficult to test in a lab environment as blind individuals often customize their computing environment extensively. With UserVue we did not have to make any special arrangements.

Our invitation email provides a simple link which allows participants to test their ability to connect ahead of time and then connects them to a screen-sharing session at the assigned time. Most people had no difficulty connecting the first time. Only a handful of people required any instruction, and this was typically done in the first few minutes of the test session.

Being able to invite observers at any time is a big plus for stimulating interest in the usability results and for letting stakeholders share in the experiences of their customers. Once they've been exposed to usability testing, they have a much better appreciation of how it can contribute to more successful websites or web applications.

Technical glitches were no more prevalent than we have typically experienced in face-to-face testing. We've encountered no firewall issues and only a couple of instances of a participant's computer running slowly or not being able to get a very good connection speed. In general, participants were very tolerant of any network issues, such as during the teething period when we had to re-establish a session a few times after our connection was lost.

We have been very pleased with the quality of the results we've obtained from our remote usability testing. In our experience, they are on a par with, or superior to, the results we've obtained previously using face-to-face techniques. The superior elements come from gaining a better understanding of contextual issues - for example, type of browser, screen resolution, window sizing, multi-tasking, interactions with other software, etc.

A comparison study by Bolt | Peters in 2005 found "no significant differences in the quality and quantity of usability findings between remote and in-lab approaches." However, consistent with our experience, the study showed key advantages of remote testing in the areas of time, recruiting, and the ability to test geographically diverse audiences.

Remote testing is most appropriate for informational websites, web applications, intranets, and ecommerce sites. You can read more on this topic by visiting the Wiki for Remote Usability.

We'd love to hear your comments on remote usability testing. Mail Neo Insight with your comments.



UserVue updates

TechSmith is continually evolving UserVue's capabilities. In the last few months they have made the following improvements which continue to make this one of our tools of choice.

  • Improved firewall compatibility - We have not encountered any firewalls in our recent testing which UserVue has not been able to accommodate.
  • Marker export - Text markers entered by the facilitator or any observer can be saved in a standard CSV file and opened in Excel for further processing. They are also indexed to the video file.
  • Support for Vista - UserVue now supports Vista. An FAQ on Vista compatibility is available on the TechSmith website.
  • Improved use of screen real-estate - Chat and marker panes can now be hidden to provide a larger view of the participant's screen during testing.
  • Easy set-up or session reconnect - UserVue now supports simple invitation codes which the participant or observers can enter into the UserVue welcome screen. This permits easy ad hoc test sessions where people can be invited just by telling them the code verbally.
  • Customized invitations - Invitation emails can now be sent from any address of your choice.
  • Accelerated workflow - Improvements to the user interface make setting up and controlling sessions easier.

Try it out by visiting the UserVue home page and clicking on the "Try it for free" link on the right.

Or contact us for more information about how we have been applying UserVue to conduct remote usability testing across Canada and in Europe - call 613 271-3001 or email us about UserVue.


Quotes of the month

"If your goal is to sell usability in your organization, then I believe 3-4 users will be sufficient. Much more important than the number of users is the sensible involvement of your project team in the test process and proper consensus-building after the test"

Rolf Molich, 2002

“Interior pages accounted for 60 percent of the initial page views. Recognize this and support it. Don't try to force users to enter on the homepage.”

Jakob Nielsen and Hoa Loranger, Prioritizing Web Usability, 2006


Save $100 on our next two one-day workshops!

September 21 is the deadline for early registration in our one-day workshop Usability challenges of new Web technologies which takes place on Thursday, October 4. We will review many live Web 2.0 examples and explore how to adapt traditional usability techniques to design and evaluate the new generation of web user interfaces. Early registrants save $100. Save even more for group bookings. Come join us.

For our Thursday October 11 workshop Designing usable Web-based applications, sign up before September 28 for the early registration discount of $100. Web applications are becoming as powerful as the ones on our desktop. Join this workshop to explore the challenges of designing web applications, and come away with tips, techniques and current best practices for providing high-value services that enable your users to fulfill their goals effectively and efficiently.

To take advantage of further discounts, call us to run either workshop at your location for five or more people: (613) 271-3001.



If you have any comments on The Insighter, or ideas on usability topics you'd like to hear about, send us an email with your comments.

We invite everyone to subscribe to the Insighter, our monthly e-newsletter.

If you wish to unsubscribe, just send us an unsubscribe email.

 
 