Wednesday, January 25, 2012

Not right does not mean wrong

I'm reading a really good document about risk analysis, and the author makes the point that when using probabilities to make predictions, at some point the future will unfold in a way that will make others perceive you were wrong. He emphasized "perceive" and that got me thinking.

We do that a lot. Someone does their analysis, makes a decision, and then acts on it. Like a football coach who decides to go for it on fourth down in overtime rather than punt and put the ball in the hands of the opponents' red-hot quarterback. The play doesn't work and everyone says he was wrong. Oh yeah, based on what?

"Well, the play didn't work, that proves he was wrong." No it doesn't. It could merely be an instance where the future took the less probable path. It's gonna happen! Chances are good that his decision was right.

Someone's analysis and decisions should be judged only over time and against a pattern of how often the predictions come true. Additionally, we should judge them by whether their analysis has a feedback loop that learns from failure and how quickly they respond to the unexpected outcome.
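To see why a single outcome can't prove a decision wrong, here's a quick simulation. The success probabilities are hypothetical numbers I've made up for illustration; the point is the principle, not the football.

```python
import random

random.seed(42)

# Hypothetical odds: going for it succeeds 60% of the time,
# punting (and facing that red-hot quarterback) only 45%.
P_GO_FOR_IT = 0.60

def one_game() -> bool:
    """The coach makes the statistically better call; the future still varies."""
    return random.random() < P_GO_FOR_IT

# Judged on one game, the "right" call still fails 4 times out of 10.
# Judged over many decisions, the pattern reveals whether he was right.
trials = 10_000
wins = sum(one_game() for _ in range(trials))
print(f"Win rate over {trials:,} decisions: {wins / trials:.2%}")
```

Run once, the coach looks wrong 40% of the time; run ten thousand times, the win rate settles near 60% and the quality of the decision becomes visible. That's the "pattern over time" the post is arguing for.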

Anything else is just Monday morning quarterbacking.

Monday, January 16, 2012

Think Aloud Is More than Talk Aloud

Nielsen's current Alert Box reinforces that think-aloud is a great usability test tool. I couldn't agree more, but I'd like to add some in-the-trenches wisdom I learned from my first usability mentor, Loren Burke. There is a big difference between someone thinking out loud about the task they are doing and someone voicing their opinion about the design. The first is very valuable; the second, meh at best and dangerous at worst.

Here is the kind of data you WANT to get from a user who is thinking out loud:
  • "I'm looking at the UI and I think it does..."
  • "I want to do..."
  • "Hmmm, that's not what I expected, I thought it was going to..."
  • "That took longer than I expected/wanted."
In short, you want to learn about how the user sees her task and how she is making sense out of the UI in terms of that task.

What you don't need to hear is stuff like:
  • "I think the background should be blue."
  • "I don't think other users are going to understand..."
For one, this kind of information does not illuminate the problem space; it just adds one more opinion into a mix that probably has no shortage of opinions already. At best it's a distraction; at worst it leads developers to make bad design decisions based on what the "users told us they wanted." Take the user who specified blue for the background: did he have any background in visual design? I'm amazed how quickly we are willing to overturn the opinion of our in-house visual design expert with the degree from SCAD just because an accountant tells us he likes blue.

OK, but how do you get users to give you the kind of feedback you need? I use two techniques to improve the think-aloud I get:
  • Instructive practice
  • Reinforcement and extinction (a la B. F. Skinner)
Instructive Practice
I picked this exercise up from Mike Hannafin at the University of Georgia. Before I start my first task with a user I explain the think-aloud protocol and then ask the participant to count the windows in his house while practicing thinking out loud. I tell them, "I'm not interested in how many windows you have, but I am interested in how you go about doing this." Some will sit quietly and then say, "12." I then point out that I have learned nothing about how they solved the problem. I ask them to try again but to work real hard at thinking out loud. They try again, "OK, in my kitchen I have one over the sink, in the living room there are three, the den has one behind the couch..." At that point I stop them. "OK, now I have some insight into how you solve the problem, you imagine yourself inside your house and you kind of go from room to room counting the windows--starting with the kitchen." Usually their light bulb goes on and they say something like, "Oh, I see, you want me to think out loud." I suppress my instinctive response of "Duh!" and say, "Exactly, I'm going to want to know how you are making sense of the product while you do what you do."

Another useful tip is to have a safe and short first task, so if the person is having trouble thinking out loud you have a chance to work with them some more before getting into longer, meatier tasks.

Reinforcement and Extinction
Reinforcement and extinction are two principles from B.F. Skinner's Operant Conditioning. Most of us are pretty familiar with reinforcement, but extinction might be a new concept.

Reinforce user behavior only when they give you data, and praise them for exactly that: giving you data. Do NOT reinforce them for giving design suggestions.

For example, if a user tells me "This instruction here is really confusing," I first try to clarify where the confusion is, e.g. a word they don't understand, an ambiguity or whatever. Then I usually say "Thanks, that's useful to know." Same thing if they're trying to get the product to do something it doesn't. "So you expect it to automatically correct the word 'manger' to 'manager' because manger wouldn't make sense here. Thanks, that's useful to know." Notice, I did NOT say, "That's a good suggestion." Why not? For one, I've got a room full of developers watching this test who know it's an unreasonable expectation for a spell checker. Also, when I praise a user for making a good design suggestion, what behavior am I reinforcing? Making design suggestions. But I didn't bring this user in to make design suggestions; I wanted to see how real people made sense of the UI in the context of doing authentic tasks.

The next technique, extinction, is how I deal with unwanted feedback such as style preferences, design suggestions, etc. I do nothing. That is what extinction is: the removal of feedback that would reinforce the behavior. Essentially, reinforced behavior continues; behavior that is not reinforced eventually goes away (becomes extinct). So a typical exchange goes something like this:

User: I didn't see the Submit button.
Me: So you didn't know what to do when you finished the form because you didn't see the Submit button. Thanks, that's useful information.
User: Yeah, I think you should make it red.
Me: In this next task we are going to ask you to...

My first response: positive reinforcement for the insight--didn't notice the button.
My second response: moved right along without acknowledging the design suggestion. Thanks to the first response, we know we have a problem with Submit not being noticed. I'll let that visual designer with the degree from SCAD run with that.

I know there are some who would want the user's design ideas. But once you start down that path, the user quits sharing their insight into the problem space and starts giving you their solutions. It's like the user who says "12 windows." OK, I know your answer but I have no insight into how you got there. And again, usually I have no shortage of experts in the subject of the solution; what I lack is understanding the problem from the user's perspective.

So by all means, get your users thinking out loud, but encourage talk that illuminates the problem space--that's what a usability test is all about.