I was in a test where a fellow UX professional wanted "meaty" feedback. I was curious as to what she meant by that. I thought we learned, after observing and questioning just 2 users, that checkout paths are pretty much commodities -- users knew approximately what to expect, they wanted better ways to do things like split shipping, and they wanted a clearer breakdown of charges. A couple of times, users told us that if they didn't know the real total of the purchase, they would have abandoned the site and just gone to the store. To me, those were huge findings, mainly because they covered the little things that honestly push users just over the edge to stop shopping and give up. Basic, but huge. To me that meant: make this as close to everyone else as possible so I don't need to think (and if we want to stray from that, we need to rethink the process and review it with users - they may not want it).
Are we always looking for the big revelation? I hate to say this, but general usability findings are pretty - well - common and straightforward. Sometimes, almost intuitive. I guess this comes down to the question: what are we learning from usability? Are we looking for those seemingly tiny pebbles that turn out to be boulders, or are we looking for the big ah-has? Are we looking for problems, or are we getting insights into what makes these users tick and do what we want them to do?
So back to my question: what is a substantial result of a usability test, one considered satisfying? From my perspective, the goal of testing is learning how users think - not just about the to-dos in the experience, but what they are willing to tolerate in order to achieve their goals (in this case, buying something). I see testing as an opportunity to understand what matters to users (what's hard and what's easy), and that is what gives us insight into what's usable. I want to leave a test understanding what will push a user to walk away as well as what will keep a user fixed to the screen. Mainly, I want to understand what makes them tick.
I think the user's background also matters. Is the user a frequent buyer? Does he use a lot of sites for the same thing? What does he expect? What has he seen that he likes? Today, we are meeting users who are generally highly experienced and competent on the Web - it's not new; it's part of their daily lives. Just looking at how information gets distributed for news and notices, we know people are online - a lot - and have very strong opinions about what they like and want. Usually, the devil is in the details: a slow page load, a cumbersome way to do something they have done before elsewhere. Those are the things that will make them say, "This site sucks."
Then we come back to users knowing what they know. They don't develop products. Most likely, they won't give you an innovative idea or a new way to approach a process. This isn't meant to knock users - it's just not their job, and they don't think that way. To them, it's about comparisons: Site A does this well, Site B does this poorly, Site C is just cool. They don't examine what goes through their minds when they buy a shirt - they just go online and get the shirt, or go to the store and put down the credit card. If they have problems of any sort, they abandon ship and the business loses the sale. They don't examine the details of why there is a problem. There just is.
Sure, you could give users a survey to get their feedback on what could be interesting, but does that make a difference? On surveys, a person's commitment to giving an accurate answer depends on that person's investment in the survey. Most times, people give a quick answer to get the job done and collect their reward; sometimes, people answer thoughtfully to improve a business or hotel or destination they frequent (they are invested in seeing their input used). In testing, we need to get inside their heads and figure out how they are thinking and making decisions. As UX professionals (and we all know this), we need to figure out how users may approach an offline process online. Does a general usability test and review achieve this? Sometimes, it does. Often, we're back in the land of comparisons.
I guess if the goal of the test is to get that new insight, you need to present the users with the innovation you want to make - that big ah-ha! It's not up to them to give you that feedback. If anything, you should be able to get that out of the discussion. Questions and comments like these should be commonplace:
- Question: What are you thinking when you enter in your credit card number?
- Question: Should the site have this already because you logged in and saved this or used this 800 times?
- Comment: I want control of what I enter, so don't pre-populate it - and it's creepy for you to store that
- Comment: This is just a necessity - let's get it over with
So back to what's meaty. I guess meaty, to me, is getting inside users' heads and getting feedback on what they want to do and what's easy for them. If you want that big innovation, you have to put it on the table. It's up to the designers and product managers and developers to come up with it based on the feedback. The users will tell you how it compares to what they have used and how they process that flow of information. I guess meaty comes from how you listen to the users.