How do you best learn about animal behavior - move the animal to a lab, or go to its environment and observe it there?
I've been reading a lot about usability testing lately, and I started wondering: when we test applications with users, do the users need to be onsite, or could the test subjects be somewhere else, like their home or business environments? Based on my experiences with testing, I'm wondering if the model we currently use gets accurate results, or if we need to do something differently to really understand how users think.
I took a class a while back about design research. Design research, which is similar to usability research in many ways, has been moving toward in-person interviews with subjects in their native environments (home, work - anywhere the researchers want to learn more about the user/subject). So for a consumer product used at home, a researcher would come into your home, watch what you do, and come up with ideas for a product based on your behaviors. This helps researchers understand their audience far better than any interview over the phone or in a lab could - you get to see how people act directly in the environment where they will use the device or equipment.
I started wondering if it made sense to do this for usability testing. I have a few reasons for this:
- When we bring users into a lab, are we really getting the same response as when the same user is at home? Are we really seeing the true response to the user experience? For example, people have a tendency to multi-task, and in a lab you just don't see that.
- How honest are people really when they are being interviewed in person (even if they are told that their feedback won't offend anyone)?
- Words vs. actions - which means more in usability and user experience? I'd rather see the user's actions than hear them rave about an interface.
User in a lab vs home - which provides more accurate feedback?
If you are testing a consumer product, you may want to test it while the person is on his home computer. You would want to see if the person does indeed multi-task while using your application, or if the person gets distracted and stops using it for any length of time. What if the user gets interrupted by his kids frequently? It would be good to know that, so you can build in longer session times. What if the user makes dinner while computing? That would encourage extended session times as well, or encourage the development of a recipe program. What about the living room? The bedroom? The bathroom? (OK, maybe I don't care to know the user uses his computer in the bathroom.) This is how we have movies, music, and books on the iPad and other devices for entertainment.
The same is true for business users of applications - what if the business user gets interrupted frequently? What if he already has three other systems to use for a similar purpose? You may get this information from a lab session or a phone call with questions, but the best way to learn it is through direct observation of the person's behavior in their own work environment.
If a user comes into a lab with the two-way mirror or recording devices, will you get the same response as if you saw how the user works in his natural environment? From experience, I have picked up nuances at someone's workplace that I never would have seen or experienced in a lab. For example, watching a user switch application screens very quickly on a 15-inch monitor allowed me and a colleague to recommend to management that data entry clerks get a larger screen, one that could display up to three application screens at once. In another case, I heard a lot about how travel agents had a difficult time logging into a travel application. Sure, it sounded bad, but I didn't feel their pain until I sat next to one of them and almost threw the computer out the window myself out of frustration at not being able to log in.
The value of seeing users in their native environments is immeasurable. I see it as a requirement for accurate user testing and research. Otherwise, the test is too clinical and doesn't really give accurate readings, except to show how someone uses an application when he is in a room, alone, with no other applications on the screen.
Honesty of test subjects
I think this is always tricky. You never know if someone is telling you the truth about their experiences during testing. You can only get a good idea based on the user's actions and what you observe from the system. In general, if people's actions and words don't agree, watch their actions. Actions definitely speak louder than words.
Sometimes users will exaggerate or over-emphasize an issue because they are looking for something to say. They believe you are there to get feedback, and they feel they need to give you something for the money they are getting. Sometimes users don't know what to say - they get how the application works, find what they need, and are not sure what else you are looking to learn from them. Sometimes the subjects seem keenly aware that they are subjects, like mice or rats or monkeys. I've been behind the wall, or acting as a facilitator, and heard from subjects: "Is this what you are looking for?" "Am I being helpful?" "Did you hear that back there?"
With comments like that, are you getting accurate feedback? Or are you learning about a user in the same way a researcher learns about a rat - getting clinical details that help with the research, but under controlled conditions that reveal only the basic items being studied and nothing about the environment?
Words and Actions
As I wrote earlier, you can always trust actions over words. So can you really trust what someone says about an application if, at the end of the day, he doesn't use the tool anyway? I've witnessed users not reading instructions, skipping through an application, and then at the end saying, "You really need to add instructions" - as if the user would have read them anyway. I've seen users comment on how they would buy a product from the company I was testing for, and then bend their answers to support that - but their actions showed how well (or not) they actually understood the application.
This is where leveraging Web stats and other analytics tools helps researchers understand what users actually do rather than what they SAY they do. It also helps with observation and with understanding why users may drop off at a key point. With these hypotheses, you can then make observations that could be tested and researched in the user's main environment.
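As a rough illustration of the drop-off idea, here is a minimal sketch of a funnel analysis over a hypothetical event log (the user IDs, step names, and data are all invented for the example - real data would come from your analytics tool or server logs):

```python
# Hypothetical (user_id, step_reached) session events for a signup funnel.
events = [
    ("u1", "landing"), ("u1", "signup_form"), ("u1", "confirm"),
    ("u2", "landing"), ("u2", "signup_form"),
    ("u3", "landing"),
    ("u4", "landing"), ("u4", "signup_form"),
]

# The ordered steps of the funnel we care about.
funnel = ["landing", "signup_form", "confirm"]

# Distinct users who reached each step.
reached = {step: {u for u, s in events if s == step} for step in funnel}

# Report drop-off between each pair of consecutive steps.
for prev, step in zip(funnel, funnel[1:]):
    dropped = len(reached[prev]) - len(reached[step])
    rate = dropped / len(reached[prev])
    print(f"{prev} -> {step}: {dropped} of {len(reached[prev])} "
          f"dropped ({rate:.0%})")
```

A spike in drop-off at one step (here, between the signup form and confirmation) is exactly the kind of observation you would then take into the field, watching real users at that step in their own environment to find out why they leave.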
These factors are why I find great success with remote testing in the user's environment. If you have a camera, you can see the user's facial expressions, see the user's response to the test questions, observe what they click on screen, review the recording, get an idea of how users multi-task, and see what is going on in their home - you learn a lot more about users in a way that is not direct and in their faces. Hopefully, the user will forget that you are there observing them. I think moving to a model like this - more remote observation in the user's environment - will give better results for better product development and innovation.
So I guess this means I would prefer watching gorillas in the jungle to better understand what they do and how they think. I'd rather see how they play and make observations than construct a test that may steer the results.