"Let's look at the data."
I wholeheartedly believe that you need to review data before making business decisions - in fact, I prefer to make decisions based on 80% data, 20% intuition. I know I'm stating the obvious, but sometimes people don't really look at the data before they make a decision. I know - that sounds weird, doesn't it? It's just as weird as people who look at TOO MUCH data before making a decision. Yes - that's right - too much data.
Looking at too many data sources can honestly cloud decision making. Analysis paralysis. We have all experienced this: study after study to prove what we intuitively know is right and that the trends we see are real trends. Almost like commissioning research to prove that the sky is blue. Unnecessary and redundant, and this type of practice prevents people from getting things done - or working - which is what we get paid to do.
So how much data is enough to make a decision? This is a question I've seen at every organization. The balance comes from treating product development as a conversation with two sides: the company creating the product on one side, and feedback from the industry and customers (the data) on the other.
I believe in learning by doing. Doing, or working, is the only way to make progress, and it's better to do and fail than not do at all. Data, or feedback, should help drive doing. Feedback should keep the conversation with the product going and be just enough to feed the work.
Too much feedback from a product is like learning about Christianity by sitting in church listening to a preacher rather than being out in the world, being generous to one's fellow man and just living. Listening all the time is certainly not very fulfilling (perhaps why church only lasts a few hours one day per week), nor is it a way to learn anything. You only learn from experience. This is also why children have homework - it's nice that a child understands how addition works, but that doesn't mean the child knows how to add.
Feedback includes the audience's response to a product as well as knowing what your competitors are doing. Once you make something, you need to understand how people use it, what they do with it, and why they like it (or don't).
Types of feedback:
- Web stats/metrics
- A/B Testing (almost like asking a direct question to get quick, direct feedback)
- Talking to customers
- Competitive analysis (helps identify trends)
Understanding what your customers like is just as important as understanding which features non-customers use at other sites. If you pay attention to the feedback, it really is like having a dialog with your users and prospects. They respond when you make changes - they buy more or less, from you or from someone else.
Web Stats/Metrics
This is probably the easiest type of user feedback a product manager or user experience professional can access to keep a product conversation going. It's readily available and can be reviewed at any time to help keep the work moving and respond to users. The most meaningful metrics:
- Heat maps - see what users click on
- Pathing reports - how users navigate the site
- Dropoff reports - where do users lose interest and simply leave
- Numbers OUTSIDE of Web metrics - purchase rates, call rates, sales rates - basically, any action the site drives that gets users to interact with the company directly
You get a pretty good idea of what users do from metrics alone, and you can quickly identify what you need to do to keep that conversation going. There are never enough Web metrics.
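If you want to roll your own quick look before reaching for an analytics package, a drop-off report is easy to sketch. Here's a minimal example in Python - the funnel steps and the event-log format are made up for illustration, not taken from any particular tool:

```python
from collections import Counter

# Hypothetical funnel: the ordered steps we expect users to move through.
FUNNEL = ["home", "product", "cart", "checkout", "confirmation"]

# Hypothetical page-view log: (user_id, page) events in time order.
events = [
    ("u1", "home"), ("u1", "product"), ("u1", "cart"),
    ("u2", "home"), ("u2", "product"),
    ("u3", "home"), ("u3", "product"), ("u3", "cart"),
    ("u3", "checkout"), ("u3", "confirmation"),
]

def dropoff_report(events, funnel):
    """Count how many users reached each funnel step, then report the
    share lost between consecutive steps."""
    furthest = {}  # user_id -> furthest funnel index reached
    for user, page in events:
        if page in funnel:
            idx = funnel.index(page)
            furthest[user] = max(furthest.get(user, -1), idx)
    reached = Counter()
    for max_idx in furthest.values():
        for step in range(max_idx + 1):
            reached[funnel[step]] += 1
    for prev, curr in zip(funnel, funnel[1:]):
        if reached[prev]:
            lost = 1 - reached[curr] / reached[prev]
            print(f"{prev} -> {curr}: kept {reached[curr]}/{reached[prev]} "
                  f"({lost:.0%} drop off)")

dropoff_report(events, FUNNEL)
# With this sample data, cart -> checkout shows the biggest leak:
# half of the users who carted never checked out.
```

The point isn't the code - it's that a question like "where do users lose interest?" can be answered from data you already have, which keeps the conversation moving.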
A/B Testing
If you can't talk to customers directly and you have a specific question about which approach works better - run an A/B test.
What makes a great A/B test? I know there are a number of articles out there that address this, and I may write a separate entry on it, but to sum it up, the key areas to test are:
- Language - new positioning/messaging, calls to action, how to phrase things
- Page layout - how does the presentation of the product make a difference in an interaction
- Colors and page elements - buttons, text color and fonts, headline colors, content colors
- Images - which images and angles resonate with users
You can never have enough good A/B tests (well, you can if they aren't effectively planned out).
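For what it's worth, reading the result is the part people most often get wrong. Here's a minimal sketch of a standard two-proportion z-test for a two-variant test - the visitor and conversion counts are made up, and the 95% confidence threshold is a common convention, not a rule:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Made-up numbers: 4,000 visitors per variant.
p_a, p_b, z, p = two_proportion_z(conv_a=180, n_a=4000, conv_b=228, n_b=4000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
if p < 0.05:
    print("Significant at the 95% level - act on it.")
else:
    print("Not significant - keep the test running or call it a wash.")
```

A test that is "effectively planned out" decides the sample size and the success metric before it starts, so you're not tempted to stop the moment the numbers look good.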
Talking to Customers
This is the best way to keep conversations going with users. You can learn about issues from metrics and surveys, but it is only from talking directly to customers that you learn the "why" of what they do. This is key to keep the conversation meaningful and get customers and prospects to use your product more.
How much data is enough here?
- Regular usability tests with customers (informal studies every other week; at minimum, twice per quarter)
- Focus groups/innovation games at least once per quarter
- Talk to salespeople regularly to get insights about customers
Salespeople are a great resource for B2B marketers - and they just aren't leveraged enough. Even people who work in stores could provide great feedback about the types of people who come in. People who work directly with customers know what will get a customer to buy, how to approach a customer, what resonates with them, and which messages work. They understand customer motivations and know what will and won't work.
You can never talk to salespeople or customers enough. If you can get weekly - or even just quarterly - feedback from sales and customers, you are doing great!
Surveys
Surveys provide helpful feedback because you can ask users direct questions, but they come with some risk. If you don't ask the right questions, you won't get useful answers to guide your new projects. Again, feedback is about maintaining a conversation, and surveys give limited feedback to keep one going. If you get a trend of confusing responses, you can't directly ask participants why they chose as they did - and that why question always gives the best insight. Without the why, you can only speculate about why users gave unexpected answers - there is no hard proof. If anything, you need a second study to confirm your speculations (this isn't like confirming that the sky is blue - this is confirming which insight or musing about the participants is accurate; with no data, the ideas of marketers are literally musings). To put it simply, the challenge of surveys is that surveys beget surveys.
Surveys also support an older view of users and customers - demographics. I have found that using demographic data to create a picture of users typically confirms stereotypes and creates new ones. It places a label on people that has no real correlation to anything.
Here's an example - I read an article that outlines demographics about health issues across the nation. It has an interesting graph and shows some trends. What is disturbing about the analysis are the generalizations made about people who suffer from specific illnesses:
For instance, residents of counties such as East Baton Rouge, La., in the "Minority Central" segment are more likely than average to suffer from Irritable Bowel Syndrome (IBS). While this disease tends to afflict white Americans more than black Americans, a study funded by GlaxoSmithKline and RTI Health Solutions found a correlation between socioeconomic status and the disease. Those with lower education levels and income were more likely to be afflicted. While consumers in these areas tend to have higher-than-average concentrations of blacks, and lower-than-average groupings of Hispanics, overall they fall into a lower socioeconomic status than our other segments, possibly putting them at higher risk for IBS.
Now, I'm not just disturbed by the corporate funding of the survey (that's a whole different story) - it's the correlation that was made between socioeconomic status and disease. Sure, that's a great jumping-off point, but what does it tell me about the problem? Nothing! It gives me no insight except that maybe diet or environment contributes to this. Then again, the issue could be genetics. Saying poor people suffer from IBS is creating a stereotype at some level. To me, data like this - unless it is used to drive further research (and that isn't clear from this article) - doesn't create a picture beyond creating and confirming stereotypes. And this is feedback, but again, it's not working towards a conversation. It positions the participants as lab rats being observed rather than as people who participate in the world and its different organizations and systems.
Ethnographic data is a far better way to position users. It gets to the motivation of the individual - the why. Why do poorer people suffer from IBS? Why do people do what they do? What is the motivation? Again, if you don't have the why and understand people's motivations, you aren't making conversation. Users don't make decisions based only on their economic background, gender, or race - these may be factors, but often they don't make a bit of difference - people make decisions based on their value systems. Understanding people's value systems (their motivations) is what's missing, and it's exactly what a successful dialog and useful product feedback need.
Use surveys sparingly - demographic data doesn't really give you why answers; surveys give answers to direct questions. Only use them when you have specific questions that don't require a why answer.
Competitors and Trends
Trends are key to understanding what's going on with competitors. A trend is defined as "the general direction in which something tends to move." General direction - that's key here. It boils down to: what is your competition doing? What's the new level of parity? To get an idea of the new parity, you don't need to examine 20 sites - 5-6 will do. You are looking for trends, and if 3-4 of your 6 competitors are doing the same thing, it usually means one of the following:
- They tested an approach and it works
- There is a technical reason they are doing what they are doing
- Their users asked for the feature
- They are just copying each other because it is easier
You can't assume that what you see in a trend is truly based on what users like - even if you see that trend across multiple sites. Users use things based on their experience and what they are used to - almost like creatures of habit. New ideas take a while to get used to. Further, most users don't know what to ask for - they don't do this type of work for a living. They depend on us in product to create something for them and solve their problems - they only know what they don't like (and where they spend their money proves that). Sites will often copy each other on the erroneous assumption that a feature or approach has been usability tested, when it may have just been the easiest way to do something quickly.
Getting feedback from competitors keeps the product conversation going.
Know 5-6 competitor sites deeply - that's more than enough to get information about a trend and what others are doing. More than that, and you are just seeing more of the same. It's always good to look at competitors and use their products, but spreading your attention thinner won't get you new information unless there is a market disruptor doing something in a brand new way.
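To make the tallying concrete, here's a trivial sketch - the competitor names and feature checklist are entirely made up, and in practice the checklist would come from your own competitive reviews:

```python
from collections import Counter

# Hypothetical checklist: which features each of 6 competitor sites offers.
competitors = {
    "site_a": {"live_chat", "guest_checkout", "reviews"},
    "site_b": {"live_chat", "reviews", "wishlists"},
    "site_c": {"guest_checkout", "reviews"},
    "site_d": {"live_chat", "guest_checkout"},
    "site_e": {"wishlists", "reviews"},
    "site_f": {"live_chat"},
}

# Count how many sites offer each feature.
counts = Counter(f for features in competitors.values() for f in features)

# 3-4 of 6 sites doing the same thing suggests a trend (the new parity).
threshold = 3
trends = sorted(f for f, n in counts.items() if n >= threshold)
print("Possible trends (new parity):", trends)
# -> ['guest_checkout', 'live_chat', 'reviews']
```

The output is only a starting point - as the list above says, a shared feature could be tested, technical, requested, or simply copied, and the tally can't tell you which.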
Numbers may be numbers that you can't really argue with, but depending on your interpretation, they can tell very different stories. You can look at the number 5 as halfway to 10 or as half short of 10; it's the same number, but each interpretation carries a slightly different meaning. The interpretation of a number is always up for debate and can only be clarified within context. And this gets back to looking at too much data and what that means.
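A quick made-up example of the same number telling two stories:

```python
# Made-up numbers: a conversion rate that moves from 4.0% to 5.0%.
before, after = 0.040, 0.050

absolute = (after - before) * 100      # change in percentage points
relative = (after - before) / before   # relative lift

print(f"Absolute change: {absolute:.1f} percentage points")  # 1.0
print(f"Relative change: {relative:.0%} lift")               # 25%
# Same data; "up 1 point" and "up 25%" read very differently.
```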
This is why there has to be some level of intuition in interpreting the data and keeping the product dialog alive. Even the military is encouraging intuitive thinking:
The U.S. military also pointed to studies suggesting a sixth sense can arise from "implicit learning" — absorbing information without being aware of the learning process — rather than building up expertise through years of practice.
And there is also:
Intuition is tactical – tactical meaning the reflexive and mostly quick reaction to a given event. Analytics is strategic – strategic meaning planned course of action. It’s easy to see which would save lives on live battles and accidents.
From these statements, it is easy to see that some level of intuition is key to responding to user feedback and requests quickly with product updates. If you get too analytical/strategic, you won't get anything done, and you'll stall the dialog/feedback loop between product and use. Or worse, you respond too late because you are stuck creating a plan.
Analyze the data enough to make a decision. And yes, your intuition at some level is enough to give you insights to make changes. Spend enough time to understand what users are "saying", and then react. If you are wrong, you can always make changes again - at least you tried. Creating the "perfect plan" for users won't get you anywhere.
There is a delicate balance of intuition and analytics/strategy when keeping up a conversation. In conversation, sometimes you need to respond with your gut and intuition; sometimes you need to think about what to say in response. In all cases, you need to be informed about a subject to have a meaningful conversation and respond intelligently (not fantastically). And in all cases, your response needs to be timely to keep the conversation going. Taking a step back to look at the data and analyze the facts may be a good strategy when a conversation turns disastrous, but generally you want to respond promptly - or else the conversation won't continue.
80% data and 20% intuition keeps a solid conversation going between a product and the market (competitors and users). It lets you gather data and then work to respond to what the data is telling you, keeping the product evolving. Like a great conversation, each side needs to be informed (the data) and to contribute (work and interaction). In today's competitive world, there is no room for analysis paralysis or carefully made plans that miss the opportunity. You really need to trust the intuition that successful conversations require.