Give Better Feedback
As a product person, I’ve been in countless situations where I was asked to do a product review, both professionally and personally. While I enjoy seeing other people’s work and appreciate the trust they place in me, I have to admit that doing product reviews is always challenging.
The difficulty lies in two things. First, people expect designers to give them best practices that always work. Second, people expect designers to know everything that's wrong with a product.
Myth #1: Best practices always work
There have been quite a few times when someone presented me with a design and said: “Can you take a look at this and let me know if it complies with best practices?” That is a flawed request, because best practices tend to change from product to product. Best practices exist as the result of rigorous A/B testing, made possible by funneling massive amounts of traffic to your product. An example of a best practice is “Keep your most important information above the fold because people don’t scroll.” That might be true for marketing landing pages. But on a mobile device, the first inclination for many of us when landing on a mobile site or app is to scroll. Another good example of a best practice is “Don’t use too much text because people don’t read on the web.” In the early days of Blue Bottle Coffee, the founding team tested two versions of an e-commerce landing page: one with very little text, and another with lots of text that described each blend of coffee in detail. The results shocked the Blue Bottle Coffee team. The second version, with much more text, converted significantly better than the first. It turned out that visitors to the e-commerce site thought the detailed descriptions conveyed a sense of quality and trust. Most people did take the time to read each description.
Myth #2: Designers know everything
Another reason I find product reviews difficult, especially with products I’m not familiar with, is the lack of context. Those who aren’t designers expect designers to point out everything that’s wrong with their product almost immediately. But simply by looking at a landing page, I have no idea whether there was an earlier iteration of the same screen. Did the primary CTA have a different label? Have you tested a different image than the current one? Have you tried splitting the signup process into multiple short steps instead of one long one?
Designing a better feedback system
Giving good feedback is important. I want to share a few tips for creating more efficient design reviews. These steps require conscious effort from the feedback giver.
1. Set the expectation
Although it’s usually the seeker who initiates a feedback session, the giver shouldn’t take that for granted and shower the seeker with feedback at will. At the beginning of each session, I try to remove the power structure from the relationship. I always start by saying: “I am not telling you what you should do; I am telling you what I would do if I were in your situation. The choice is yours. I’m here to help you fill in the gaps in case you missed anything.” Doing so shows them that I respect their ownership of the product. After all, it’s their product. It also creates an environment where I can deliver the most candid feedback without worrying about hurting their feelings.
2. Ask who the user is
It’s surprising how often we forget who we’re designing for, despite its importance. Asking who the user is helps frame the mindset of the giver. More importantly, it serves as a reminder to the seeker to re-focus on creating an experience for the end users.
3. Ask for the objective
I cannot remember how many times this has happened: I would start giving feedback, only to find out that it didn’t align with the goal of the product. Before I start, I always make sure I have a good grasp of the product’s objectives. Is it to increase the sign-up rate? Reduce shopping cart abandonment? Create a better onboarding experience? Knowing the objective allows the giver to stay laser-focused on giving feedback about the core objectives only. It also avoids wasting time on irrelevant feedback that might dilute the core message.
4. Ask about previous experiments
Asking this question encourages the seeker to share previous experiments that either worked or failed. There are two types of experiments: A/B tests and brand-new ideas. A/B testing is useful when you have enough traffic to split visitors between two versions of the site that differ in one element. It is effective at creating short-term, incremental improvements, but it is not good at uncovering deeper user truths. Over-optimizing with A/B testing will take you to a local maximum, where the return on future tests keeps diminishing. That leads to the second type of experiment: testing brand-new ideas. Have you tried a completely different landing page? How do you know that what you have built is what people want? I’ve found that asking those questions helps people see the problem they’ve been tackling from a new perspective. They are able to step back and see it from a higher level. Developing tunnel vision is not uncommon for people who have stared at the same thing for too long.
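For readers curious about the mechanics of that traffic split, here is a minimal sketch of how an A/B test commonly assigns visitors to variants. It is a generic illustration, not any particular team’s implementation: the function name, the experiment label, and the 50/50 split are all assumptions made for the example. Hashing a stable user ID keeps each visitor in the same variant on every visit.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into variant "A" or "B".

    Hashing the user ID together with the experiment name keeps
    assignments stable across visits and independent across experiments.
    (Illustrative sketch; names and split are assumptions.)
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1].
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split else "B"

# The same visitor always lands in the same variant of this experiment.
print(assign_variant("user-123", "landing-page-copy"))
```

Because the assignment is deterministic, you can measure each variant’s conversion rate over time without storing per-user assignment records, which is one reason hash-based bucketing is a common choice.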