With human beings so susceptible to various cognitive biases, it’s useful to proactively consider some common biases that could impact product development. I’ve put together a brief, non-exhaustive list with quick explanations of each bias and why it matters. Hopefully, by keeping these biases top of mind, product builders can avoid falling prey to them.
Selection Bias – A sub-group is drawn from a larger group, but the sub-group ends up unrepresentative of the larger group because certain types of people were more likely to be included in it.
Selection bias can easily creep into user feedback settings. Any feedback mechanism that is opt-in by default should be considered high risk for selection bias, as some user profiles may be more willing to fill out a survey or join a live call. For example, in a broad survey to a product’s current user base asking them to rate their experience, users with negative feedback may comprise the majority of survey respondents but a small minority of the overall user base.
This bias can be mitigated by using additional known information about users, where available, to help ensure the sample is representative of the larger group. This may not always be possible, as it is best practice to limit collection of unnecessary user data. Curating a smaller group that you know represents a reasonably diverse set of user profiles is another, often more practical, option.
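As a rough illustration of that first mitigation, the sketch below compares the make-up of survey respondents against the full user base on an attribute you already store. The plan tiers, counts, and the 1.5x over/under-representation threshold are all hypothetical, not a prescription:

```python
from collections import Counter

# Hypothetical data: plan tier is something already known for every user,
# so it can be used to check whether survey respondents resemble the base.
all_users = ["free"] * 700 + ["pro"] * 250 + ["enterprise"] * 50
respondents = ["free"] * 30 + ["pro"] * 55 + ["enterprise"] * 15

def share_by_tier(users):
    """Return each tier's share of the given user list."""
    counts = Counter(users)
    total = len(users)
    return {tier: count / total for tier, count in counts.items()}

base_shares = share_by_tier(all_users)
sample_shares = share_by_tier(respondents)

# Flag tiers that are heavily over- or under-represented among respondents.
for tier, base_share in base_shares.items():
    sample_share = sample_shares.get(tier, 0.0)
    ratio = sample_share / base_share
    if ratio > 1.5 or ratio < 0.67:
        print(f"{tier}: {sample_share:.0%} of respondents vs "
              f"{base_share:.0%} of all users (ratio {ratio:.1f})")
```

The same per-tier ratios could also be used as weights when aggregating survey scores, so that over-represented groups don’t dominate the result.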
Attitude-behavior Inconsistency – People often act differently from, or in conflict with, their stated attitudes. They say they will act one way when presented with a hypothetical situation but then do something else when they face that scenario in real life.
This phenomenon has been demonstrated across numerous scientific studies; a recent paper citing some of the historical literature is linked here for those interested. This bias suggests that relying solely on qualitative user feedback describing what users would do in various scenarios is risky. Features could be built based on this type of genuine user feedback and then never actually adopted, because users don’t behave consistently with what they said.
This bias can be mitigated by leveraging actual usage data and metrics to understand user behavior. When meeting with users or prospects in person for feedback, consider letting them navigate some form of prototype while you observe their behavior and reactions, rather than chatting through more abstract hypotheticals.
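As a toy sketch of putting stated intent next to observed behavior, the snippet below compares users who said they would use a feature with those who actually did after it shipped. The user IDs, event name, and interview data are entirely made up for illustration:

```python
# Hypothetical example: users who said "yes, I'd use a bulk-export feature"
# in interviews, alongside product events recorded after the feature shipped.
stated_yes = {"u1", "u2", "u3", "u4", "u5"}
export_events = [
    {"user": "u1", "event": "bulk_export"},
    {"user": "u1", "event": "bulk_export"},
    {"user": "u4", "event": "bulk_export"},
]

actually_used = {e["user"] for e in export_events if e["event"] == "bulk_export"}
follow_through = len(stated_yes & actually_used) / len(stated_yes)

print(f"Said they'd use it: {len(stated_yes)}")
print(f"Actually used it:   {len(stated_yes & actually_used)}")
print(f"Follow-through rate: {follow_through:.0%}")
```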
Hidden or Omitted Variable Biases – Technically two biases combined here, but both occur when a variable that is not accounted for in a model has a significant impact on the dependent variable.
These are particularly prevalent in statistical modeling / data science applications and are therefore relevant for anyone interested in using such methods. For example, a SaaS company may want to know which factor most influences user upgrades from Tier 1 to Tier 2 of its product. If an important driver is left out of the analysis, these biases can lead to false attribution, with the company targeting the wrong users or making product investments based on flawed logic.
A simple way to mitigate this bias is to sanity check the variables included in the statistical model. By zooming out and taking a big-picture perspective, it’s easier to spot where there could be an issue. If a model shows a variable having a sizable effect but there isn’t a logical mechanism for such a large effect, it’s worth examining that variable and asking whether a missing variable could account for the effect if it were included in the model.
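To make the omitted-variable problem concrete, here is a small synthetic sketch using the Tier 1 to Tier 2 example. All numbers and variable names are invented: company size drives both in-app invites and upgrades, so a model that omits company size wrongly attributes most of the effect to invites:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical data-generating process: company size drives both the number
# of in-app invites a user sends and their propensity to upgrade.
company_size = rng.exponential(scale=50, size=n)
invites = 0.2 * company_size + rng.normal(0, 2, size=n)
upgrade_score = 0.05 * company_size + 0.1 * invites + rng.normal(0, 1, size=n)

def ols_coefs(columns, y):
    """Ordinary least squares with an intercept; returns slope coefficients."""
    X = np.column_stack([np.ones(len(y)), *columns])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:]  # drop the intercept

# Model that omits company size: the invites coefficient absorbs its effect.
(biased_invites,) = ols_coefs([invites], upgrade_score)
# Model that includes company size: the invites coefficient is close to the
# true value of 0.1 used to generate the data.
invites_coef, size_coef = ols_coefs([invites, company_size], upgrade_score)

print(f"Invites effect without company size: {biased_invites:.2f}")
print(f"Invites effect with company size:    {invites_coef:.2f}")
print(f"Company size effect:                 {size_coef:.2f}")
```

The first model makes invites look several times more influential than they really are, which is exactly the kind of false attribution described above.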
Confirmation Bias – Justifying a pre-existing viewpoint using cherry-picked portions of the data and evidence, despite the mixed or even contradictory nature of the evidence in aggregate.
This is an easy one to fall into when building products because there is usually an initial idea in someone’s head of what should be built, whether that someone is the designer, product manager, founder, or another teammate. It can be very difficult to let that vision go even in the face of contradicting evidence.
This bias can be mitigated by setting success metrics up front. For example, before running an A/B test on the form of the call to action in a workflow, the team should define not only what they are measuring but also what threshold would be meaningful enough to justify a change. By defining the implications up front, it’s more difficult to massage contradicting evidence into a supportive narrative.
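One lightweight way to do this is to write the decision rule down as code before the test starts. The sketch below is only illustrative: the significance level, the 2-point minimum lift, and the conversion counts are assumptions, and it uses a simple one-sided two-proportion z-test as the statistical check:

```python
from math import erf, sqrt

# Decision rule agreed on BEFORE the test runs (values are illustrative).
ALPHA = 0.05      # significance level
MIN_LIFT = 0.02   # variant must beat control by >= 2 points to ship

def ship_decision(control_conv, control_n, variant_conv, variant_n):
    """Two-proportion z-test plus a pre-registered minimum-lift check."""
    p1, p2 = control_conv / control_n, variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    # One-sided p-value for "variant converts better than control".
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    significant = p_value < ALPHA
    big_enough = (p2 - p1) >= MIN_LIFT
    return {"lift": p2 - p1, "p_value": p_value, "ship": significant and big_enough}

print(ship_decision(control_conv=480, control_n=10_000,
                    variant_conv=560, variant_n=10_000))
```

In this made-up example the variant is statistically significant but falls short of the pre-agreed 2-point lift, so the rule says don’t ship, and there is no room to reframe the result as a win after the fact.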
Survivorship Bias – Over-indexing on the characteristics of known success cases while not accounting for the relevant failure cases.
This one is highly relevant for start-up and product strategy. Just because Airbnb and Uber skirted local regulations and became breakout successes doesn’t mean this is a viable strategy for the next marketplace looking to scale. Times change, and regulatory regimes adapt. Most failed products and companies fade away into the ether, so it can be difficult to isolate which elements played the largest roles in the success cases.
This bias can be mitigated by simply remembering that context matters. Learning from the experiences of other people, products, and companies is important, but be sure that the learning or strategy makes sense in your context rather than blindly applying it.
Succinct Takeaway
It’s of course impossible to account for and mitigate every type of bias. Taking a relatively small amount of time to acknowledge, think through, and periodically revisit these key biases, however, can help shield your product development efforts from substantial adverse effects.