

Killing that feature you knew would change the world
The process known as “market validation” is among the most challenging to execute – from a time-resources-versus-impact perspective, but also psychologically. While the time and resource costs are discussed at length, the psychological strain tends to be neglected.
For inexperienced product leads, failure is difficult to swallow, mainly because of the trust they put in their own judgement – and trusting yourself rather than the market is a well-known recipe for disaster.
The good, the bad and the ugly (feature)
Feature development reasoning varies, but usually falls into one of these groups:
- direct customer requests
- competitor advantages
- assumptions & innovation
- others
Direct customer requests look like the easiest items to add to the backlog and should, in theory, see fair usage; much the same goes for features your competitors already have – you shouldn’t hand them unnecessary advantages.
Let’s take a minute to remember Jobs’ famous “your customer doesn’t know what he wants” line. Saying you want a feature is different from actually using it – just think how many customers say they’d pay for a newly built app and how many actually reach for their wallets.

This metric has “NO” written all over it. Sorry.
To make matters worse, the more features you add to your product, the more bloated its UVP (unique value proposition) becomes. So how do you decide what stays?
Analytics to the rescue!
Passion vs. Reason
Product managers are very passionate about their products – often calling them their “babies” – and that’s a good thing. It brings engagement, fulfilment and joy to the workplace, and better results overall. You shouldn’t work on a project you aren’t passionate about.
However, you shouldn’t get too personal with your work either. Otherwise, you start valuing your own opinion above other people’s – and miss key feedback. Sometimes your opinion differs from your users’:

Sorry, Mr. Blog, Sir, you are no longer needed.
Rejection comes in various shapes and forms; therefore, we suggest tracking multiple metrics instead of relying on what Alistair Croll and Benjamin Yoskovitz call the “One Metric That Matters” (OMTM). Here’s why.
Imagine the following scenario: you sell great data visualization software. Your users struggle at the beginning, but once they get the hang of it, they stay with the product for 18 months on average. To address new-customer complaints, you add a feature that helps novice users but takes up a lot of screen real estate. As a result, new users get on board quickly – but the cluttered interface frustrates experienced users, average RPU (revenue per user) drops, and the churn rate goes through the roof.
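To make the trade-off concrete, here’s a minimal sketch in Python, with entirely made-up numbers for the imaginary data-viz product above, of how a single metric can hide what a small dashboard of several would reveal:

```python
# Hypothetical before/after snapshots -- none of these numbers come from a real
# product; they only illustrate why a single "One Metric that Matters" can
# paint a misleading picture.
before = {"median_onboarding_days": 14, "revenue_per_user": 42.0, "monthly_churn": 0.03}
after = {"median_onboarding_days": 4, "revenue_per_user": 35.0, "monthly_churn": 0.07}

def pct_change(old: float, new: float) -> float:
    """Relative change, as a percentage of the old value."""
    return (new - old) / old * 100

for metric in before:
    delta = pct_change(before[metric], after[metric])
    print(f"{metric:>24}: {before[metric]:>6} -> {after[metric]:>6} ({delta:+.0f}%)")

# If onboarding time were your OMTM, the release looks like a clear win (-71%).
# Read alongside RPU (-17%) and churn (+133%), the same release looks very
# different -- which is exactly the argument for tracking several metrics.
```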
To address cases like this one, we devised a priority list of metrics to track:
- Feature usage (total & percentage of all users)
- Feature usage over time & cohort split
- Impact of major features on conversion & usage
- UI/UX-related metrics
This hierarchy is worth following in order, so you limit the damage as soon as you identify it – and overall usage tells you far more than, say, a UX detail.
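As a rough illustration of the top two items on that list, here’s a sketch (again Python, with a made-up event log – the field names and figures are assumptions, not a real analytics schema) of how you might compute feature usage and a simple signup-month cohort split:

```python
from collections import defaultdict

# Made-up event log: (user_id, signup_month, used_feature).
users = [
    ("u1", "2016-01", True),
    ("u2", "2016-01", False),
    ("u3", "2016-02", True),
    ("u4", "2016-02", True),
    ("u5", "2016-03", False),
]

# 1. Feature usage: total users and percentage of all users who touched it.
total_users = len(users)
feature_users = sum(1 for _, _, used in users if used)
print(f"Feature usage: {feature_users}/{total_users} "
      f"({feature_users / total_users:.0%} of all users)")

# 2. Cohort split: the same ratio broken down by signup month, so you can see
#    whether newer cohorts adopt the feature more (or less) than older ones.
cohorts = defaultdict(lambda: [0, 0])          # month -> [used, total]
for _, month, used in users:
    cohorts[month][1] += 1
    if used:
        cohorts[month][0] += 1

for month in sorted(cohorts):
    used, total = cohorts[month]
    print(f"{month}: {used}/{total} ({used / total:.0%})")
```

In practice you’d pull these counts from your analytics tool rather than raw tuples, but the order of attention stays the same: usage first, then cohorts, then conversion impact, then UI/UX signals.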
Coping with rejection
Product owners need to be able to cope with rejection on the spot. Such is the lean way: build – measure – learn. We live and die by that code.
Make sure you’ve measured properly and extensively, but once you’ve learned, act. Remember – iterative improvement is what you should be after.
Even if it means killing that feature you thought would change the world.