When product teams think anecdotes are research
I'm a bit shocked to learn that I have over 40 11" x 11" x 17" boxes' worth of books in the house, but here we are. My arms are very tired and this week's deadline is looming.
Ever since I learned to pay more attention to UX discussions on the internet, I've seen, as you'd expect, plenty of ridiculous hot takes out there. But this particular one that Pavel is rightfully dunking on brings back a lot of very relevant memories for me. So, story time!

I've been working on Software-as-a-Service products since 2009 and cloud-based tech offerings since 2018. Over the years, I've worked on a bunch of products that follow this pattern:
- A PM proposes a new feature, one that tons of customers request and that is seen as obvious and fundamental. "I want to back up my data" is a very typical example, as is "I want to automate this action".
- The PM, after talking to a bunch of people and asking them things like whether they want the feature, whether they'd pay for it, and how much, makes a strong pitch and everyone gets on board.
- I'm pulled in as the data person and we work out a success metrics plan so that when the feature launches we can see what sort of impact it has.
- During the metrics planning I usually ask what they expect success to look like. The PM often confidently says they expect >90% adoption. So many people say backups are best practice, that automation is required for scaling up, and so on, that it seems blindingly obvious everyone will use it.
- The feature launches. Metrics say 10% of all users adopt the feature.
- Oh, [some excuses here]. It'll surely pick up in a bit. Let's wait for more data.
- Things do not pick up.
- Cue a few weeks of me proving that our metrics telemetry is not broken.
- Things still do not really pick up much; adoption just hangs in the 10-20% range. Some product changes and experiments are proposed and run to try to boost the numbers. They help to varying degrees.
- PM stops trying to hype it up, moves on to work on another feature.
I've seen this exact story happen multiple times in my career, on completely different products, completely different teams, and often in completely different industries and companies. And yes, I'm a little bit bitter about having to spend quite a bit of time doing add-on analysis for an increasingly panicked PM trying to pull the tiniest sliver of success from the gaping jaws of defeat.
Data-determined decision-making
The pattern of behavior we see here is the PM essentially letting a single data point, "everyone I ever talk to asks for this exact same feature," make their decisions for them, without ever asking in what contexts people will or will not actually adopt the feature.
As an example, while working on various backup/disaster recovery products, I once saw a backup/DR feature get single-digit percentage adoption and the team couldn't figure out why since everyone kept asking for it. We finally resorted to having qualitative UX researchers go and interview a bunch of potential customers.
After talking to a number of customers, the picture became clearer. Customers saw the new feature arrive and were interested in it. The only way the product could be backed up was to effectively create a separate copy of the data in another region using replication features, and users more or less understood that. Setting up this replication had been anticipated as a major friction point, so the process had already been simplified and refined down to a minimal number of clicks and menus, and no user really had a problem with it.
Thus far, it sounded like we had done everything correctly. We had a feature that everyone was constantly requesting. Customers were aware the feature had launched and could easily find it. They were even interested in using it, and we had made the adoption process as simple and easy as we could. So why, according to the data, was practically no one using it? Backing up your data is rule number one in data protection.
The answer was simple once you sat down and thought about it. The only way to make a backup of that particular product was to make a hot replicated copy of your existing data. We had effectively built a very elaborate "double your spend" button, and our users very clearly understood the implications of pressing it. Considering some customers might be spending thousands, or potentially even hundreds of thousands, on their data, very few of these people could authorize doubling that spend overnight.
But if the problem were as simple as "wait for the finance department to eventually authorize the purchase order," then adoption would slowly approach the initial PM's goal of >90%, right? Well, that obviously didn't happen, even in the very long run. While people are more than happy to say they'd use a data protection feature because it's best practice, when confronted with the actual bill for that feature they get much more realistic. Couldn't they just back up a tiny critical subset and let the rest be at risk? Did they even need backups, since up until now things had been largely fine without them?
And so, adoption never hit the hyper-optimistic >90% from the original proposal. While on the surface everyone was clamoring for a feature to protect their data, they were really just adding one more common item to the list of things they use to mitigate their data loss risk. Most importantly, prior to the release of the backup replication feature, these same users had already come up with other ways to mitigate that same risk. This is true even if their mitigation plan is to simply accept the risk for cost reasons. Unless the new feature could provide value at a cost better than a customer's existing plans, there wasn't going to be much adoption.
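To make that cost/benefit logic concrete, here's a hypothetical back-of-envelope calculation a customer might implicitly run. Every number below is a made-up assumption for illustration, not a figure from any real product:

```python
# Hypothetical back-of-envelope math a customer might implicitly do.
# Every number is an illustrative assumption, not real data.

current_monthly_spend = 50_000                # what they already pay to run the product
backup_monthly_cost = current_monthly_spend   # a hot replica roughly doubles the bill
annual_backup_cost = backup_monthly_cost * 12

# Rough expected annual loss without the feature: probability of a
# serious data-loss event times the estimated damage if it happens.
p_data_loss_per_year = 0.02
estimated_damage = 2_000_000
expected_annual_loss = p_data_loss_per_year * estimated_damage

print(f"Backup feature costs ~${annual_backup_cost:,} per year")
print(f"Expected annual loss it prevents: ~${expected_annual_loss:,.0f}")
# $600,000 vs $40,000: under these assumptions the "obviously necessary"
# feature loses badly on pure expected value, so adoption stalls.
```

Under those (entirely invented) assumptions, the rational move is exactly what customers did: nod along that backups are best practice, then decline to double their bill.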
The only way to avoid having such over-inflated expectations would be to have a very deep understanding of what users are actually trying to accomplish, and how they're currently accomplishing it. This is the kind of knowledge that deep user research attempts to get to. The problem is that talking to a bunch of customers over drinks at an industry conference and asking very leading questions doesn't equal actual user research.
Where data work fits into this
If you're a data person like me, the chances are pretty good that you feel a certain amount of anxiety at the mere thought of having to interview customers to understand them. That's what qualitative researchers are for, right? But putting aside who does the customer-interviewing part of the research, there are still many things we data folk can do to steer teams away from such a giant pitfall.
First, we might have access to historical data from similar scenarios that we can use to remind people to temper their expectations. Maybe you can find users who are already engaging in a behavior related to what will be built, and show that it's a relatively uncommon occurrence. Nowadays, if I'm working on a product feature that looks like it's headed towards a low adoption rate, I'm pretty vocal about warning teams up front to expect low numbers. To the extent that teams are willing to listen, I can steer them into better understanding their customers and figuring out whether the feature they're planning to build is actually something users will adopt.
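As a sketch of what that baseline check might look like, here's a minimal pandas example. The events file, the column names, and the "manual_export" event are all hypothetical stand-ins, not any particular product's telemetry:

```python
import pandas as pd

# Hypothetical event log; file name and columns are illustrative assumptions.
events = pd.read_csv("events.csv")  # columns: user_id, event_name

total_users = events["user_id"].nunique()

# Users already doing a related behavior (e.g., manual exports as a
# stand-in for "cares enough about backups to act on it today").
proxy_users = events.loc[
    events["event_name"] == "manual_export", "user_id"
].nunique()

baseline_rate = proxy_users / total_users
print(f"{baseline_rate:.1%} of users already show the related behavior")
# If this comes out around 8%, a >90% adoption forecast for the new
# feature needs a very good story about what is going to change.
```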
Then, when the time comes for me to help a team set up their success metrics, I always push to make sure we have telemetry "high up in the funnel". That is to say, I make sure we have stats on how many people are exposed to the feature and likely know it exists, and what fraction actually show some interest in it. I know that I'm going to have to prove that lots of users simply aren't showing interest in the new feature, so I might as well make sure I can show that up front.
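In practice, that funnel instrumentation can be as simple as counting distinct users at each stage. Here's a minimal sketch along those lines; the stage event names are hypothetical:

```python
import pandas as pd

# Same hypothetical event log as above.
events = pd.read_csv("events.csv")  # columns: user_id, event_name

# Hypothetical funnel stages, ordered from exposure down to adoption.
funnel = [
    "feature_page_viewed",  # exposed: likely knows the feature exists
    "setup_flow_started",   # interested: clicked into the setup flow
    "backup_enabled",       # adopted: actually turned it on
]

users_at_stage = {
    stage: set(events.loc[events["event_name"] == stage, "user_id"])
    for stage in funnel
}

# Intersect with earlier stages so the drop-off reads monotonically.
reached = None
for stage in funnel:
    stage_users = users_at_stage[stage]
    reached = stage_users if reached is None else reached & stage_users
    print(f"{stage}: {len(reached):,} users")
# If exposure is high but interest is already tiny, no amount of
# onboarding polish downstream is going to rescue the adoption number.
```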
Finally, when teams are in the flailing-around-in-denial phase of the launch, what we can do as the data people helping them make sense of their metrics is help them face the reality that people aren't going to adopt the thing en masse. There's usually not going to be some little tweak or flow redesign that makes a feature explode in popularity.
Yes, a lot of this work doesn't land in our quantitative wheelhouse, but we're still central to a bunch of the moving pieces. Plus, it's in our best interest to spare our teams the whole cycle of disappointment and stress involved in doing such launches.
Standing offer: If you created something and would like me to review or share it w/ the data community — just email me by replying to the newsletter emails.
Guest posts: If you’re interested in writing a data-related post, whether to show off work, share an experience, or get help coming up with a topic, please contact me. You don’t need any special credentials or credibility to do so.
"Data People Writing Stuff" webring: Welcomes anyone with a personal site/blog/newsletter/book/etc that is relevant to the data community.
About this newsletter
I’m Randy Au, Quantitative UX researcher, former data analyst, and general-purpose data and tech nerd. Counting Stuff is a weekly newsletter about the less-than-sexy aspects of data science, UX research and tech. With some excursions into other fun topics.
All photos/drawings used are taken/created by Randy unless otherwise credited.
- randyau.com — Curated archive of evergreen posts. Under re-construction thanks to *waves at everything
Supporting the newsletter
All Tuesday posts to Counting Stuff are always free. The newsletter is self-hosted, and support from subscribers is what makes everything possible. If you love the content, consider any of the following ways to support the newsletter:
- Consider a paid subscription – the self-hosted server/email infra is 100% funded via subscriptions
- Send a one-time tip (feel free to change the amount)
- Share posts you like with other people!
- Join the Approaching Significance Discord — where data folk hang out and can talk a bit about data, and a bit about everything else. Randy moderates the discord. We keep a chill vibe.
- Get merch! If shirts and stickers are more your style — there’s a survivorship bias shirt!