A pigeon outside of Bryant Park in the winter, just being there

When orgs don't listen, but want "validation"

Dec 16, 2025

For the past couple of days, a Bluesky thread from Pavel has been living pretty rent-free in my head. The thread describes a pattern where leaders start leaning on UX research teams for "validation" projects. I've seen and experienced this in various ways over my ~7 years in a full-blown UX organization, but I hadn't observed it often enough, or from a high enough level, to recognize it as a pattern until now.

Since the observations are worth knowing, I'll repost the core text below for your convenience, with some light edits to account for quote-posts and formatting.

https://bsky.app/profile/spavel.bsky.social/post/3m7uwyvbqik2b

This overlaps with a challenging mode of decision-making in orgs: validation.

I've made a decision, and invested some effort into executing it. Now I need you to validate that decision.

If you find any problems we can still fix, we'll fix them. But if you find problems we can't fix — don't.

[Quote post of Rachel Coldicutt @rachelcoldicutt.bsky.social
Per my last re-post, twice this year I've been commissioned to write reviews of harms that can arise from the use of LLMs and then, as we surface the harms, told it's not appropriate to be political.
But LLMs are a deeply extractive technology, they are political by design.]

---
This is the real reason that tech orgs love "build to learn." The more time they spend on building, the more the entire project gets locked in to the course of action.

The range of "acceptable" issues that the validation can find rapidly diminishes, making the idea seem more and more perfect.

We have decided to use AI. Tell us about the issues. Oh, but don't tell us about the issues we can't fix, because that would make us look bad. We only want to know about the issues we can fix, so that we fix them, and can tell everyone we fixed them.

And of course, the roadmap for implementation is already packed (and any slack time will be eaten up by timelines slipping). So the issues validation finds better be very small indeed, otherwise it will be the researcher — and not the faulty product — that the org considers a problem.

Organizationally, it's a way to weaponize the sunk cost fallacy against research. The longer you wait before "validating" the safer your idea is from holes being poked in it by evidence.

Research stops being an input into decision-making and starts being an accountability sink.

[Quote post of Pavel @spavel.bsky.social
When we say "research shows that you shouldn't do it" these people revolt, because "do the thing" was their sole success criterion. They were doing it not to improve retention or increase productivity or any of those things, so alternative ideas that work won't be accepted. It HAS to be That Thing.]

Because if there's one thing mediocre leaders need above all else, it's validation. They know their ideas are bad. They're insecure.

So it's not enough that people merely do what the leader wants. People must agree with the idea and celebrate the idea-haver for having it.

This also means that when the idea fails, the leader shifts the blame onto everyone else. "We" all agreed that this was the right thing to do.

Never mind that the agreement was mandatory.

Now, going back to my experiences: validation work is something I never liked doing, and something we as a team would push back on. In many instances, the pattern of behavior is "we decided and built a thing, now go tell us that we did it right", which is extremely low-impact work for researchers. About the best outcome we could hope for was to begrudgingly do the work, raise all sorts of issues, and get ignored. As a research team, our goal was to get involved BEFORE the decision to build something was made, so that we could figure out what the actual thing to build should be. In our view, that's how good products get made, not through endless iteration upon a flawed base concept.

Now, sometimes we do work with a partner team that is actually willing to listen to the results of our validation research and make significant course corrections. I honestly have no problem doing those projects for those folks, because then we're in an actual feedback loop. Working with those people is usually very fun, rewarding, and fast-paced. But in my experience, the vast majority of team leads don't listen to the feedback and instead make countless requests of "could you look at things this other way?" in an attempt to scrape together some silver lining to report to their execs. This is why, as a rule, I instinctively become wary when someone asks us to validate that their product designs are working in the marketplace as they hoped, without ever having bothered to talk to us about defining success at the concept phase.

I've seen relatively mild examples resembling Pavel's account when it comes to features that some big whale customer demands. It's fine that we're building a bespoke feature that a client is going to spend $$$ millions on, but then why are we being asked to validate that the feature is any good? It's already been "validated" for purely monetary reasons, so we don't need a UX or analytics study confirming it's a good idea. Examples like these abound to varying degrees; I'm sure many readers can think of at least a couple of times this has happened in their own experience.

But the worst versions of this I've seen come when teams are desperate to "Ship Something!". Every current "We Build AI Thing!" team on the planet is essentially like this right now, because research is inherently slow and every stakeholder is out for blood, glory, and "first mover advantage!". Honestly, most of the time these teams don't even bother asking for validation studies because they'd rather use the headcount to launch more stuff – full-on spaghetti-at-the-wall development, because surely one of these attempts will become the next ChatGPT and bring glory and riches to whoever can take credit for it. Moreover, there's so much failed crap pushed out every week these days that the market quickly forgets those attempts.

When faced with that kind of incentive structure and organizational culture, we as UX researchers have very few tools to deal with it directly. This is about politics, money, organizational relationships, culture, and power. If development teams fall into the habit of spraying out features haphazardly without listening to feedback, and get rewarded for it, and if UX teams can't build relationships with decision makers fast enough to get ahead of the development curve and land critical input before decisions are made, then UX teams start looking like expensive burdens and blockers. Take this too far and... why would you even pay to have such teams around?

I've been in many organizations that say they respect and listen to UX when they build products, but you usually hear those claims when things are going well. It's when things become difficult or hyper-competitive that you find out which teams are willing to slow their work down in order to actually listen. Some do; many don't.

As UX practitioners, we all firmly believe that over the long run, identifying user needs and building products to address those needs is the path to success. Having UX around is a way for an org to find and address needs more consistently, without having to rely on some lucky visionary guessing correctly. The long run is a game of averages that outlives individuals, and good processes improve average outcomes by cutting out failures and identifying successes. No matter how much you crunch schedules or adopt "rapid research" cycles, research still means extra steps and extra cost. But when an org starts to balk at paying even that relatively small cost in order to "Ship Now!", I'd be wary of what comes next. The chickens may come home to roost at some point in the future, but you might not be in the picture by then.

Anyways, stay close to your stakeholders.


Standing offer: If you created something and would like me to review or share it w/ the data community — just email me by replying to the newsletter emails.

Guest posts: If you're interested in writing a data-related post – to show off work, share an experience, or even if you just want help coming up with a topic – please contact me. You don't need any special credentials or credibility to do so.

"Data People Writing Stuff" webring: Welcomes anyone with a personal site/blog/newsletter/book/etc that is relevant to the data community.


About this newsletter

I’m Randy Au, Quantitative UX researcher, former data analyst, and general-purpose data and tech nerd. Counting Stuff is a weekly newsletter about the less-than-sexy aspects of data science, UX research and tech. With some excursions into other fun topics.

All photos/drawings used are taken/created by Randy unless otherwise credited.

Supporting the newsletter

All Tuesday posts to Counting Stuff are always free. The newsletter is self-hosted, and support from subscribers is what makes everything possible. If you love the content, consider any of the following ways to support the newsletter:

  • Consider a paid subscription – the self-hosted server/email infra is 100% funded via subscriptions, and you get access to the subscriber's area in the top nav of the site too
  • Send a one time tip (feel free to change the amount)
  • Share posts you like with other people!
  • Join the Approaching Significance Discord – where data folk hang out and can talk a bit about data, and a bit about everything else. Randy moderates the discord. We keep a chill vibe.
  • Get merch! If shirts and stickers are more your style, there's a survivorship bias shirt!