Just a tree somewhere, doing tree things

Are quant and qual UXR melting into one thing?

Dec 9, 2025

One of the trends I've seen in the UX research space is the slow (though apparently accelerating?) blurring of the distinction between quantitative and qualitative researchers. Back when the quant UX thing started in the mid-2010s, the tools for data science were extremely primitive, and the story goes that it took a very rare breed of researcher who not only had the interest and skills to want to understand user behavior, but could also pair that with the computational and quantitative chops to pull information out of large datasets.

Over time, the subdiscipline of quantitative UX research came to encompass a bunch of methods that involved "big" datasets, computer systems, instrumentation at scale, and even large-scale survey-based methods. To put it bluntly, quants were more likely to be talking to (really, yelling at) computers to do their work, while quals were more likely talking to actual humans. This pattern developed over time and essentially grew in parallel to the field of data science, especially the product-focused parts of DS. Even now, I feel those two "roles" are essentially the same skillset.

But it's now 2025: ZIRP is a thing of the past, and the economy is weakening, bringing with it budget cuts, layoffs, and a very competitive job market. So now I'm seeing more and more places looking for "mixed methods researcher" folks. In the few conversations I've had with companies looking for such mixed methods folk, it's clear to me that in their heart of hearts they still want either a quantitative or a qualitative researcher, but with a "we also want some skills from the other bucket" condition tacked on. Maybe they have lots of data and generally need a quant, but their primary use case is some niche enterprise-only thing. There are only 100 customers in the world for the product, so they need their quant to occasionally be able to do an interview study. Or maybe the company needs a primarily qualitative researcher to do exploration and early-stage product development research, but they have pre-existing data that they want someone to look at. It's not enough data work to justify hiring a completely separate quant, but they want someone who can at least look.

In times of loose money and limitless growth, a company can just hire two separate people instead of trying to find a unicorn who can do both things. But in the current market of tight budgets and higher unemployment, a company can afford to be picky because there are enough unicorns floating around that it'd probably get lucky.

I'm going to put aside the question of whether companies are right to try to merge the two roles. There are obviously situations where merging roles is ill-advised, since there are benefits to maintaining specialization for tough, novel problems that require deep expertise. Large organizations that have enough work to keep multiple researchers occupied can clearly benefit from specialization of labor. But in other situations, there could be benefits to having a generalist who leans more heavily toward either quant or qual work while doing small stints in the other discipline. I don't have a broad enough industry/market view to know where the trends are going on this. Maybe the current demand for unicorns is a short-lived artifact of tighter market conditions, or maybe it's a more durable trend. I have no clue.

Instead, I wanted to look at how possible it is to become a "do it all" researcher in 2025. As a practical matter, how difficult is it to come to grips with quant stuff if you start off as a qualitative researcher, or vice versa?

Tooling improvements

The past 10-15 years have brought a massive improvement in many aspects of data access and manipulation tooling. What used to be annoying, bespoke custom code sent off to Hadoop clusters has largely been replaced with various dialects of SQL. Counting instances of events, gathering and analyzing chains of multiple events, and even crazier things like sessionization can all be done with varying levels of SQL complexity.
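To make that concrete: the gap-based sessionization that once meant a custom Hadoop job is now a window function in SQL, or a few lines of plain code. Here's a minimal Python sketch of the idea, assuming a hypothetical 30-minute inactivity timeout (the specific cutoff and data are made up for illustration):

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # assumed inactivity timeout

def sessionize(timestamps):
    """Assign a session id to each event: a new session starts
    whenever the gap since the previous event exceeds SESSION_GAP."""
    session_ids = []
    session = 0
    prev = None
    for ts in sorted(timestamps):
        if prev is not None and ts - prev > SESSION_GAP:
            session += 1
        session_ids.append(session)
        prev = ts
    return session_ids

events = [datetime(2025, 1, 1, 9, 0),
          datetime(2025, 1, 1, 9, 10),
          datetime(2025, 1, 1, 11, 0)]  # >30 min gap -> new session
print(sessionize(events))  # -> [0, 0, 1]
```

In SQL dialects the same logic is typically a LAG() window function to flag gaps, plus a running SUM over those flags to number the sessions.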

Dashboarding and reporting tools have not only become relatively easy to use, but also have many of the necessary integrations with data ingestion systems, making that work a lot less tedious than it once was.

Even if you have to drop into custom code, there's now a very robust ecosystem that surrounds working with data. Pandas, DuckDB, Polars, Jupyter notebooks, and the whole R ecosystem and community all provide tons of examples and solutions to the most common data problems any practitioner is going to bump into on a day-to-day basis.

At the end of the day, it's never been easier for someone who wants to learn to manipulate data to do so. It takes a lot less specialized knowledge about unrelated parts of the tech stack to get things working now. While it's useful to know the technical underpinnings for performance, cost, and scalability reasons, it's no longer necessary to know that stuff to start doing productive work.

Methods toolbox has remained the same

Product development data work largely answers a recurring set of questions:

  1. How did A affect B? (causality)
  2. We want to estimate the size of something. (counting)
  3. We want to ask this group something. (surveying)
  4. We want to better understand this group using data (exploratory analysis)

Within these broad categories, there's a bunch of different methods that will provide answers depending on the situation. Causality questions are often approached with experiments and quasi-experimental setups, survey situations have a lot of different designs and methods associated with them, and so on.

For the vast majority of the work we do, the list of methods hasn't changed much. A/B tests are still the standard way to show causality, and we still do a lot of decision-making by literally counting how often something occurs. There are lots of subtle innovations that can happen as methods are applied to new situations, but anyone coming out of a graduate-level research class has probably been exposed to most of the frameworks we use. There haven't been any major revolutions in how we fundamentally "Do Science".
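The A/B test workhorse really is that stable: the two-proportion z-test taught in any intro stats class is still what sits underneath most experiment readouts. A plain-Python sketch with made-up conversion numbers (this is the textbook normal-approximation version, not any particular platform's implementation):

```python
from math import sqrt, erf

def two_prop_ztest(conv_a, n_a, conv_b, n_b):
    """Classic two-proportion z-test for an A/B experiment.
    Returns the z statistic and two-sided p-value (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))           # standard normal CDF
    p_value = 2 * (1 - phi)
    return z, p_value

# Hypothetical numbers: 120/2400 vs 156/2400 conversions
z, p = two_prop_ztest(120, 2400, 156, 2400)
print(round(z, 2), round(p, 3))
```

Nothing in that calculation has changed in decades; what's changed is how easily you can get the four input numbers out of a warehouse.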

This is good news for anyone looking to bridge between quant and qual work, because it means whatever they've picked up through osmosis over the years is just as valid now as it was then.

I've actually worked with a number of qualitative researchers who have tagged along to enough stakeholder meetings with me that they picked up a decent intuition about what needed to be measured and why. They don't feel confident enough in their knowledge to make experimental design decisions without me, but to be very honest, they already knew what needed to be done to create a proper A/B test. I don't think it would be very difficult to coach those folks into having the confidence to make those decisions.

Also, all this applies to qualitative methods too. Methods like heuristic evaluations, interviews, diary studies, and journey mapping are all durable methods that get used constantly without much change, and I've personally learned how to do some of them by tagging along on qualitative studies and observing. I can't say I'm good at talking to strangers, but it's not impossible.

Math is obviously still the same

Because that's how math works. About the biggest "change" of note is that Bayesian methods and tools have become a bit more accepted by stakeholders. They're not all asking for t-tests and statistical significance anymore when more suitable alternatives exist for a specific problem.
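As one example of the kind of Bayesian alternative stakeholders have warmed to: instead of a p-value, you can report the posterior probability that variant B beats variant A. A minimal Beta-Binomial sketch (my own illustration with made-up numbers and uniform Beta(1, 1) priors assumed, using only the standard library):

```python
import random

random.seed(0)

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000):
    """Monte Carlo estimate of P(rate_B > rate_A) under independent
    Beta(1 + conversions, 1 + non-conversions) posteriors."""
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Hypothetical numbers: 120/2400 vs 156/2400 conversions
print(prob_b_beats_a(120, 2400, 156, 2400))
```

"There's about a 99% chance B is better" tends to land with stakeholders far more naturally than "p < 0.05", which is a big part of why this framing has gained ground.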

Math knowledge is always going to be a bottleneck for quantitative work, but the fundamental intuitions about important things like sample size, power, errors, and bias seem to translate readily across fields. Statistical intuition is surprisingly important in our work because we constantly have to use it while working with stakeholders and handling their questions and requests.

The hard part is that there's a big jump from intuitions to actual calculations and models, but I don't think it's a hard limiting factor for someone who's moonlighting on the occasional quant project. There's very little practical difference if someone accidentally reports a confidence interval instead of a prediction interval for a regression model's revenue estimate. For situations where such a difference does matter, like products that involve life-or-death stakes, safety, etc., I would hope everyone involved has the self-awareness to get a proper statistician on board to begin with, since not even most dedicated quantitative UX researchers are fully equipped for those problems.
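For readers who haven't hit the confidence-vs-prediction-interval distinction: a confidence interval brackets the *mean* and shrinks as your sample grows, while a prediction interval brackets a single *new observation* and stays wide. A toy sketch using a sample mean with made-up per-user revenue numbers (normal approximation, my own illustration):

```python
from math import sqrt
from statistics import mean, stdev

def intervals(data, z=1.96):
    """95% confidence interval for the mean vs. an approximate
    95% prediction interval for one new observation."""
    n, m, s = len(data), mean(data), stdev(data)
    ci_half = z * s / sqrt(n)            # uncertainty about the mean
    pi_half = z * s * sqrt(1 + 1 / n)    # uncertainty about a new point
    return (m - ci_half, m + ci_half), (m - pi_half, m + pi_half)

revenue = [102, 95, 110, 99, 104, 97, 108, 101, 96, 105]  # fake per-user revenue
ci, pi = intervals(revenue)
print(ci, pi)  # the prediction interval is much wider
```

Mixing the two up changes how wide your error bars look, but for a directional product decision it usually doesn't change the decision, which is the point above.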

Things have inched closer, but not enough to make it natural

Since the only real improvement in quantitative work has been the tooling, I think it's now easier for people to become mixed-methods researchers, but not so easy that everything can be smashed into a single job everywhere.

The methods for quant and qual work are defined well enough that anyone could pick up a book and learn how to execute a given method. But the prerequisite skills, math on the quant side and observation/synthesis on the qual side, are still disjoint. We all learned those coming out of our respective backgrounds of study, and there's often no natural need to learn the other thing. So people have to seek that knowledge out on their own, which is always going to be rare without incentives.

So, while the current job market trend is to want people who can do everything, I don't think it's a sustainable trend yet. It's just like how spending 10 years looking for "full stack data scientists" ultimately failed because it wasn't practical to source enough candidates with all the needed skills. The industry at large even tried to bootcamp its way into creating enough DS unicorns, and we still wound up with "data engineers," "ML engineers," and "analytics engineers." Not even dedicated data science degree programs that try to teach everything at once managed to solve the problem at a big enough scale to make it a reality.

At best, I think there's going to be a niche for people who can wield mixed methods at work. Just like the fabled "Full Stack Engineers," the places that really want one will compensate those folks handsomely for the privilege of having them. But the vast majority of people will still have a specialization one way or another, with the occasional side gig in the other discipline.

So, after all this, I think my conclusion is that as an individual, having more tools in your toolbox is never a bad thing, especially in highly competitive job markets. But we are far from the death of specialization.


Standing offer: If you created something and would like me to review or share it w/ the data community — just email me by replying to the newsletter emails.

Guest posts: If you’re interested in writing something, a data-related post to either show off work, share an experience, or want help coming up with a topic, please contact me. You don’t need any special credentials or credibility to do so.

"Data People Writing Stuff" webring: Welcomes anyone with a personal site/blog/newsletter/book/etc that is relevant to the data community.


About this newsletter

I’m Randy Au, Quantitative UX researcher, former data analyst, and general-purpose data and tech nerd. Counting Stuff is a weekly newsletter about the less-than-sexy aspects of data science, UX research and tech. With some excursions into other fun topics.

All photos/drawings used are taken/created by Randy unless otherwise credited.

Supporting the newsletter

All Tuesday posts to Counting Stuff are always free. The newsletter is self-hosted, and support from subscribers is what makes everything possible. If you love the content, consider any of the following ways to support the newsletter:

  • Consider a paid subscription – the self-hosted server/email infra is 100% funded via subscriptions, get access to the subscriber's area in the top nav of the site too
  • Send a one time tip (feel free to change the amount)
  • Share posts you like with other people!
  • Join the Approaching Significance Discord — where data folk hang out and can talk a bit about data, and a bit about everything else. Randy moderates the discord. We keep a chill vibe.
  • Get merch! If shirts and stickers are more your style — There’s a survivorship bias shirt!