A week before my whole life got turned upside down, I got the super super rare opportunity to go outside with my camera and just take photos of whatever caught my eye. It was a nice break.

A generalist in a "specialist" job market

Oct 14, 2025

The job market is all sorts of broken these days, as is well documented by lots of other people on the internets. AI applicants. AI filters. More applicants than positions, so hiring managers can afford to be picky and wait for a candidate that checks all their boxes and then some. It's rough out there.

To add to the pile, I'm slowly getting used to a new wrinkle. Such a tight market means that it's a very tricky proposition to portray yourself as a generalist. Between bot screeners and overworked hiring managers, asking someone to look at the list of things you've done and actively consider how it applies to the specific job they envision is apparently a lot to ask. So, the name of the game is some form of customization and refactoring to align what I've actually worked on over the past two decades with whatever local lingo the hiring manager speaks. Because the more you sit down and think about the work and how it actually gets used, the more you realize a lot of skills can be reframed to make sense under a different lens.

Quantitative UX Research roles are sometimes out there. Rarely. It's always been an odd job title that didn't catch on in the mainstream outside of a few places. But for now my "main" resume works along those lines, since all my work is normally framed as identifying issues using data, driving teams to address those issues, and measuring the impact of any interventions. In my case, it's heavy on logs-based analysis, leading metric definitions, getting alignment with stakeholders, helping teams figure out how their product works, and running occasional A/B experiments.

Then of course there are the piles and piles of Data Science and Analytics roles. Suddenly all those years of collecting, cleaning, and analyzing disgustingly messy usability data need to be realigned. Now we're maintaining giant data pipelines that the business relies on. We're doing feature engineering to feed the data into analytics models and dashboards, and in theory even ML models, because if data is clean enough to be used for inference tasks, it's likely got some utility for building ML models, depending on what you're doing. If anything, knowing when to say "this data is useless noise that's beyond redemption" is a good skill to have, but damned if it isn't tricky to get across that it's something you've had to do.

And finally, there are all the Data Engineering roles. Thousand-line SQL queries and directed graphs showing their horrific inter-dependencies? Check. Debugging jobs that break because the data is stored poorly and causing hot spotting, or because a bunch of people are dumping useless rows of data? That's a Tuesday. Smacking code into images and watching them fail in Kubernetes? Sure. When you work up and down the tech stack trying to understand where your data comes from and how to collect it better, you wind up exposed to the mess as a side effect. Meaning I completely forgot I could do all this stuff until a recruiter sent me a "Staff Engineer" position and the only thing on the entire description I hadn't done was have the word "engineer" in my job title.

So anyways, I feel like I have to put myself into a weird funhouse mirror setup and generate three different versions of myself to align with what people are now expecting. If anything, after 20 years of training I've become almost that fabled data science unicorn that does stats, coding, and business all at once, but the market has shifted to wanting specialists. They're all just different views of the same person and work skills, but the roles have now diverged so much that it's untenable to approach them with a single resume like back in the olden days. Even when all this other stuff is included in my work profiles, there's a huge difference in where to put the emphasis.

And other AI bullshit

So, if I have to see unsolicited bullshit on LinkedIn, I'm gonna share some of the horrors floating by. And this week's horror is a job posting from some AI/Data Science agency that's offering an hourly part-time contract where the key responsibilities are to "Review and refine AI-generated code, prompts, and analytical outputs. Validate technical concepts for statistical soundness and computational accuracy..." and so on.

Essentially, you know that recent meme going around about how there's now a growing market to "hire an engineer to fix your vibe-coded app"? There's now an AI/DS version of that.

I saw this exact same predatory bullshit happening in the translation scene a decade ago, when machine translation started getting "good enough" that people who didn't know better were convinced it was cheaper and faster to "let the machine do the translation and hire a translator, or even just an editor, to 'clean things up'". Translation is already not a very lucrative field to go into (it's why I did it as a side hobby instead of a career), so the tech was mostly used by extremely sketchy agencies that would underbid for jobs and sell low-quality translations to people who were unable to tell if their translation was any good.

The most frustrating thing about that model is that translation speed doesn't increase much with the help of machine translation tools. For an actual professional translator, checking an existing translation essentially requires making a new mental translation and comparing it to the generated text, so it takes about the same amount of time. But you're getting paid less for the same work because you're supposed to be "just checking", and that's supposed to be somehow faster (in the mind of the person paying).

Skipping the professional translator entirely and having an editor just "clean up the rough parts" is even worse, because now the inevitable errors (the equivalent of modern LLM hallucinations) get taken at face value. The number of sentences I've seen with completely opposite meanings, thanks to constructions like double negatives or sarcasm, is endless. But hey, if you don't care about quality then it's all good. This is how you get those laughable English-adjacent instruction manuals from cheap imported items.

And so, we're seeing AI slop do this for countless fields now, all at once. That job posting I saw is but the tip of the iceberg of a whole industry that sprang up around the same business model. "Entrepreneurs" and people who care more about money than craft are reaching for tools that mimic the work on the surface, in an effort to cut costs and make a quick buck. Most of them have no idea what the nuances of the daily work actually involve. The only way they're gonna learn to stop is when they feel economic pain from things failing, and not everyone will feel pain.

In the game/anime/manga industry where I used to do TL work, it essentially took giant grassroots fan outrage at obviously bad translation work to convince business leaders that the savings weren't worth it. You'd see Steam reviews calling out poor translations in games. Forums would catch on fire. Complaints would roll in. Even now, the same business leaders are always testing the boundaries of what is acceptable to the consumer, sneaking a bit of MTL here, a bit of AI there. They're always willing to touch the hot stove again because "we might just get away with it this time".

The same is going to have to happen for all these shops looking to genAI their way to bigger profits. Will that vibe-coded model actually do what it's supposed to do? Are your vibe-coded evaluation/re-training setups up to the task in an ever-shifting world? What about the chatbot app that's replacing half the customer service team? The clients are obviously not going to be able to tell in the short term, and by the time they can, the checks have long been cashed. They might not even realize what the root cause of the problem is.

About the only comfort I take in this is that eventually the chickens come home to roost. The translation market has essentially split: human translators are on one layer, competing for a smaller pool of work from the companies that are (for now) willing to continue paying for human translation, often as a differentiating point. The robo-translation agencies live in the lower price range of the market, continuing to feed off customers who don't know any better, don't care, or don't want to pay a human's rates.

I see similar things happening across a lot more industries as "AI" finds more and more niches it can sneak into without people being able to push back on it. Just like how most people hate off-shore call centers or those "tell me what you want to do" voice-controlled phone systems, sometimes pushback just isn't enough to move the needle. It's going to be a very exhausting period for all of us.


Standing offer: If you created something and would like me to review or share it w/ the data community — just email me by replying to the newsletter emails.

Guest posts: If you’re interested in writing a data-related post, whether to show off work or share an experience, or if you want help coming up with a topic, please contact me. You don’t need any special credentials or credibility to do so.

"Data People Writing Stuff" webring: Welcomes anyone with a personal site/blog/newsletter/book/etc that is relevant to the data community.


About this newsletter

I’m Randy Au, Quantitative UX researcher, former data analyst, and general-purpose data and tech nerd. Counting Stuff is a weekly newsletter about the less-than-sexy aspects of data science, UX research and tech. With some excursions into other fun topics.

All photos/drawings used are taken/created by Randy unless otherwise credited.

Supporting the newsletter

All Tuesday posts to Counting Stuff are always free. The newsletter is self-hosted, and support from subscribers is what makes everything possible. If you love the content, consider any of the following ways to support the newsletter:

  • Consider a paid subscription – the self-hosted server/email infra is 100% funded via subscriptions, get access to the subscriber's area in the top nav of the site too
  • Send a one time tip (feel free to change the amount)
  • Share posts you like with other people!
  • Join the Approaching Significance Discord — where data folk hang out and can talk a bit about data, and a bit about everything else. Randy moderates the discord. We keep a chill vibe.
  • Get merch! If shirts and stickers are more your style — There’s a survivorship bias shirt!