Research keeps getting cut. Not because leaders don't believe in it. Because researchers keep making the wrong argument to the wrong people in the wrong language.
That's on us.
The evidence problem nobody names
Most executive teams make million-dollar decisions based on the least reliable evidence available.
Their own opinions. Sunk costs too politically expensive to admit. Consultants paid to validate whatever the strategy already is.
When real user insight contradicts what a senior leader has already committed to, the research doesn't win. It gets ignored. Or leaders shop for signals that confirm what they intended to do anyway, regardless of how flawed those signals are.
The research sits in a repository. The decision gets made anyway.
In a startup, this is visible fast. The product fails. The lesson is traceable.
In a large enterprise, failure is slow and invisible. Success measures are vague. Accountability is distributed. The cumulative cost of bad assumptions gets absorbed across enough budget lines that nobody ever has to answer for the original decision.
Death by a thousand cuts. Each one survivable. The total, enormous.
We've been classifying research wrong
Research isn't one thing. Most organisations fund it as one thing, which is why they always cut the wrong part.
There are three distinct types. Each needs a different investment conversation.
Research as quality assurance. Prototype testing with real users. Usability validation. Task completion in actual conditions. This isn't a research cost. It's a quality cost. The same category as engineering QA. Nobody cuts QA and calls it a research efficiency. Stop cutting user testing and calling it one.
Most organisations use user acceptance testing (UAT) instead. It doesn't cover the same ground. UAT validates against business requirements written by people who aren't users, based on assumptions about what users need. It answers one question: did we build it right? User research answers a different question: did we build the right thing?
UAT cannot tell you whether the product solves the job users actually need done. It cannot tell you whether real people can complete tasks in the conditions they actually work in. It has no mechanism to answer the most important question: is this worth anyone's time and money?
UAT asks:

- Did we build it right?
- Does it meet the specification?
- Does the task complete?
- Has the bug been fixed?

User research asks:

- Did we build the right thing?
- Does it solve the actual job?
- Is this worth the user's time?
- Does this create real value?
You can pass every UAT criterion and still ship something nobody wants. AI-assisted UAT will improve. It will never close the gap between assumed use cases and real human behaviour. That requires real humans. No workaround exists.
Research as strategic intelligence. Discovery work. Generative research. Longitudinal tracking of what customers actually do, not what they say they do. This is what informs product direction before commitments get locked. It's the early signal that a direction isn't working, before the sunk cost becomes a political identity.
ResearchOps as infrastructure. Repositories. Participant panels. Tooling governance. Consent frameworks. Skills taxonomies. This is the genuine capital investment. It doesn't depreciate. Every study added makes it more valuable, not less. It compounds.
Fund all three as a single overhead line and you ship untested products, make uninformed bets, and destroy infrastructure that took years to build. Simultaneously. Invisibly. With no single person accountable for any of it.
Research is an intelligence system. Start treating it like one.
At its most mature, ResearchOps isn't the operational backbone of a design function.
It's the infrastructure of an organisational intelligence system.
The capability that lets a business sense what's actually happening with real customers. Anticipate behaviour shifts. Make strategic bets based on evidence rather than the loudest voice in the room.
The right comparison isn't QA or R&D. It's business intelligence. It's the market sensing capability that strategy consultants charge millions for, except built as a permanent internal asset that compounds rather than an engagement that walks out the door and invoices you.
This only works if leadership is equipped to receive it. Most isn't.
The room we forgot to research
Here's the thing the ResearchOps community won't say out loud.
We are experts in human behaviour, motivation, and decision-making. We apply those skills rigorously to users. We map their jobs to be done, their mental models, their real contexts.
Then we walk into a leadership meeting and present to a room full of humans we haven't researched at all.
We research users. Then we forget to research the room.
Every executive in that room has incentives and agendas. Sunk costs they're defending. Public commitments they can't afford to reverse. Consultants they trust more than their own teams. A definition of success that has nothing to do with the user outcome on your slide.
Experienced researchers already have every skill needed to understand that room. We know how to identify what drives decisions. We know how to map the gap between stated preference and actual behaviour. We know how to shift mental models rather than just presenting data at people.
We just don't apply those skills to the people who control our budgets.
Executives don't act on findings. They act on conviction. Conviction comes from narrative. From insight that feels like their idea, that supports the outcome they're already seeking, that gives them confidence rather than the discomfort of being told they're wrong.
The best research in the world, presented as a findings report, loses to a confident consultant with a story. Every time. Not because the consultant is right. Because they understand that decisions are made by humans, not by evidence.
The pillar we haven't built
ResearchOps has invested heavily in the infrastructure for producing insight. It has invested almost nothing in the infrastructure for translating insight into decisions.
Those are different capabilities. And the second one is what actually determines whether research changes anything.
Translating insight into decisions means understanding each stakeholder's incentives before you walk into the room. It means designing how insight is communicated the same way you'd design a product, for the specific human receiving it, at the specific moment they need to act. It means embedding research into the rituals where decisions actually happen, not presenting it as a separate agenda item that gets noted and forgotten.
It means making the cost of ignoring evidence visible. Not by confronting leaders with data they'll dismiss. By making evidence so continuous and so embedded that acting against it requires an explicit choice that others can see.
This isn't manipulation. It's applying research craft to the right problem.
A challenge to our own community
Why hasn't the ResearchOps community named this?
Because researchers are uncomfortable with the political dimension. Researching the room feels like playing politics, and we've built a professional identity around objectivity that sits uneasily with organisational influence.
Because the community has focused on scaling execution rather than scaling impact. Better tools. Faster research. More studies. Not better decisions.
And because naming it requires admitting something most of us don't want to say. Brilliant research alone isn't enough. The insight isn't the product. The behaviour change in the room is the product. And we haven't been designing for that with anything like the rigour we apply to designing for users.
We keep building better infrastructure for insight that doesn't change decisions. Then we wonder why the budget gets cut.
What actually happened when we got it right
400,000+ hours saved in product and development time at REA Group
$300,000+ annual tooling consolidation at Bupa
Research team doubled based on demonstrated impact
At REA Group, we built research as infrastructure, not headcount. A repository accumulating knowledge across ten lines of business. An ops model that made every researcher faster and every insight reusable. The design system that came out of that capability saved over 400,000 hours of product and development time, tracked through engineering metrics, not design team estimates. That number survives CFO scrutiny because it speaks CFO language.
At Bupa, doubling the research team from eight to sixteen was the first move. Consolidating tooling, saving over $300,000 annually, and building a repository that made research investment visible was the second. But the move that determined whether any of it changed decisions was learning how each business leader defined value and designing how insight reached them accordingly.
The infrastructure matters. The narrative matters more.
So here's what needs to change
Stop funding research as a single overhead line. Fund it as three distinct investments: quality, intelligence, and infrastructure.
Stop treating user research for prototype testing as optional. It's a quality cost. Give it the same status as engineering QA and the same cost-of-failure logic.
Build ResearchOps infrastructure like you'd build any other long-lived organisational system. With governance, measurement, and a compounding return argument that survives a budget review.
And then research the room with the same rigour you research users. Map the incentives and decision-making patterns of the people who control budgets. Design the narrative for the specific human receiving it. Embed insight into the moments decisions are made.
We have every skill required to do this. We use those skills brilliantly every day.
On users.
It's time to use them on the room.
Benson Low is Head of Design Capabilities at Bupa Australia and a board member of the global ResearchOps Community.