The Gist
The problem: As the sole researcher supporting multiple product teams at a large retail organization, I faced requests that quickly outpaced my capacity, with no shared system for deciding what to tackle first.
The approach: Hosted a research intake workshop, clustered nearly 100 questions by theme, and built a custom scoring framework evaluating each cluster against four criteria: existing knowledge, team readiness, decision impact, and risk of inaction.
The outcome: A transparent research roadmap that aligned stakeholders on priorities — and a reusable template that was formally adopted and shared across the wider research organization.
Situation
More questions than any one researcher could answer
Supporting multiple product teams as a sole researcher isn't unusual — but it does create a structural tension. Every team has urgent questions. Every question feels high priority to the team asking it. And without a shared framework for evaluation, prioritization defaults to whoever is loudest or asked most recently.
To get ahead of this, I hosted a research intake workshop to collect questions from all the teams I was supporting at once. The goal was to surface the full picture before committing to anything. What came back was close to 100 questions spanning product design, content strategy, feature decisions, and broader business direction.
No researcher — working alone — could address that volume. The question wasn't "how do I do all of this?" It was "how do I build a defensible, transparent system for deciding what actually matters most?"
Key Findings
What the volume revealed
Volume isn't the problem — the absence of a shared evaluation system is
Close to 100 questions isn't inherently unmanageable. But without a consistent method to compare them, even a short list becomes a negotiation. Teams were implicitly competing for research attention rather than aligning around shared priorities.
Many questions were variations on the same underlying theme
Clustering the questions by topic revealed significant overlap. Questions that felt distinct to individual teams were often addressing the same friction from different angles. Grouping them made it possible to tackle a cluster of concerns through a single, well-scoped study.
Teams wanted to understand the "why" — not just the outcome
It wasn't enough to deliver a prioritized list. Teams needed to see how decisions were made. A transparent scoring model didn't just produce a roadmap — it built trust in the process and made it easier for teams to accept when their questions weren't prioritized first.
A structured approach created value beyond this project
Once the framework was applied and shared, it became apparent that this wasn't a one-time solution — it was a reusable method. Other researchers could apply the same scoring logic to their own portfolios, creating consistency across the organization.
The Approach
Four criteria. One clear roadmap.
After clustering the questions into thematic groups, I built a custom scoring framework in Excel to evaluate each cluster objectively. Every cluster was rated against four criteria, each scored on a 0–5 scale, producing a composite score that drove the final prioritization; a minimal sketch of that calculation follows the criteria below.
Existing Research Knowledge
How much do we already know about this topic? If strong existing knowledge exists, new research may not be the highest leverage move.
"What do we already know — and how confident are we in it?"
Team Readiness to Act
How prepared is the team to use the findings once research is complete? Research delivered to a team not yet positioned to act on it has limited near-term impact.
"If we had findings tomorrow, could they act on them?"
Decision Impact Level
How significant is this research to the decision at stake — whether for UI design, product direction, or broader business strategy? Higher-stakes decisions justify greater research investment.
"How much does this decision actually matter?"
Risk of Not Doing the Research
What are the consequences of delaying or skipping this study entirely? Some questions are low-risk to defer. Others carry real costs — to users, to the product, or to the business — if left unanswered.
"What's the cost of getting this wrong without evidence?"
What Changed
From noise to a navigable roadmap
A theme-clustered view of all research questions replaced ad-hoc, first-come-first-served intake — surfacing the full landscape before any commitments were made.
A four-criterion scoring model replaced gut-feel prioritization — giving every cluster an objective composite score grounded in business context, not individual urgency.
A shared research roadmap created cross-team alignment — teams understood not just what was prioritized, but why, reducing friction and negotiation over research attention.
A reusable framework template was formalized and presented to the wider research organization — turning a personal method into a shared organizational tool that researchers at all levels could apply.
Outcome
What started as a personal solution to an impossible workload became something with broader reach. The framework gave teams a window into how research decisions were made — and gave me a structured, defensible way to do the work that actually mattered most. That kind of transparency doesn't just reduce friction. It builds the conditions for research to be taken seriously.