Commercial Real Estate Portfolio Intelligence Platform
Commercial Real Estate · Enterprise Product Strategy · JTBD

The team knew they were building a portfolio platform. Research defined what that actually meant.

A global real estate team was expanding a site selection tool into something broader. Before any product decisions were made, a Jobs-to-be-Done study across four roles and two continents answered the questions that had no clear answers yet: who is this for, what does each role need, and what do we build first?

Industry

Commercial Real Estate · Enterprise Product Strategy

Methods Used

JTBD Interviews · Thematic Analysis · Cross-role Pattern Mapping

Research Scale

17 interviews · 4+ roles · North America & Europe

Key Outcome

Evidence-based product scope + sequenced build roadmap grounded in actual user jobs

The gist

The problem: A global real estate team was expanding an existing site selection tool into a full portfolio management platform. The vision was there — the clarity wasn't. Who was it for beyond Transaction Managers? What did each role actually need to accomplish? And when the team had to make trade-offs, what came first? Without research, all of those answers were assumptions.

What we did: A JTBD study — 17 one-on-one interviews across Transaction Managers, Asset Managers, Strategic Portfolio Leads, and others, across North America and Europe. Rather than collecting feature requests, every conversation focused on real situations: what were you trying to decide, and what actually happened when you tried to get the information you needed?

What changed: For the first time, the team had an evidence-based picture of who they were building for — not assumed, not inherited from the old tool. Research identified 7 universal jobs shared across every role, provided the language for cross-functional prioritization, and delivered a sequenced build strategy: data access first, then operational workflows, then strategic planning.

A platform with a vision and a starting assumption — but no validated scope.

The team had a clear starting direction: build for Transaction Managers, help them use portfolio data in site selection decisions. That made sense for the original tool. But as the scope expanded toward a full portfolio management platform, questions multiplied. Which other roles would depend on this? What were they trying to accomplish that they couldn't today? And where exactly did shared needs end and role-specific ones begin?

The answers mattered because they would determine everything — what to build, for whom, in what sequence, and how to make prioritization decisions when the team inevitably had to trade off between capabilities. Without grounding those answers in research, the team would be building on inherited assumptions from a smaller tool with a narrower scope.

Before any product decisions were made, I proposed a Jobs-to-be-Done study. Not to validate a roadmap that already existed — but to create the foundation one could actually be built on.


17 interviews. One question that mattered more than any feature request.

The study ran 17 one-on-one, 60-minute interviews across roles and regions — Asset Managers, Transaction Managers, Strategic Portfolio Leads, and others in North America and Europe. The central principle was simple: focus on outcomes, not requested features.

"Walk me through the last time you needed to pull together portfolio information — what were you trying to decide, and what happened?"

That question — and the stories it produced — revealed something that a feature survey never would have: the actual jobs people were trying to get done, and where the current reality was failing them. The conversations surfaced over 200 individual job stories across all participants, which were then analyzed thematically to identify patterns across roles and regions.

The analysis produced two outputs: seven universal jobs shared across every role regardless of function, and a set of role-specific jobs unique to how each function engaged with the portfolio. That distinction became the organizing framework for everything that followed.


Three things the team didn't know before the research — and couldn't have assumed their way to.

1

The platform needed to serve more roles than originally scoped — and jobs fell into two distinct layers

The platform wasn't just a Transaction Manager tool. Multiple roles depended on portfolio data in different ways, and the research made clear that treating them all the same, or building only for one, would fail the others. What made this finding actionable was the two-layer structure it revealed: some jobs were universal, shared across every role regardless of function; others were specific to how each role engaged with the portfolio. That distinction determined what had to be built for everyone versus what could be built for specific functions.

2

Before anything else could work, people just needed to get to reliable information

The most consistent pain across every role wasn't sophisticated analysis — it was basic access. People were navigating 10+ fragmented systems, spreadsheets, and colleagues just to answer routine questions. A simple portfolio query took 45+ minutes. And when they did find data, they spent more time verifying whether it was accurate than using it — because the same data point could produce different answers depending on which system it came from. Research made clear: until trust in data was established, no other capability in the tool would reach its potential. This finding determined the sequence of the entire build strategy.

3

Institutional knowledge was leaving with every team departure

Past decisions, site history, alteration records, the rationale behind deals made years ago — this knowledge lived in email inboxes and people's heads. When someone left, it left with them. One interview surfaced a striking example: a site of over 100,000 square metres had been effectively "forgotten" — not flagged in any active system — because the people who knew about it were gone and the documentation had never been preserved centrally. Framing knowledge preservation as a product requirement, not just an operational inconvenience, changed how the team thought about what the platform needed to do.


Universal jobs and role-specific jobs — the distinction that structured the entire product.

The 200+ individual job stories were consolidated into a two-layer structure. Seven jobs were universal: shared across every role, regardless of function or region. Nine additional jobs were role-specific, reflecting the distinct ways each function engaged with the portfolio. Understanding which layer a job belonged to determined how it should be prioritized and for whom it should be built.
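The cross-role pattern mapping behind that split can be sketched as a simple grouping exercise. This is an illustrative sketch only: the actual study coded 200+ stories qualitatively, and the sample stories and job themes below are made up for demonstration.

```python
from collections import defaultdict

# Hypothetical coded stories: (role, job theme) pairs, one per interview story.
stories = [
    ("Transaction Manager", "data quality & standardization"),
    ("Asset Manager", "data quality & standardization"),
    ("Strategic Portfolio Lead", "data quality & standardization"),
    ("Transaction Manager", "site evaluation"),
    ("Asset Manager", "real-time issue resolution"),
]

# Map each job theme to the set of roles whose stories mention it.
roles_per_job = defaultdict(set)
for role, job in stories:
    roles_per_job[job].add(role)

all_roles = {role for role, _ in stories}

# A job that appears for every role is "universal"; the rest are role-specific.
universal = {job for job, roles in roles_per_job.items() if roles == all_roles}
role_specific = set(roles_per_job) - universal
```

With this toy data, only "data quality & standardization" lands in the universal layer, which mirrors the logic of the study: a job earns universal status only when every function's stories surface it.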

Universal — all roles

7 jobs shared across every function

These jobs had to be solved first — not because they were the most exciting, but because every other capability depended on them.

Integrated portfolio intelligence & flexible reporting
Data quality & standardization
Landlord consent & site alteration management
Strategic lease renewal planning & execution
Landlord relationship management
Integrated asset lifecycle & handover management
Historical data & knowledge preservation

Role-specific — by function

9 jobs unique to how each role works

These jobs reflected the distinct responsibilities and expertise of each function — essential context for building role-appropriate experiences.

Strategic Portfolio Leads: asset value optimization, idle asset recovery, space utilization
Asset Managers: lifecycle transitions & compliance, real-time issue resolution
Transaction Managers: land acquisition, deal execution, site evaluation
Tech & Risk: risk intelligence and project coordination

A sequenced build strategy — derived directly from how the jobs related to each other.

The research didn't just identify what to build. It made clear in what order — because the jobs themselves had dependencies. Until the foundational data jobs were addressed, operational and strategic jobs couldn't reach their potential. The sequence wasn't a preference; it was a logical consequence of what the research revealed.
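The claim that the sequence follows logically from the dependencies can be made concrete with a small sketch. The phase names mirror the build strategy, but encoding them as a dependency graph is purely illustrative, not a tool the team used.

```python
from graphlib import TopologicalSorter

# Each phase maps to the phases it depends on (hypothetical encoding).
dependencies = {
    "operations": {"foundation"},               # workflows need trusted data
    "strategy": {"foundation", "operations"},   # planning needs both
}

# Topological order: any valid build sequence must respect the dependencies,
# which forces foundation first and strategy last.
order = list(TopologicalSorter(dependencies).static_order())
```

Because each phase depends on all earlier ones, only one valid ordering exists, which is the point of the finding: the sequence is a consequence of the job dependencies, not a preference.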

01

Phase 1 — Foundation

Data access & quality

Eliminate the 45+ minute searches. Establish a single reliable source of truth. Until users trust the data, nothing else matters.

02

Phase 2 — Operations

Workflows & coordination

Enable landlord consent, lease renewals, and lifecycle handovers — the operational jobs that required reliable data to function at all.

03

Phase 3 — Strategy

Strategic planning

Support portfolio-wide intelligence, relationship management, and knowledge preservation — the jobs that required both good data and operational clarity to be meaningful.

A shared, evidence-based picture of who the platform was for — For the first time, the team had a validated scope grounded in what people actually needed to accomplish — not assumed from a legacy tool's narrower remit, and not inherited from a single function's perspective.

A principled answer to the hardest product question — "What do we build first?" is usually the question that reveals whether a team is working from clarity or from politics. The sequenced strategy gave the team a defensible, research-grounded answer: data access and quality before everything else.

Universal jobs as a shared language across product, design, and engineering — When prioritization conversations happen without a shared vocabulary, they default to advocacy and seniority. Universal jobs gave product, design, and engineering a common reference point — making alignment faster and decisions more grounded.


Research as the foundation — not a checkpoint, but the starting point for everything that followed.

The JTBD study gave the team something that no stakeholder workshop, competitor analysis, or inherited tool roadmap could have provided: a clear, evidence-based answer to who this platform was actually for and what it genuinely needed to do.

The universal jobs framework became the organizing structure for the product. The sequenced build strategy gave engineering and design a principled starting point. And the role-specific jobs ensured that as the platform matured, each function's distinct needs could be addressed without collapsing everything into a generic one-size-fits-all experience.

The most important outcome wasn't any single finding — it was the shift from assumption-driven scoping to evidence-driven direction. In enterprise product development at this scale, that shift is the difference between building something that grows into what users need and building something that has to be rebuilt once reality catches up with the assumptions.

Research at a glance

17

one-on-one interviews across roles and regions — producing 200+ individual job stories

7

universal jobs identified — shared across every role, becoming the product's organizing framework

3

sequenced build phases — data, then operations, then strategy — grounded directly in job dependencies
