How to Build a Product Strategy That Actually Gets Buy-In
You’ve spent months building a product strategy. Market research, competitive analysis, financial models, the works. You walk into the boardroom confident. Fifteen minutes later, an executive asks one question you didn’t anticipate and suddenly your recommendation feels shaky.
I’ve been there. Multiple times. Here’s what I learned about building strategies that survive C-suite scrutiny and the framework that helped me get executive approval for a major AI investment.
The Approach
When setting up the product strategy team, we needed a repeatable framework for investigating opportunities and making recommendations to senior leadership. These were typically longer-term opportunities (1 to 3 years out): big bets rather than incremental improvements. That made quantifying impact harder (I covered this in my first post), but the framework helped us build conviction even under uncertainty.
We landed on four key areas to investigate for every strategic question and a clear scoping document we’d fill in and distribute upfront.
The Strategy One-Pager
Before diving into research, we’d create a one-page scoping document setting out the strategic question we were answering and how we planned to investigate it.
This kept us focused and gave stakeholders visibility into our approach before we disappeared for weeks of research.
Now, here are the four key areas we’d investigate:
Market Context
This involved understanding the competitive landscape, market trends, and what customers actually want. We used several approaches:
Desk research - Mostly through Google (nowadays you’d probably get AI to source this), with the caveat that publicly available information tends to be lower quality.
Market research portals - We trialed several and settled on retail-specific ones (since we were in grocery retail). These had reports on market shifts, estimates on market size by territory and subcategory, and other valuable trends.
Expert interviews - This was new to me and genuinely eye-opening. We trialed a few expert networks and settled on a couple we used regularly. You’d be surprised how much insight you can get from a 30-60 minute interview with someone who worked at a competitor or is a market expert.
Customer interviews - When possible and appropriate, either using existing customer data or conducting new research with customers was always useful.
Commercial Viability
This is about estimating the size of the prize. Would this opportunity generate more revenue? Save costs? Where in the P&L would it have an impact?
We built commercial models with multiple scenarios - pessimistic to optimistic - calculating the impact on EBITDA, the key metric our stakeholders cared about. For example, when looking at optimising an existing proposition, we’d model: a pessimistic view (5% opex improvement), realistic (10% improvement) and optimistic (15% improvement). Each scenario showed which P&L lines would be affected and the payback period.
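To make that concrete, here’s a minimal sketch of this kind of scenario model. The opex base, the build cost, and the simplification that opex savings flow straight through to EBITDA are illustrative placeholders, not figures from any real engagement:

```python
# Minimal scenario model sketch. All figures are hypothetical; adapt the
# EBITDA mapping to how your own P&L actually works.

SCENARIOS = {"pessimistic": 0.05, "realistic": 0.10, "optimistic": 0.15}


def model_scenario(opex_base, improvement, build_cost):
    """Return the annual EBITDA impact and simple payback period for one scenario."""
    annual_saving = opex_base * improvement      # assume the opex saving flows to EBITDA
    payback_years = build_cost / annual_saving   # simple payback, no discounting
    return annual_saving, payback_years


if __name__ == "__main__":
    OPEX_BASE = 20_000_000   # hypothetical annual opex for the proposition
    BUILD_COST = 1_500_000   # hypothetical one-off investment

    for name, improvement in SCENARIOS.items():
        saving, payback = model_scenario(OPEX_BASE, improvement, BUILD_COST)
        print(f"{name:>11}: EBITDA impact £{saving:,.0f}/yr, payback {payback:.1f} years")
```

Even a back-of-the-envelope model like this forces you to name which P&L line moves and by how much, which is exactly the conversation executives want to have.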
The learning curve: None of us were finance experts initially. We leaned heavily on team members from strategy consulting backgrounds and got close to our finance team to understand the P&L and how the business made money, then built scenarios on top of that foundation.
My advice? Get comfortable with your company’s financials early. Understand how your business makes money and what really moves the needle. I’d highly recommend Simonetta Batteiger’s Financial Concepts for Product People course - it’s a great introduction with practical frameworks you can apply right away.
Technical Feasibility
This is where we collaborated with tech teams to understand: What products would be involved? How much effort? How complex? Do they think this is something we can build?
Key learning: Involve tech teams early. We learned to have a kick-off with them as soon as possible, explaining why we’re investigating this opportunity, what we’re trying to achieve, and what input we need from them. This made the process much smoother than when we tried to do research first and involve them later.
Clear Narrative
Because we presented to C-suite executives, we needed crisp presentations with strong executive summaries. We’d summarise findings, build up to the conclusion, provide a clear recommendation, then break it down in subsequent slides.
This came naturally to our strategy consulting hires, but the rest of us learned along the way. The key was telling a compelling story backed by data: digging into the implications rather than just presenting a pile of research.
The 50-Slide Trap
Early on, we made the classic mistake of assuming more research equals more credibility. We’d walk in with comprehensive decks trying to cover every angle. Then we learned the hard way: executives don’t want exhaustive research. They want confidence that you’ve thought through the right questions.
We evolved to starting with a hypothesis-driven executive summary before even diving into the research. This made it clear what assumptions we wanted to test and kept us focused.
We realised we needed to answer three things for the exec presentation:
What’s the context and problem we’re solving?
What’s the recommendation?
What are the key risks?
If we could articulate those clearly, we were ready to present. A crisp narrative with clearly articulated risks and uncertainties beats a 50-slide deck with every possible data point.
Real Example: Building an AI Strategy
Let me walk you through how I applied this framework when building an AI strategy in a previous role.
The Context
We had various AI initiatives scattered across the product portfolio, but no coherent strategy. Different teams were embedding AI in different ways, and we weren’t sure if we were solving the right problems or just adding AI for the sake of it.
Sound familiar? This is where many companies are with AI right now. Lots of experiments, no clear direction on where to place the big bets.
The Research Process
Market landscape: I started by gathering reports from Gartner to understand the broader AI market. But I needed deeper insights specific to our industry.
Third-party market research: Our internal product marketing team didn’t have capacity for deep research, so I commissioned Dialectica (an expert network with an in-house research team) to do some bespoke market research for us. Within two weeks, I had a 20-slide deck with solid insights based on desk research and expert interviews.
Expert interviews: We also ran our own interviews with former employees of main competitors who were ahead of us in leveraging AI. This helped us understand what worked and what didn’t work for them - learning from their mistakes rather than making our own.
Internal portfolio review: I then mapped our existing product portfolio to identify the best candidates for embedding AI, looking at where we had the strongest foundation and the biggest opportunity.
Connecting the dots: I connected insights from the market research and expert interviews with our internal portfolio, identifying areas where we had a genuine right to win. This gave me a priority order of opportunities with example initiatives under each one. I also incorporated a high-level view of value vs effort with input from engineering, to serve as a guide for discussions with leadership. The commercial viability discussion was more high-level at this stage, as this was a strategy for an area that already had momentum in the business; what was missing was a clear plan of action focused on where we could differentiate.
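For illustration, here’s a rough sketch of how a value-vs-effort view like this could be captured and ranked. The opportunity names and scores are hypothetical placeholders, not the actual portfolio; in practice the scores came from the market research and engineering input described above:

```python
# Illustrative value-vs-effort ranking to frame a leadership discussion.
# Names and scores are made up; replace with your own portfolio and estimates.
from dataclasses import dataclass


@dataclass
class Opportunity:
    name: str
    value: int    # 1-5: expected impact, based on market research and portfolio fit
    effort: int   # 1-5: delivery effort, based on engineering input


opportunities = [
    Opportunity("AI-assisted workflow automation", value=5, effort=3),
    Opportunity("Conversational search", value=3, effort=4),
    Opportunity("Predictive forecasting", value=4, effort=2),
]

# Rank by value relative to effort; the output is a talking point, not the decision.
for opp in sorted(opportunities, key=lambda o: o.value / o.effort, reverse=True):
    print(f"{opp.name}: value={opp.value}, effort={opp.effort}, "
          f"ratio={opp.value / opp.effort:.1f}")
```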
The Key Insight: The Right-to-Win Test
What became clear was this: the question isn’t “Where can we add AI?” It’s “Where do we have a right to win and where can AI create a step change?”
Enterprise buyers are inundated with “AI-native” claims. Meanwhile, the real work (data quality, model fine-tuning, latency, and cost per inference) takes longer and costs more than teams expect.
Our strategy reframed the conversation: map where competitors already dominate, acknowledge where we don’t have an edge, and double down on the products where we could use AI to 10x user productivity. That’s where the value and the buy-in came from. Before adding any strategic initiative, we’d ask: “If our top competitor launched this tomorrow, would customers still choose us?” If the answer is no, you’re playing catch-up, not creating differentiation.
The Recommendation Structure
Finally, I structured the recommendation deck like this:
Current context - Where we were with our AI portfolio, the problem (no strategy), and what the current roadmap looked like
Research findings - Market research insights, themes from expert interviews, and portfolio review with opportunities mapped and sized
Recommendation - Focus on the highest-value opportunities backed by research, but validate with customers first
Next steps - Set up the customer special interest group and continue developing the AI portfolio with continuous customer feedback
The Outcome
The strategy was very well received by Senior Product Leadership and C-level executives. On the back of this, we secured additional investment for the following year to accelerate initiatives in this space.
The Takeaway
There’s no single “right” way to do product strategy. There are plenty of excellent frameworks out there, such as Martin Eriksson’s decision stack. But ultimately, it comes down to:
Understanding what’s important for the business
Knowing the market context
Understanding your current situation
Bringing together enough information to make a solid recommendation
Presenting it in a compelling way
The framework I’ve shared here worked for us, but you should adapt it to your context. Maybe you don’t need detailed commercial modelling, and a high-level business case would be enough. Maybe technical feasibility is less important than customer validation. The key is being intentional about what information you need to make a confident recommendation.
Here’s what I’d suggest: Pick one strategic question you’re facing right now. Write down what information you’d need to answer it confidently. Write down a hypothesis. Then ask: which of those things can you learn this week? Start small. One expert interview. One competitive analysis. One conversation with your finance team about the P&L. Movement beats perfection.
I’d love to hear about the strategic questions you’re tackling. What’s the one question you need to answer to move forward?
P.S. If you found this useful, you might also like my previous post on building a product strategy team from scratch. That covers the what and why of setting up the team, while this post focuses on the how of the methodology we used.




