The standard PM playbook says: talk to users, define problems, prioritise solutions, ship. The problem is that step three, “prioritise solutions”, assumes a relatively stable technology surface. When your underlying model improves by 30% every six months, the solutions you ruled out last quarter might be trivially achievable today.
The Horizon Model
Instead of a traditional now/next/later roadmap, AI product teams benefit from thinking in three horizons defined by confidence rather than time:
Horizon 1 — High confidence (next 6 weeks). Features you can build with current model capabilities. Fully specced, in sprint, dependencies understood.
Horizon 2 — Medium confidence (6 weeks – 6 months). Features that require capability improvements you expect but haven’t validated. Directionally committed, not fully specced.
Horizon 3 — Low confidence (6+ months). Strategic bets contingent on model improvements that aren’t guaranteed. Shared as vision, not commitment.
The key is being explicit about which horizon each item lives in and why.
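One way to make that explicitness concrete is to record the horizon and its rationale on every roadmap item. A minimal sketch in Python; the item names and rationales are hypothetical examples, not part of the model itself:

```python
from dataclasses import dataclass
from enum import Enum

class Horizon(Enum):
    H1 = "high confidence (next 6 weeks)"
    H2 = "medium confidence (6 weeks to 6 months)"
    H3 = "low confidence (6+ months)"

@dataclass
class RoadmapItem:
    name: str
    horizon: Horizon
    rationale: str  # why the item lives in this horizon

# Illustrative roadmap entries (hypothetical features):
roadmap = [
    RoadmapItem("Inline summarisation", Horizon.H1,
                "works with current model capabilities"),
    RoadmapItem("Multi-document reasoning", Horizon.H2,
                "needs capability gains we expect but haven't validated"),
    RoadmapItem("Autonomous workflows", Horizon.H3,
                "contingent on model improvements that aren't guaranteed"),
]

for item in roadmap:
    print(f"{item.horizon.name}: {item.name} - {item.rationale}")
```

Forcing a `rationale` on every item keeps the "and why" from the horizon definitions from getting lost in a flat backlog.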
Capability Triggers
Define capability triggers for items in Horizons 2 and 3. A trigger is a testable condition: “When our model achieves >85% accuracy on our evaluation set for task X, we will move feature Y from H2 to H1.”
This turns vague future-state planning into actionable monitoring. Your engineering team runs evals. When a threshold is crossed, the roadmap updates automatically — no quarterly planning meeting required.
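A trigger of this shape reduces to a threshold check over eval results. A hedged sketch, assuming your evals emit per-task accuracy scores; the trigger table, task names, and numbers are illustrative, not a real API:

```python
# Capability triggers: feature -> (eval task, accuracy threshold
# that promotes the feature from H2 to H1).
TRIGGERS = {
    "structured-extraction": ("task_x_eval", 0.85),
    "multi-step-agent": ("agent_eval", 0.90),
}

def check_triggers(eval_results: dict[str, float]) -> list[str]:
    """Return the features whose capability trigger has fired."""
    promoted = []
    for feature, (task, threshold) in TRIGGERS.items():
        score = eval_results.get(task)
        if score is not None and score > threshold:
            promoted.append(feature)
    return promoted

# Latest eval run (illustrative numbers): only task_x_eval
# has crossed its threshold, so only that feature is promoted.
results = {"task_x_eval": 0.87, "agent_eval": 0.82}
print(check_triggers(results))  # -> ['structured-extraction']
```

Run this against each eval batch and the "actionable monitoring" above becomes a few lines of CI, with horizon promotions surfaced as the thresholds are crossed rather than discovered at quarterly planning.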
Communicating Uncertainty Without Losing Confidence
Stakeholders hate uncertainty. But presenting an AI roadmap as certain is worse — it erodes trust when reality diverges from the plan, which it will.
The framing that works: “Here is what we will definitely build. Here is what we are betting on. Here is what we are watching.” Confidence comes from having a clear process for updating the plan, not from pretending the future is knowable.