# Discovery Playbook
This is the detailed reference for running the discovery track. For the philosophy and how the tracks connect, see the Discovery Overview.
## Discovery Activities
### 1. Problem Discovery
Goal: Identify and validate problems worth solving.
Activities:
- Stakeholder interviews — Regular conversations with students, advisors, administrators, and partners. Not "what features do you want?" but "walk me through the last time you tried to..." Listen for workarounds, frustrations, and unmet needs.
- Usage data analysis — Review analytics dashboards (Metabase), session reports, and journey analysis. Where do students drop off? Where does quality break? What triggers failures?
- Support/feedback review — What are users asking about? What are advisors reporting? What came up in the last co-creation session?
- Competitive/landscape scan — What are other post-secondary exploration tools doing? What's working in adjacent spaces? What can we learn from guidance counselor workflows?
Outputs:
- Problem statements: "Students in [situation] struggle with [problem] because [root cause], which leads to [consequence]."
- Opportunity assessments: How many students are affected? How severe is the problem? How often does it occur?
Cadence: Ongoing. The PM should be doing some form of problem discovery every week — not just before planning.
### 2. Problem Framing
Goal: Prioritize which problems to solve next based on evidence.
Prioritization criteria:
- Impact: How many students/users are affected? How severe is the pain?
- Alignment: Does solving this advance our mission (post-secondary pathway exploration for WA students)?
- Evidence: Do we have data supporting this problem, or is it an assumption?
- Feasibility: Is this something we can meaningfully address with our current team and technology?
- Urgency: Is there a time constraint (e.g., enrollment deadlines, partner commitments, grant milestones)?
Tools:
- Opportunity Solution Tree — Map desired outcomes to opportunities (problems/needs) to potential solutions. Keeps the team focused on outcomes and prevents jumping straight to features.
- Problem scoring — Simple weighted scoring against the criteria above. Not a formula — a structured conversation.
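The Opportunity Solution Tree mentioned above can be sketched in miniature. It reads top-down, from a desired outcome to opportunities (problems/needs) to candidate solutions; every node below is invented purely for illustration:

```
Outcome: more students complete a pathway exploration session
├─ Opportunity: students abandon when results feel generic
│   ├─ Solution: explain why each pathway matched
│   └─ Solution: mid-session progress summary
└─ Opportunity: advisors can't see a student's exploration history
    └─ Solution: shareable session recap
```

The tree keeps solution ideas anchored to a stated opportunity, which in turn is anchored to the outcome — if a proposed feature can't be placed on the tree, that's a signal to ask what problem it solves.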
Outputs:
- Prioritized problem list with evidence summaries
- Clear "we're solving THIS problem next" decisions with rationale
Cadence: PM maintains this continuously. Formal prioritization review every two weeks, or as needed when new evidence surfaces.
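The problem-scoring conversation can be supported by a small script. This is a sketch, not prescribed tooling: the five criteria come from the list above, while the weights, the 1-5 scale, and the two example problems are invented for illustration.

```python
# Illustrative weighted scoring for the five prioritization criteria.
# Weights and 1-5 scores are invented; treat the output as a
# conversation starter, not a decision rule.

CRITERIA_WEIGHTS = {
    "impact": 0.30,
    "alignment": 0.25,
    "evidence": 0.20,
    "feasibility": 0.15,
    "urgency": 0.10,
}

def score_problem(scores: dict) -> float:
    """Weighted sum of 1-5 scores, one per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Two hypothetical problems from the opportunity backlog.
problems = {
    "Students drop off at the results page": {
        "impact": 5, "alignment": 5, "evidence": 4, "feasibility": 3, "urgency": 2,
    },
    "Advisors can't export session notes": {
        "impact": 2, "alignment": 3, "evidence": 5, "feasibility": 5, "urgency": 4,
    },
}

# Rank highest-scoring first to seed the prioritization review.
ranked = sorted(problems, key=lambda name: score_problem(problems[name]), reverse=True)
for name in ranked:
    print(f"{score_problem(problems[name]):.2f}  {name}")
```

The value is in the disagreements the numbers surface ("why did a low-impact problem outrank a high-impact one?"), not in the numbers themselves — consistent with treating scoring as a structured conversation, not a formula.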
### 3. Solution Exploration
Goal: Generate and evaluate multiple approaches before committing to one.
Activities:
- Story mapping — Map the user's journey through the problem space. Identify the critical path and where intervention has the most leverage.
- Design sprints / sketching sessions — PM and designer (and sometimes a dev) spend focused time exploring 2-3 solution approaches. Keep it rough — whiteboard sketches, not pixel-perfect mockups.
- Feasibility spikes — When a solution approach has technical unknowns, ask a dev to spend a bounded amount of time (half day to a day) exploring whether it's possible and at what cost. This is discovery work, not sprint work.
- Reference research — Use existing patterns, competitor analysis, and domain expertise to inform solution design. Don't reinvent what works elsewhere.
**Key rule:** Explore at least two approaches before converging on one. The first idea is rarely the best idea.
Outputs:
- 2-3 solution concepts with trade-offs articulated
- Feasibility assessment from engineering (can we do this? what would it take?)
- Recommended approach with rationale
Cadence: Driven by the prioritized problem list. The PM and designer should be exploring solutions 1-2 sprints ahead of when those solutions would enter the delivery backlog.
### 4. Prototyping & Validation
Goal: Test the proposed solution with real users before building it.
Prototype types (use the cheapest one that answers your question):
| Type | Effort | Best For |
|---|---|---|
| Paper/whiteboard sketch | Minutes | Testing flow and information architecture |
| Clickable prototype (Figma) | Hours-days | Testing usability, navigation, comprehension |
| Wizard of Oz | Hours | Testing value — human behind the scenes simulating AI behavior |
| Concierge | Days | Testing the full experience with manual fulfillment |
| Live data prototype | Days | Testing with real content/data to validate quality and relevance |
Validation methods:
- Usability testing — Watch 5 users try to complete a task with the prototype. Where do they get stuck? What do they misunderstand?
- Value testing — Does the user care about this? Would they use it? Does it change their behavior?
- Demand testing — For larger features: is there evidence users want this before we build it?
**Key rule:** Test with real users from our target population (WA high school/college students, advisors). Internal team feedback is useful but not sufficient — we are not our users.
Outputs:
- Validation results: what worked, what didn't, what we learned
- Go/no-go/iterate decision
- Refined solution ready for dev-ready story writing
Cadence: Before any significant feature enters the delivery backlog. Small improvements and bug fixes don't need prototyping.
### 5. Story Writing & Handoff
Goal: Translate validated solutions into dev-ready backlog items.
A story is dev-ready when:
- [ ] The problem it solves is clearly stated
- [ ] Acceptance criteria are specific and testable
- [ ] Designs/mockups are attached (if applicable)
- [ ] Edge cases and error states are considered
- [ ] Technical constraints or dependencies are noted
- [ ] It's been validated through discovery (or explicitly flagged as an assumption)
What a good story looks like:
Title: [Concise description of the user-facing change]
Problem: [Why this matters — what problem are we solving and for whom?]
Solution: [What we're building and why this approach]
Acceptance Criteria:
- Given [context], when [action], then [expected result]
- Given [context], when [action], then [expected result]
Design: [Link to Figma or attach mockups]
Technical Notes: [Constraints, dependencies, API changes, data model impacts]
Out of Scope: [What this story explicitly does NOT include]
Validation: [How was this solution validated? Link to test results or note if unvalidated]
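For concreteness, a hypothetical filled-in story follows. Every detail (the feature, the evidence, the field contents) is invented for illustration, not taken from a real backlog:

Title: Show a "why this matched" explanation on each pathway result
Problem: Students don't understand why a pathway appears in their results, which erodes trust in the recommendations. (Hypothetical problem, for illustration only.)
Solution: Add a one-line explanation under each result that ties it back to the interests the student selected.
Acceptance Criteria:
- Given a student has completed the interest survey, when they view their results, then each pathway shows a one-line explanation naming the matching interest.
- Given a pathway has no matching interest data, when the results render, then the explanation line is omitted entirely (no placeholder text).
Design: [Link to Figma]
Technical Notes: Assumes the recommendation response already carries a match-reason field; no data model changes.
Out of Scope: Re-ranking results; editing interests from the results page.
Validation: Clickable prototype tested with 5 students; see the validation log.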
Handoff timing: Stories should be dev-ready by Thursday backlog grooming. The PM presents them, the team asks questions, devs do rough t-shirt sizing. Stories that aren't ready go back for refinement.
## Discovery Cadence
Discovery doesn't follow the sprint calendar, but it does have a rhythm. The PM and designer should always be working 1-2 sprints ahead of delivery.
### Weekly PM Activities
| Activity | Time Investment | Notes |
|---|---|---|
| Stakeholder/user conversations | 2-3 hours/week | At least 1-2 conversations per week. Can be formal interviews or informal check-ins. |
| Data review | 1-2 hours/week | Review dashboards, session reports, and journey analysis. Look for patterns, not just numbers. |
| Solution exploration with design | 2-3 hours/week | Collaborative sessions — sketching, prototyping, reviewing validation results. |
| Feasibility check-ins with engineering | 30-60 min/week | Informal — "is this possible?" "what would this take?" Not a formal meeting. |
| Story refinement | 2-3 hours/week | Writing and refining backlog stories based on discovery work. |
| Prioritization & planning | 1-2 hours/week | Maintaining the opportunity backlog, updating priorities based on new evidence. |
Total: ~10-14 hours/week on discovery activities. This is the PM's primary job — not project management, not status reporting, not writing Jira tickets without context.
### Weekly Designer Activities
| Activity | Time Investment | Notes |
|---|---|---|
| Solution exploration with PM | 2-3 hours/week | Collaborative sketching, design concepts, trade-off discussions. |
| Prototyping | 3-5 hours/week | Building prototypes for validation — Figma, paper, whatever's cheapest. |
| Usability testing | 2-3 hours/week | Running tests with real users, synthesizing results. |
| Design refinement for delivery | 2-3 hours/week | Finalizing designs for stories entering the sprint, answering dev questions. |
| Design system maintenance | 1-2 hours/week | Keeping components, patterns, and documentation current. |
Total: ~10-16 hours/week. The designer splits time between discovery (exploring future work) and delivery (supporting current sprint).
### When Discovery Overlaps with Delivery
Some discovery activities involve engineering:
| Activity | Who | When | How It's Tracked |
|---|---|---|---|
| Feasibility spikes | Dev (assigned by team) | During the sprint | Spike story in the sprint backlog, time-boxed (max 1 day) |
| Technical discovery | Dev + PM | During the sprint | Spike story or part of a discovery ticket |
| Data analysis | Dev/analyst + PM | Ongoing | Not tracked in sprint — it's discovery work |
| Prototype support | Dev (optional) | As needed | Not tracked in sprint unless it's significant effort |
Feasibility spikes are the exception — they live in the sprint backlog because they consume dev capacity. Everything else is PM/design work that runs in parallel.
## Artifacts
### Living Documents (PM Maintains)
| Artifact | Purpose | Format |
|---|---|---|
| Opportunity Backlog | Prioritized list of validated problems worth solving | Spreadsheet, Notion, or Jira epic list |
| Opportunity Solution Trees | Visual map of outcomes to opportunities to solutions | Whiteboard, Miro, or FigJam |
| User/Stakeholder Interview Notes | Raw insights from conversations | Shared doc, organized by theme |
| Validation Log | What we tested, what we learned, what we decided | Shared doc or spreadsheet |
### Per-Feature Artifacts (PM + Designer)
| Artifact | Purpose | When |
|---|---|---|
| Problem brief | 1-pager: who has this problem, evidence, impact, proposed approach | Before solution exploration |
| Prototype | Testable representation of the solution | Before validation |
| Validation results | What we tested, with whom, what we learned | After validation, before story writing |
| Dev-ready stories | Backlog items with acceptance criteria and designs | Before Thursday grooming |
## Role Expectations in Discovery
### Product Manager
- Primary responsibility: Ensure the team is working on the highest-value problems with validated solutions.
- Spend at least 30% of your time talking to users and stakeholders — not just reading data.
- Maintain the opportunity backlog. If someone asks "why are we building this?" you should have a clear, evidence-based answer.
- Don't skip validation because you're confident. Your confidence is an input, not evidence.
- Collaborate with design on solutions — don't hand over requirements and expect mockups back.
- Bring engineering into discovery early for feasibility — don't surprise them with complex solutions at grooming.
### Designer
- Primary responsibility: Ensure solutions are usable and desirable before engineering effort is committed.
- Lead usability testing. You are the team's connection to how users actually experience the product.
- Explore multiple solution concepts before converging. Show the PM (and sometimes the team) trade-offs.
- Stay involved during delivery. Devs will have questions, edge cases will surface, and designs will need to adapt.
- Own the design system. Consistency reduces design and dev effort. Invest in reusable patterns.
- Push back when asked to design solutions for unvalidated problems. "What evidence do we have that this is the right problem?" is always a fair question.
### Engineering (in Discovery)
- Primary responsibility: Inform feasibility and identify technical opportunities the PM/designer might not see.
- Participate in feasibility assessments when asked. "Can we do this?" and "What would this cost?" are the key questions.
- Suggest simpler alternatives. You often see ways to solve the same problem with less engineering effort.
- Flag technical risks early. If a proposed solution has infrastructure, data model, or integration implications, surface them during discovery — not during sprint planning.
- You are not order-takers. If you don't understand why something is being built, ask. If you disagree with the approach, say so.
### Stakeholders (in Discovery)
- Primary responsibility: Provide domain expertise, user context, and organizational constraints.
- Be available for interviews and feedback sessions. The PM needs your perspective to make good prioritization decisions.
- Share problems, not solutions. "Students are struggling to understand financial aid options" is more useful than "build a financial aid calculator."
- Trust the team to find the right solution. You define the problem and the constraints; the team determines the approach.
- Provide feedback on prototypes when asked. Early feedback is cheap; late feedback is expensive.