OrchestKit v7.1.10 — 79 skills, 30 agents, 105 hooks · Claude Code 2.1.69+

Prioritization

RICE, WSJF, ICE, MoSCoW, and opportunity cost scoring for backlog ranking. Use when prioritizing features, comparing initiatives, justifying roadmap decisions, or evaluating trade-offs between competing work items.

Reference medium

Primary Agent: product-strategist

Prioritization Frameworks

Score, rank, and justify backlog decisions using the right framework for the situation.

Decision Tree: Which Framework to Use

Do you have a hard deadline or regulatory pressure?
  YES → WSJF (Cost of Delay drives sequencing)
  NO  → Do you have reach/usage data?
          YES → RICE (data-driven, accounts for user reach)
          NO  → Are you in a time-boxed planning session?
                  YES → ICE (fast, 1-10 scales, no data required)
                  NO  → Is this a scope negotiation with stakeholders?
                          YES → MoSCoW (bucket features, control scope creep)
                          NO  → Value-Effort Matrix (quick 2x2 triage)

| Framework    | Best For                                     | Data Required           | Time to Score |
|--------------|----------------------------------------------|-------------------------|---------------|
| RICE         | Data-rich teams, steady-state prioritization | Analytics, user counts  | 30-60 min     |
| WSJF         | SAFe orgs, time-sensitive or regulated work  | Relative estimates only | 15-30 min     |
| ICE          | Startup speed, early validation, quick triage| None                    | 5-10 min      |
| MoSCoW       | Scope negotiation, release planning          | Stakeholder input       | 1-2 hours     |
| Value-Effort | 2x2 visual, quick team alignment             | None                    | 10-15 min     |

RICE

RICE Score = (Reach × Impact × Confidence) / Effort

| Factor     | Scale                  | Notes                          |
|------------|------------------------|--------------------------------|
| Reach      | Actual users/quarter   | Use analytics; do not estimate |
| Impact     | 0.25 / 0.5 / 1 / 2 / 3 | Minimal → Massive per user     |
| Confidence | 0.3 / 0.5 / 0.8 / 1.0  | Moonshot → Strong data         |
| Effort     | Person-months          | Include design, eng, QA        |
## RICE Scoring: [Feature Name]

| Feature     | Reach  | Impact | Confidence | Effort | Score  |
|-------------|--------|--------|------------|--------|--------|
| Smart search| 50,000 | 2      | 0.8        | 3      | 26,667 |
| CSV export  | 10,000 | 0.5    | 1.0        | 0.5    | 10,000 |
| Dark mode   | 30,000 | 0.25   | 1.0        | 1      |  7,500 |

See rules/prioritize-rice.md for ICE, Kano, and full scale tables.
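
As a sanity check, the example table above can be reproduced in a few lines of Python. This is a minimal sketch, not part of the skill itself; the helper name `rice_score` and the feature tuples are illustrative, with numbers taken from the table.

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# (name, reach, impact, confidence, effort) -- values from the table above
backlog = [
    ("Smart search", 50_000, 2, 0.8, 3),
    ("CSV export", 10_000, 0.5, 1.0, 0.5),
    ("Dark mode", 30_000, 0.25, 1.0, 1),
]

# Rank highest score first, matching the table ordering
ranked = sorted(backlog, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, *factors in ranked:
    print(f"{name}: {rice_score(*factors):,.0f}")
```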


WSJF

WSJF = Cost of Delay / Job Size
Cost of Delay = User Value + Time Criticality + Risk Reduction  (1-21 Fibonacci each)

Higher WSJF = do first. Fibonacci scale (1, 2, 3, 5, 8, 13, 21) forces relative sizing.

## WSJF: GDPR Compliance Update

User Value:       8   (required for EU customers)
Time Criticality: 21  (regulatory deadline this quarter)
Risk Reduction:   13  (avoids significant fines)
Job Size:          8  (medium complexity)

Cost of Delay = 8 + 21 + 13 = 42
WSJF = 42 / 8 = 5.25

See rules/prioritize-wsjf.md for MoSCoW buckets and practical tips. See references/wsjf-guide.md for the full scoring guide.
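
The GDPR example works out the same way in code. A minimal sketch, assuming the 1-21 Fibonacci component scores described above; the function name is illustrative.

```python
def wsjf(user_value, time_criticality, risk_reduction, job_size):
    """WSJF = (User Value + Time Criticality + Risk Reduction) / Job Size."""
    cost_of_delay = user_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

# GDPR compliance update from the example above
score = wsjf(user_value=8, time_criticality=21, risk_reduction=13, job_size=8)
print(score)  # 5.25
```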


ICE

ICE Score = Impact × Confidence × Ease    (all factors 1-10)

No user data required. Score relative to other backlog items. Useful for early-stage products and rapid triage sessions.
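
For a triage session, ICE reduces to a one-line product. A sketch with made-up feature names and scores:

```python
def ice_score(impact, confidence, ease):
    """ICE = Impact x Confidence x Ease, each factor 1-10 relative to the backlog."""
    return impact * confidence * ease

# Hypothetical triage candidates
triage = {
    "Onboarding tooltip": ice_score(impact=6, confidence=8, ease=9),
    "Pricing page test": ice_score(impact=8, confidence=6, ease=7),
}
```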


MoSCoW

Bucket features before estimation. Must-Haves alone should ship a viable product.

## Release 1.0 MoSCoW

### Must Have (~60% of effort)
- [ ] User authentication
- [ ] Core CRUD operations

### Should Have (~20%)
- [ ] Search, export, notifications

### Could Have (~20%)
- [ ] Dark mode, keyboard shortcuts

### Won't Have (documented out-of-scope)
- Mobile app (Release 2.0)
- AI features (Release 2.0)

Opportunity Cost & Trade-Off Analysis

When two items compete for the same team capacity, quantify what delaying each item costs per month.

## Trade-Off: AI Search vs Platform Migration (Q2 eng team)

### Option A: AI Search
- Cost of Delay: $25K/month (competitive risk)
- RICE Score: 18,000
- Effort: 6 weeks

### Option B: Platform Migration
- Cost of Delay: $5K/month (tech debt interest)
- RICE Score: 4,000
- Effort: 8 weeks

### Recommendation
Human decides. Key factors:
1. Q2 OKR: Increase trial-to-paid conversion (favors AI Search)
2. Engineering capacity: Only one team, sequential not parallel
3. Customer commitment: No contractual deadline for either

See rules/prioritize-opportunity-cost.md for the Value-Effort Matrix and full trade-off template. See references/rice-scoring-guide.md for detailed RICE calibration.


Common Pitfalls

| Pitfall                                              | Mitigation                                    |
|------------------------------------------------------|-----------------------------------------------|
| Gaming scores to justify pre-decided work            | Calibrate as a team; document assumptions     |
| Mixing frameworks in one table                       | Pick one framework per planning session       |
| Only tracking high-RICE items; ignoring cost of delay| Combine RICE with explicit delay cost analysis|
| MoSCoW Must-Have bloat (>70% of scope)               | Must-Haves alone must ship a viable product   |
| Comparing RICE scores across different goals         | Only compare within the same objective        |

  • product-frameworks — Full PM toolkit (value prop, market sizing, competitive analysis, user research, business case)
  • prd — Convert prioritized features into product requirements documents
  • product-analytics — Define and instrument the metrics that feed RICE reach/impact scores
  • okr-design — Set the objectives that determine which KPIs drive RICE impact scoring
  • market-sizing — TAM/SAM/SOM analysis that informs strategic priority
  • competitive-analysis — Competitor context that raises or lowers WSJF time criticality scores

Version: 1.0.0


Rules (3)

Evaluate opportunity cost with value-effort matrices, cost of delay analysis, and trade-off flagging — HIGH

Opportunity Cost & Trade-Off Analysis

Patterns for making prioritization decisions that account for what you give up, not just what you gain. Complements RICE and WSJF scoring with opportunity cost reasoning.

Value-Effort Matrix

A 2x2 matrix for rapid feature sequencing based on expected value and required effort.

               HIGH VALUE
                   |
    Do Next        |     Do First
    (High value,   |     (High value,
     high effort)  |      low effort)
                   |
  -----------------+-----------------
                   |
    Consider       |     Quick Win
    (Low value,    |     (Low value,
     high effort)  |      low effort)
                   |
               LOW VALUE
  HIGH EFFORT                LOW EFFORT

| Quadrant  | Action                        | Example                        |
|-----------|-------------------------------|--------------------------------|
| Do First  | Ship immediately -- high ROI  | Fix broken onboarding step     |
| Quick Win | Batch into next sprint        | Add CSV export button          |
| Do Next   | Plan and resource properly    | Platform migration             |
| Consider  | Challenge whether to do at all| Redesign rarely-used admin page|

Scoring for Placement

  • Value (1-10): Combine user impact, strategic alignment, revenue potential
  • Effort (1-10): Engineering weeks, cross-team coordination, risk
  • Threshold: Value >= 6 is "high value", Effort >= 6 is "high effort"
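
The placement rule above is mechanical enough to sketch directly. A minimal illustration, assuming the >= 6 thresholds; the function name is illustrative.

```python
def quadrant(value, effort, threshold=6):
    """Place an item on the 2x2 using the value/effort thresholds above."""
    if value >= threshold:
        return "Do Next" if effort >= threshold else "Do First"
    return "Consider" if effort >= threshold else "Quick Win"

print(quadrant(value=9, effort=2))  # Do First
```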

Cost of Delay Analysis

Quantify what it costs to NOT do something each time period it is delayed.

## Cost of Delay: [Feature Name]

### Revenue Impact
- Lost revenue per month of delay: $X
- Source: [Pipeline data, churn analysis, competitive loss]

### User Impact
- Users affected: N
- Workaround cost per user per month: X hours

### Strategic Impact
- Competitive window closes in: N months
- Regulatory deadline: [date or N/A]

### Total Cost of Delay
$X/month (quantified) + [qualitative strategic cost]

Delay Cost Categories

| Type          | How to Estimate                            | Example                      |
|---------------|--------------------------------------------|------------------------------|
| Revenue delay | Pipeline deals blocked by missing feature  | $50K/month in stalled deals  |
| Churn risk    | Customers citing this in exit surveys      | 3 enterprise accounts at risk|
| Competitive   | Competitor ships first, window shrinks     | Market share loss            |
| Compliance    | Fines or market access loss after deadline | GDPR: $20M max fine          |
| Compounding   | Delay makes future work harder             | Tech debt interest           |

Trade-Off Flagging Template

When two options compete for the same resources, surface the trade-off explicitly for human decision-makers. Do not make the call -- present the data.

## Trade-Off: [Decision Title]

### Context
[Why this trade-off exists -- shared resources, timeline conflict, etc.]

### Option A: [Name]
- **Pros:** [List 2-3 concrete benefits with data]
- **Cons:** [List 2-3 concrete downsides with data]
- **RICE Score:** [If available]
- **Cost of Delay:** [$/month]

### Option B: [Name]
- **Pros:** [List 2-3 concrete benefits with data]
- **Cons:** [List 2-3 concrete downsides with data]
- **RICE Score:** [If available]
- **Cost of Delay:** [$/month]

### Recommendation
Human decides. Key factors to weigh:
1. [Factor 1 -- e.g., Q2 OKR alignment]
2. [Factor 2 -- e.g., team capacity next sprint]
3. [Factor 3 -- e.g., customer commitment]

Sequencing Principles

When features have dependencies or shared resources, use these sequencing rules:

  1. Highest cost-of-delay first -- unless blocked by dependencies
  2. Unblock others early -- a low-value enabler that unblocks 3 high-value items ships first
  3. Reduce risk early -- unknowns first, known work later (fail fast)
  4. Batch small items -- group Quick Wins into a single sprint to clear the backlog
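
Principles 1 and 2 can be combined in a greedy ordering: at each step, ship the unblocked item with the highest cost of delay, so a cheap enabler naturally surfaces before the high-value work it unblocks. A minimal sketch; the item names, cost-of-delay numbers, and `cod`/`blocked_by` keys are illustrative.

```python
def sequence(items):
    """Greedy order: highest cost of delay first, respecting dependencies."""
    done, order = set(), []
    while len(order) < len(items):
        ready = [i for i in items
                 if i["name"] not in done
                 and all(dep in done for dep in i.get("blocked_by", []))]
        nxt = max(ready, key=lambda i: i["cod"])  # cod = cost of delay, $/month
        order.append(nxt["name"])
        done.add(nxt["name"])
    return order

backlog = [
    {"name": "CSV import", "cod": 30_000},
    {"name": "AI search", "cod": 25_000, "blocked_by": ["Vector index"]},
    {"name": "Vector index", "cod": 1_000},
    {"name": "Dashboard redesign", "cod": 0},
]
print(sequence(backlog))
```

Note how "Vector index" jumps ahead of everything except "CSV import" despite its low cost of delay, because it unblocks "AI search".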

Incorrect -- prioritizing by gut feel:

Priority list:
1. AI search (CEO wants it)
2. Dashboard redesign (designer is excited)
3. CSV import (seems easy)

Correct -- opportunity cost matrix with explicit trade-offs:

Priority list (by cost of delay):
1. CSV import -- $30K/month blocked deals, 1 week effort (Do First)
2. AI search -- $25K/month competitive risk, 6 week effort (Do Next)
3. Dashboard redesign -- $0 cost of delay, nice-to-have (Consider)

Trade-off flagged: AI search vs platform migration for same
eng team in Q2. See trade-off analysis doc for decision.

Prioritize features with RICE and ICE scoring using Reach, Impact, Confidence, and Effort — HIGH

RICE & ICE Prioritization

RICE Framework

Developed by Intercom for data-driven feature comparison.

Formula

RICE Score = (Reach x Impact x Confidence) / Effort

Factors

| Factor     | Definition                           | Scale                            |
|------------|--------------------------------------|----------------------------------|
| Reach      | Users/customers affected per quarter | Actual number or 1-10 normalized |
| Impact     | Effect on individual user            | 0.25 (minimal) to 3 (massive)    |
| Confidence | How sure are you?                    | 0.3 (moonshot) to 1.0 (high)     |
| Effort     | Person-months required               | Actual estimate                  |

Impact Scale

| Score | Level   | Description             |
|-------|---------|-------------------------|
| 3     | Massive | Fundamental improvement |
| 2     | High    | Significant improvement |
| 1     | Medium  | Noticeable improvement  |
| 0.5   | Low     | Minor improvement       |
| 0.25  | Minimal | Barely noticeable       |

Confidence Scale

| Score | Level    | Evidence                        |
|-------|----------|---------------------------------|
| 1.0   | High     | Strong data, validated          |
| 0.8   | Medium   | Some data, reasonable assumptions|
| 0.5   | Low      | Gut feeling, little data        |
| 0.3   | Moonshot | Speculative, new territory      |

Example Calculation

Feature: Smart search with AI suggestions

Reach: 50,000 users/quarter (active searchers)
Impact: 2 (high - significantly better results)
Confidence: 0.8 (tested in prototype)
Effort: 3 person-months

RICE = (50,000 x 2 x 0.8) / 3 = 26,667

RICE Scoring Template

| Feature   | Reach  | Impact | Confidence | Effort | RICE Score |
|-----------|--------|--------|------------|--------|------------|
| Feature A | 10,000 | 2      | 0.8        | 2      | 8,000      |
| Feature B | 50,000 | 1      | 1.0        | 4      | 12,500     |
| Feature C | 5,000  | 3      | 0.5        | 1      | 7,500      |

ICE Framework

Simpler than RICE, ideal for fast prioritization.

ICE Score = Impact x Confidence x Ease

All factors on 1-10 scale.

ICE vs RICE

| Aspect              | RICE              | ICE                |
|---------------------|-------------------|--------------------|
| Complexity          | More detailed     | Simpler            |
| Reach consideration | Explicit          | Implicit in Impact |
| Effort              | Person-months     | 1-10 Ease scale    |
| Best for            | Data-driven teams | Fast decisions     |

Kano Model

Categorize features by customer satisfaction impact.

| Type        | Absent       | Present      | Example           |
|-------------|--------------|--------------|-------------------|
| Must-Be     | Dissatisfied | Neutral      | Login works       |
| Performance | Dissatisfied | Satisfied    | Fast load times   |
| Delighters  | Neutral      | Delighted    | AI suggestions    |
| Indifferent | Neutral      | Neutral      | About page design |
| Reverse     | Satisfied    | Dissatisfied | Forced tutorials  |

Framework Selection Guide

| Situation                    | Recommended Framework |
|------------------------------|-----------------------|
| Data-driven team with metrics| RICE                  |
| Fast startup decisions       | ICE                   |
| SAFe/Agile enterprise        | WSJF                  |
| Fixed scope negotiation      | MoSCoW                |
| Customer satisfaction focus  | Kano                  |

Common Pitfalls

| Pitfall                     | Mitigation                             |
|-----------------------------|----------------------------------------|
| Gaming the scores           | Calibrate as a team regularly          |
| Ignoring qualitative factors| Use frameworks as input, not gospel    |
| Analysis paralysis          | Set time limits on scoring sessions    |
| Inconsistent scales         | Document and share scoring guidelines  |

Incorrect — RICE without documented assumptions:

Feature A: RICE = 8,000
Feature B: RICE = 12,500
Priority: B, then A

Correct — RICE with transparent scoring:

Feature B: Smart search with AI
- Reach: 50,000 users/quarter (active searchers)
- Impact: 2 (high - significantly better results)
- Confidence: 0.8 (tested in prototype)
- Effort: 3 person-months
RICE = (50,000 × 2 × 0.8) / 3 = 26,667

Prioritize backlogs with WSJF Cost of Delay and MoSCoW scope management — HIGH

WSJF & MoSCoW Prioritization

WSJF (Weighted Shortest Job First)

SAFe framework optimizing for economic value delivery.

Formula

WSJF = Cost of Delay / Job Size

Higher WSJF = Higher priority (do first)

Cost of Delay Components

Cost of Delay = User Value + Time Criticality + Risk Reduction

| Component        | Question                                      | Scale            |
|------------------|-----------------------------------------------|------------------|
| User Value       | How much do users/business want this?         | 1-21 (Fibonacci) |
| Time Criticality | Does value decay over time?                   | 1-21             |
| Risk Reduction   | Does this reduce risk or enable opportunities?| 1-21             |
| Job Size         | Relative effort compared to other items       | 1-21             |

Time Criticality Guidelines

| Score | Situation                                      |
|-------|------------------------------------------------|
| 21    | Must ship this quarter or lose the opportunity |
| 13    | Competitor pressure, 6-month window            |
| 8     | Customer requested, flexible timeline          |
| 3     | Nice to have, no deadline                      |
| 1     | Can wait indefinitely                          |

Example

Feature: GDPR compliance update

User Value: 8 (required for EU customers)
Time Criticality: 21 (regulatory deadline)
Risk Reduction: 13 (avoids fines)
Job Size: 8 (medium complexity)

Cost of Delay = 8 + 21 + 13 = 42
WSJF = 42 / 8 = 5.25

WSJF vs RICE

| Use WSJF When         | Use RICE When              |
|-----------------------|----------------------------|
| Time matters          | Value matters              |
| Deadlines exist       | Steady-state prioritization|
| Dependencies complex  | Independent features       |
| Opportunity cost high | User reach important       |

MoSCoW Method

Qualitative prioritization for scope management.

Categories

| Priority    | Meaning                       | Guideline      |
|-------------|-------------------------------|----------------|
| Must Have   | Non-negotiable for release    | ~60% of effort |
| Should Have | Important but not critical    | ~20% of effort |
| Could Have  | Nice to have if time permits  | ~20% of effort |
| Won't Have  | Explicitly out of scope       | Documented     |

Application Rules

  1. Must Have items alone should deliver a viable product
  2. Should Have items make product competitive
  3. Could Have items delight users
  4. Won't Have prevents scope creep
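
The ~60% guideline from the Categories table can be checked mechanically before a planning session ends. A minimal sketch; the bucket names and effort numbers (person-weeks) are illustrative.

```python
def must_have_share(effort_by_bucket):
    """Fraction of total effort in the Must bucket; flag bloat above ~60%."""
    return effort_by_bucket["Must"] / sum(effort_by_bucket.values())

share = must_have_share({"Must": 6, "Should": 2, "Could": 2})
print(f"{share:.0%}")  # 60%
```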

Template

## Release 1.0 MoSCoW

### Must Have (M)
- [ ] User authentication
- [ ] Core data model
- [ ] Basic CRUD operations

### Should Have (S)
- [ ] Search functionality
- [ ] Export to CSV
- [ ] Email notifications

### Could Have (C)
- [ ] Dark mode
- [ ] Keyboard shortcuts
- [ ] Custom themes

### Won't Have (W)
- Mobile app (Release 2.0)
- AI recommendations (Release 2.0)
- Multi-language support (Release 3.0)

Practical Tips

  1. Calibrate together: Score several items as a team to align understanding
  2. Revisit regularly: Priorities shift -- rescore quarterly
  3. Document assumptions: Why did you give that Impact score?
  4. Combine frameworks: Use ICE for quick triage, RICE for final decisions

Incorrect — MoSCoW without viable Must-Have set:

Must Have:
- User auth, CRUD, search, export, AI features,
  mobile app, analytics, notifications (90% of scope)

[Product not viable with just Must-Have items]

Correct — Must-Have delivers viable product:

Must Have (60% of effort):
- User authentication
- Core data model
- Basic CRUD operations

Should Have (20%):
- Search, export, notifications

Could Have (20%):
- Dark mode, keyboard shortcuts

References (2)

Rice Scoring Guide

RICE Scoring Guide

Comprehensive guide for using RICE prioritization effectively.

RICE Formula

RICE Score = (Reach × Impact × Confidence) / Effort

Reach Scoring

Estimate how many users/customers will be affected per quarter.

| Score | % of Users | Description   |
|-------|------------|---------------|
| 10    | 100%       | All users     |
| 8     | 80%        | Most users    |
| 5     | 50%        | Half of users |
| 3     | 30%        | Some users    |
| 1     | 10%        | Few users     |

Calculating Reach

Reach = (Users affected) / (Total users) × 10

Example:
- Total MAU: 10,000
- Users who use search: 8,000
- Reach for search improvement: 8,000/10,000 × 10 = 8
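
The same normalization in code, as a one-line sketch (the function name is illustrative):

```python
def normalized_reach(users_affected, total_users):
    """Reach on the 1-10 scale: share of users affected, scaled by 10."""
    return users_affected / total_users * 10

print(normalized_reach(8_000, 10_000))  # 8.0
```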

Impact Scoring

How much will this move the needle on your goal?

| Score | Impact Level | Description            |
|-------|--------------|------------------------|
| 3.0   | Massive      | 3x or more improvement |
| 2.0   | High         | 2x improvement         |
| 1.0   | Medium       | Notable improvement    |
| 0.5   | Low          | Minor improvement      |
| 0.25  | Minimal      | Barely noticeable      |

Impact Assessment Questions

  1. What metric does this affect?
  2. By how much will it change?
  3. What's the baseline?
  4. What's the target?

Confidence Scoring

How certain are you about Reach and Impact estimates?

| Score | Confidence | Evidence Level                            |
|-------|------------|-------------------------------------------|
| 1.0   | High       | Data-backed (analytics, A/B tests)        |
| 0.8   | Medium     | Some validation (user interviews, surveys)|
| 0.5   | Low        | Gut feel (experienced intuition)          |
| 0.3   | Moonshot   | Speculative (new territory)               |

Confidence Calibration

  • Used similar feature before? → +0.2
  • Have user research? → +0.2
  • Have analytics data? → +0.2
  • New domain/technology? → -0.2
  • Many unknowns? → -0.2
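
The adjustments above can be applied as a simple calibration function, clamped to the 0.3-1.0 RICE confidence range. A sketch; the keyword names are illustrative.

```python
def calibrated_confidence(base=0.5, *, similar_feature=False, user_research=False,
                          analytics=False, new_domain=False, many_unknowns=False):
    """Start from a base score and apply the +/-0.2 adjustments above."""
    score = base + 0.2 * (similar_feature + user_research + analytics)
    score -= 0.2 * (new_domain + many_unknowns)
    return min(1.0, max(0.3, score))  # clamp to the RICE confidence range
```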

Effort Scoring

Person-weeks of work to ship (design, development, testing).

| Score | Effort  | Timeline |
|-------|---------|----------|
| 0.5   | Trivial | < 1 week |
| 1     | Small   | 1 week   |
| 2     | Medium  | 2 weeks  |
| 4     | Large   | 1 month  |
| 8     | XL      | 2 months |
| 16    | XXL     | Quarter  |

Effort Estimation Tips

  • Include all disciplines (design, eng, QA)
  • Add buffer for unknowns (1.2-1.5x)
  • Consider dependencies
  • Account for coordination overhead

Example Scoring

## Feature: Advanced Search Filters

### Reach: 8
- 80% of users use search at least once/week
- Source: Analytics dashboard

### Impact: 2.0
- Support tickets about search: 40/week
- Expected reduction: 50%
- Secondary: +10% search completion rate

### Confidence: 0.8
- Have user interview data (5 users)
- Similar feature at competitor successful
- No A/B test yet

### Effort: 2
- Design: 0.5 weeks
- Backend: 1 week
- Frontend: 0.5 weeks

### RICE Score
(8 × 2.0 × 0.8) / 2 = 6.4

Common Mistakes

| Mistake                | Solution                       |
|------------------------|--------------------------------|
| Overestimating reach   | Use actual data, not hopes     |
| Impact without baseline| Define current state first     |
| 100% confidence        | Nothing is certain             |
| Underestimating effort | Include all work, add buffer   |
| Comparing across goals | Only compare within same goal  |

When NOT to Use RICE

  • Mandatory compliance/security work
  • Technical debt paydown
  • Infrastructure investments
  • Strategic bets with long payoff

Wsjf Guide

WSJF (Weighted Shortest Job First) Guide

Framework for prioritizing when time-to-market matters.

WSJF Formula

WSJF = Cost of Delay / Job Size

Higher WSJF = Higher priority (do first)

Cost of Delay Components

Cost of Delay = User Value + Time Criticality + Risk Reduction

User Value (1-10)

How much do users need this?

| Score | Description                         |
|-------|-------------------------------------|
| 10    | Critical - users leaving without it |
| 7-9   | High - major pain point             |
| 4-6   | Medium - nice improvement           |
| 1-3   | Low - minor enhancement             |

Time Criticality (1-10)

How urgent is the timing?

| Score | Description                       |
|-------|-----------------------------------|
| 10    | Hard deadline (regulatory, event) |
| 7-9   | Competitive window closing        |
| 4-6   | Sooner better, but flexible       |
| 1-3   | No time pressure                  |

Risk Reduction (1-10)

Does delay increase risk?

| Score | Description                                |
|-------|--------------------------------------------|
| 10    | Major risk if delayed (security, stability)|
| 7-9   | Significant risk accumulation              |
| 4-6   | Moderate risk growth                       |
| 1-3   | Risk doesn't change with time              |

Job Size (1-10)

Relative size compared to other work.

| Score | Description    |
|-------|----------------|
| 1-2   | XS - days      |
| 3-4   | S - 1-2 weeks  |
| 5-6   | M - 2-4 weeks  |
| 7-8   | L - 1-2 months |
| 9-10  | XL - quarter+  |

Example Calculation

## Feature: Security Patch for CVE

### User Value: 6
- Affects enterprise customers
- Not user-facing but required for compliance

### Time Criticality: 9
- CVE published, 90-day disclosure window
- Competitors already patched

### Risk Reduction: 10
- Active exploitation in the wild
- Potential data breach

### Cost of Delay: 6 + 9 + 10 = 25

### Job Size: 3
- Known fix, straightforward implementation
- ~1 week of work

### WSJF: 25 / 3 = 8.33

When to Use WSJF

  • Multiple time-sensitive items competing
  • Opportunity windows exist
  • Dependencies create bottlenecks
  • Need to justify "why now"

WSJF vs RICE

| Use WSJF When         | Use RICE When              |
|-----------------------|----------------------------|
| Time matters          | Value matters              |
| Deadlines exist       | Steady-state prioritization|
| Dependencies complex  | Independent features       |
| Opportunity cost high | User reach important       |

Visualization

              HIGH Time Criticality
                      |
           ┌──────────┬──────────┐
           │    DO    │    DO    │
           │  FIRST   │  SECOND  │
HIGH ──────┼──────────┼──────────┼────── LOW
User Value │    DO    │    DO    │ User Value
           │  THIRD   │   LAST   │
           └──────────┴──────────┘
                      |
              LOW Time Criticality