Review PR
PR review with parallel specialized agents. Use when reviewing pull requests or code.
Review PR
Deep code review using 6-7 parallel specialized agents.
Quick Start
/ork:review-pr 123
/ork:review-pr feature-branch

Opus 4.6: Parallel agents use native adaptive thinking for deeper analysis. Complexity-aware routing matches agent model to review difficulty.
Argument Resolution
The PR number or branch is passed as the skill argument. Resolve it immediately:
PR_NUMBER="$ARGUMENTS"   # e.g., "123" or "feature-branch"
# If no argument provided, fall back to the PR URL in the environment
if [ -z "$PR_NUMBER" ]; then
  PR_NUMBER="${ORCHESTKIT_PR_URL##*/}"
fi
# If still empty, detect from the current branch
if [ -z "$PR_NUMBER" ]; then
  PR_NUMBER="$(gh pr view --json number -q .number 2>/dev/null)"
fi

Use PR_NUMBER consistently in all subsequent commands and agent prompts.
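The same fallback chain can be sketched in Python for testing the precedence logic (a hypothetical helper; `ORCHESTKIT_PR_URL` and the `gh pr view` lookup are the sources documented above):

```python
import os

def resolve_pr_number(argument, env=None, branch_lookup=lambda: None):
    """Resolve PR number/branch: argument first, then env URL, then current branch."""
    env = env if env is not None else os.environ
    if argument:
        return argument
    url = env.get("ORCHESTKIT_PR_URL", "")
    if url:
        # The last path segment of the PR URL is the PR number
        return url.rstrip("/").split("/")[-1]
    # Final fallback, e.g. `gh pr view --json number -q .number`
    return branch_lookup()
```

For example, `resolve_pr_number("", {"ORCHESTKIT_PR_URL": "https://github.com/org/repo/pull/123"})` yields `"123"`.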
STEP 0: Verify User Intent with AskUserQuestion
BEFORE creating tasks, clarify review focus:
AskUserQuestion(
questions=[{
"question": "What type of review do you need?",
"header": "Focus",
"options": [
{"label": "Full review (Recommended)", "description": "Security + code quality + tests + architecture"},
{"label": "Security focus", "description": "Prioritize security vulnerabilities"},
{"label": "Performance focus", "description": "Focus on performance implications"},
{"label": "Quick review", "description": "High-level review, skip deep analysis"}
],
"multiSelect": false
}]
)

Based on the answer, adjust the workflow:
- Full review: All 6-7 parallel agents
- Security focus: Prioritize security-auditor, reduce other agents
- Performance focus: Add frontend-performance-engineer agent
- Quick review: Single code-quality-reviewer agent only
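The answer-to-workflow mapping above can be sketched as a small lookup (a hypothetical helper; agent names follow this skill's own agent tables):

```python
# Hypothetical sketch: map the chosen review focus to an agent plan.
REVIEW_PLANS = {
    "Full review": ["code-quality-reviewer", "code-quality-reviewer",
                    "security-auditor", "test-generator",
                    "backend-system-architect", "frontend-ui-developer"],
    "Security focus": ["security-auditor", "code-quality-reviewer"],
    "Performance focus": ["code-quality-reviewer", "security-auditor",
                          "test-generator", "frontend-performance-engineer"],
    "Quick review": ["code-quality-reviewer"],
}

def agents_for(focus):
    """Fall back to the full plan when the focus is unrecognized."""
    return REVIEW_PLANS.get(focus, REVIEW_PLANS["Full review"])
```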
STEP 0b: Select Orchestration Mode
See Orchestration Mode Selection
CRITICAL: Task Management is MANDATORY (CC 2.1.16)
BEFORE doing ANYTHING else, create tasks to track progress:
# 1. Create main review task IMMEDIATELY
TaskCreate(
subject="Review PR #{number}",
description="Comprehensive code review with parallel agents",
activeForm="Reviewing PR #{number}"
)
# 2. Create subtasks for each phase
TaskCreate(subject="Gather PR information", activeForm="Gathering PR information")
TaskCreate(subject="Launch review agents", activeForm="Dispatching review agents")
TaskCreate(subject="Run validation checks", activeForm="Running validation checks")
TaskCreate(subject="Synthesize review", activeForm="Synthesizing review")
TaskCreate(subject="Submit review", activeForm="Submitting review")
# 3. Update status as you progress
TaskUpdate(taskId="2", status="in_progress") # When starting
TaskUpdate(taskId="2", status="completed") # When done

Phase 1: Gather PR Information
# Get PR details
gh pr view $PR_NUMBER --json title,body,files,additions,deletions,commits,author
# View the diff
gh pr diff $PR_NUMBER
# Check CI status
gh pr checks $PR_NUMBER

Capture Scope for Agents
# Capture changed files for agent scope injection
CHANGED_FILES=$(gh pr diff $PR_NUMBER --name-only)
# Detect affected domains
HAS_FRONTEND=$(echo "$CHANGED_FILES" | grep -qE '\.(tsx?|jsx?|css|scss)$' && echo true || echo false)
HAS_BACKEND=$(echo "$CHANGED_FILES" | grep -qE '\.(py|go|rs|java)$' && echo true || echo false)
HAS_AI=$(echo "$CHANGED_FILES" | grep -qE '(llm|ai|agent|prompt|embedding)' && echo true || echo false)

Pass CHANGED_FILES to every agent prompt in Phase 3. Use the domain flags to select which agents to spawn.
Identify: total files changed, lines added/removed, affected domains (frontend, backend, AI).
Tool Guidance
| Task | Use | Avoid |
|---|---|---|
| Fetch PR diff | Bash: gh pr diff | Reading all changed files individually |
| List changed files | Bash: gh pr diff --name-only | bash find |
| Search for patterns | Grep(pattern="...", path="src/") | bash grep |
| Read file content | Read(file_path="...") | bash cat |
| Check CI status | Bash: gh pr checks | Polling APIs |
<use_parallel_tool_calls> When gathering PR context, run independent operations in parallel:
gh pr view (PR metadata), gh pr diff (changed files), gh pr checks (CI status)
Spawn all three in ONE message. This cuts context-gathering time by 60%. For agent-based review (Phase 3), all 6 agents are independent -- launch them together. </use_parallel_tool_calls>
Phase 2: Skills Auto-Loading (CC 2.1.6)
CC 2.1.6 auto-discovers skills -- no manual loading needed!
Relevant skills activated automatically:
- code-review-playbook -- Review patterns, conventional comments
- security-scanning -- OWASP, secrets, dependencies
- type-safety-validation -- Zod, TypeScript strict
- testing-patterns -- Test adequacy, coverage gaps, rule matching
Phase 3: Parallel Code Review (6 Agents)
Domain-Aware Agent Selection
Only spawn agents relevant to the PR's changed domains:
| Domain Detected | Agents to Spawn |
|---|---|
| Backend only | code-quality (x2), security-auditor, test-generator, backend-system-architect |
| Frontend only | code-quality (x2), security-auditor, test-generator, frontend-ui-developer |
| Full-stack | All 6 agents |
| AI/LLM code | All 6 + optional llm-integrator (7th) |
Skip agents for domains not present in the diff. This saves ~33% tokens on domain-specific PRs.
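The selection table above can be expressed as a function (a sketch; the flag names match the HAS_* variables captured in Phase 1):

```python
def select_agents(has_backend, has_frontend, has_ai):
    """Core reviewers always run; domain reviewers are added per detected domain."""
    agents = ["code-quality-reviewer", "code-quality-reviewer",  # readability + type safety
              "security-auditor", "test-generator"]
    if has_backend:
        agents.append("backend-system-architect")
    if has_frontend:
        agents.append("frontend-ui-developer")
    if has_ai:
        agents.append("llm-integrator")  # optional 7th agent
    return agents
```

A full-stack PR with no AI code yields the 6-agent plan; an AI/LLM PR adds the 7th.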
See Agent Prompts -- Task Tool Mode for the 6 parallel agent prompts.
See Agent Prompts -- Agent Teams Mode for the mesh alternative.
See AI Code Review Agent for the optional 7th LLM agent.
Phase 4: Run Validation
Phase 5: Synthesize Review
Combine all agent feedback into a structured report. See Review Report Template
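Since each agent ends with a `RESULT:` summary line, synthesis can start by parsing those lines into an overall verdict (a sketch, assuming the summary formats defined in the agent prompts):

```python
import re

def parse_result(summary_line):
    """Extract the status token from a 'RESULT: STATUS - details' line."""
    m = re.search(r"RESULT:\s*\[?([A-Z]+)\]?", summary_line)
    return m.group(1) if m else None

def overall_verdict(statuses):
    """BLOCK/FAIL/MISSING anywhere blocks the PR; WARN/GAPS request changes."""
    if any(s in {"BLOCK", "FAIL", "MISSING"} for s in statuses):
        return "request-changes"
    if any(s in {"WARN", "GAPS"} for s in statuses):
        return "comment"
    return "approve"
```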
Phase 6: Submit Review
# Approve
gh pr review $PR_NUMBER --approve -b "Review message"
# Request changes
gh pr review $PR_NUMBER --request-changes -b "Review message"

CC 2.1.20 Enhancements
PR Status Enrichment
The pr-status-enricher hook automatically detects open PRs at session start and sets:
- ORCHESTKIT_PR_URL -- PR URL for quick reference
- ORCHESTKIT_PR_STATE -- PR state (OPEN, MERGED, CLOSED)
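One way to use these variables is as a guard before starting a review (a hypothetical helper; only the env var names come from the hook documentation above):

```python
import os

def should_review(env=None):
    """Only proceed when the enriched PR state is OPEN (or not set)."""
    env = env if env is not None else os.environ
    state = env.get("ORCHESTKIT_PR_STATE", "")
    # MERGED and CLOSED PRs do not need a fresh review
    return state in ("", "OPEN")
```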
Session Resume with PR Context (CC 2.1.27+)
Sessions are automatically linked when reviewing PRs. Resume later with full context:
claude --from-pr 123
claude --from-pr https://github.com/org/repo/pull/123

Task Metrics (CC 2.1.30)
See Task Metrics Template.
Conventional Comments
Use these prefixes for comments:
- praise: -- Positive feedback
- nitpick: -- Minor suggestion
- suggestion: -- Improvement idea
- issue: -- Must fix
- question: -- Needs clarification
Related Skills
- ork:commit: Create commits after review
- ork:create-pr: Create PRs for review
- slack-integration: Team notifications for review events
References
- Review Template
- Review Report Template
- Orchestration Mode Selection
- Validation Commands
- Task Metrics Template
- Agent Prompts -- Task Tool
- Agent Prompts -- Agent Teams
- AI Code Review Agent
Rules (3)
Agent Prompts — Agent Teams Mode — HIGH
In Agent Teams mode, form a review team where reviewers cross-reference findings directly:
# DOMAIN-AWARE AGENT SELECTION
# Core agents (always spawn): quality-reviewer, security-reviewer, test-reviewer
# Conditional: backend-reviewer (if HAS_BACKEND), frontend-reviewer (if HAS_FRONTEND)
# Capture scope from Phase 1
CHANGED_FILES = "$(gh pr diff $PR_NUMBER --name-only)"
TeamCreate(team_name="review-pr-$PR_NUMBER", description="Review PR #$PR_NUMBER")
Task(subagent_type="code-quality-reviewer", name="quality-reviewer",
team_name="review-pr-$PR_NUMBER",
prompt="""Review code quality and type safety for PR #$PR_NUMBER.
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files.
When you find patterns that overlap with security concerns,
message security-reviewer with the finding.
When you find test gaps, message test-reviewer.""")
Task(subagent_type="security-auditor", name="security-reviewer",
team_name="review-pr-$PR_NUMBER",
prompt="""Security audit for PR #$PR_NUMBER.
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files.
Cross-reference with quality-reviewer for injection risks in code patterns.
When you find issues, message the responsible reviewer (backend-reviewer
for API issues, frontend-reviewer for XSS).""")
Task(subagent_type="test-generator", name="test-reviewer",
team_name="review-pr-$PR_NUMBER",
prompt="""Review TEST ADEQUACY for PR #$PR_NUMBER.
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files.
1. Check: Does the PR add/modify code WITHOUT adding tests? Flag as MISSING.
2. Match change types to required test types (testing-patterns rules):
- API → integration-api, verification-contract
- DB → integration-database, data-seeding-cleanup
- UI → unit-aaa-pattern, a11y-testing
- Logic → verification-techniques
3. Evaluate test quality: meaningful assertions, no flaky patterns.
4. When quality-reviewer flags test gaps, verify and suggest specific tests.
Message backend-reviewer or frontend-reviewer with test requirements.
End with: RESULT: [ADEQUATE|GAPS|MISSING] - summary""")
# Only spawn if backend files detected (HAS_BACKEND)
Task(subagent_type="backend-system-architect", name="backend-reviewer",
team_name="review-pr-$PR_NUMBER",
prompt="""Review backend code for PR #$PR_NUMBER.
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files.
When security-reviewer flags API issues, validate and suggest fixes.
Share API pattern findings with frontend-reviewer for consistency.""")
# Only spawn if frontend files detected (HAS_FRONTEND)
Task(subagent_type="frontend-ui-developer", name="frontend-reviewer",
team_name="review-pr-$PR_NUMBER",
prompt="""Review frontend code for PR #$PR_NUMBER.
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files.
When backend-reviewer shares API patterns, verify frontend matches.
When security-reviewer flags XSS risks, validate and suggest fixes.""")

Team teardown after synthesis (only shut down agents that were actually spawned):
# After collecting all findings and producing the review
# Core agents — always shut down
SendMessage(type="shutdown_request", recipient="quality-reviewer", content="Review complete")
SendMessage(type="shutdown_request", recipient="security-reviewer", content="Review complete")
SendMessage(type="shutdown_request", recipient="test-reviewer", content="Review complete")
# Conditional agents — only shut down if spawned
# if HAS_BACKEND:
SendMessage(type="shutdown_request", recipient="backend-reviewer", content="Review complete")
# if HAS_FRONTEND:
SendMessage(type="shutdown_request", recipient="frontend-reviewer", content="Review complete")
TeamDelete()

Incorrect — No team teardown:
# Agents keep running indefinitely
Task(subagent_type="code-quality-reviewer", team_name="review-pr-$PR_NUMBER")
Task(subagent_type="security-auditor", team_name="review-pr-$PR_NUMBER")
# Missing shutdown_request calls!

Correct — Proper team teardown:
# After review synthesis complete
SendMessage(type="shutdown_request", recipient="quality-reviewer", content="Review complete")
SendMessage(type="shutdown_request", recipient="security-reviewer", content="Review complete")
TeamDelete() # Clean shutdown

Fallback: If team formation fails, use standard Task tool spawns from agent-prompts-task-tool.md.
Agent Prompts — Task Tool Mode — HIGH
Launch SIX specialized reviewers in ONE message with run_in_background: true:
| Agent | Focus Area |
|---|---|
| code-quality-reviewer #1 | Readability, complexity, DRY |
| code-quality-reviewer #2 | Type safety, Zod, Pydantic |
| security-auditor | Security, secrets, injection |
| test-generator | Test coverage, edge cases |
| backend-system-architect | API, async, transactions |
| frontend-ui-developer | React 19, hooks, a11y |
# DOMAIN-AWARE AGENT SELECTION
# Only spawn agents relevant to detected domains.
# CHANGED_FILES and domain flags (HAS_FRONTEND, HAS_BACKEND, HAS_AI)
# are captured in Phase 1.
# ALWAYS spawn these 4 core agents:
# - code-quality-reviewer (readability)
# - code-quality-reviewer (type safety)
# - security-auditor
# - test-generator
# CONDITIONALLY spawn these based on domain:
# - backend-system-architect → only if HAS_BACKEND
# - frontend-ui-developer → only if HAS_FRONTEND
# - llm-integrator (7th) → only if HAS_AI
# PARALLEL - All agents in ONE message
Task(
description="Review code quality",
subagent_type="code-quality-reviewer",
prompt="""CODE QUALITY REVIEW for PR $PR_NUMBER
Review code readability and maintainability:
1. Naming conventions and clarity
2. Function/method complexity (cyclomatic < 10)
3. DRY violations and code duplication
4. SOLID principles adherence
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files. Focus your analysis on the diff.
SUMMARY: End with: "RESULT: [PASS|WARN|FAIL] - [N] issues: [brief list]"
""",
run_in_background=True,
max_turns=25
)
Task(
description="Review type safety",
subagent_type="code-quality-reviewer",
prompt="""TYPE SAFETY REVIEW for PR $PR_NUMBER
Review type safety and validation:
1. TypeScript strict mode compliance
2. Zod/Pydantic schema usage
3. No `any` types or type assertions
4. Exhaustive switch/union handling
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files. Focus your analysis on the diff.
SUMMARY: End with: "RESULT: [PASS|WARN|FAIL] - [N] type issues: [brief list]"
""",
run_in_background=True,
max_turns=25
)
Task(
description="Security audit PR",
subagent_type="security-auditor",
prompt="""SECURITY REVIEW for PR $PR_NUMBER
Security audit:
1. Secrets/credentials in code
2. Injection vulnerabilities (SQL, XSS)
3. Authentication/authorization checks
4. Dependency vulnerabilities
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files. Focus your analysis on the diff.
SUMMARY: End with: "RESULT: [PASS|WARN|BLOCK] - [N] findings: [severity summary]"
""",
run_in_background=True,
max_turns=25
)
Task(
description="Review test adequacy",
subagent_type="test-generator",
prompt="""TEST ADEQUACY REVIEW for PR $PR_NUMBER
Evaluate whether this PR has sufficient tests:
1. TEST EXISTENCE CHECK
- Does the PR add/modify code WITHOUT adding/updating tests?
- Are there changed files with 0 corresponding test files?
- Flag: "MISSING" if code changes have no tests at all
2. TEST TYPE MATCHING (use testing-patterns rules)
Match changed code to required test types:
- API endpoint changes → need integration tests (rule: integration-api)
- DB schema changes → need migration + integration tests (rule: integration-database)
- UI component changes → need unit + a11y tests (rule: unit-aaa-pattern, a11y-testing)
- Business logic → need unit + property tests (rule: verification-techniques)
- LLM/AI changes → need eval tests (rule: llm-evaluation)
3. TEST QUALITY
- Meaningful assertions (not just truthy/exists)
- Edge cases and error paths covered
- No flaky patterns (timing, external deps, random)
- Mocking is appropriate (not over-mocked)
4. COVERAGE GAPS
- Which changed functions/methods lack test coverage?
- Which error paths are untested?
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files. Focus your analysis on the diff.
SUMMARY: End with: "RESULT: [ADEQUATE|GAPS|MISSING] - [N] untested paths, [M] missing test types - [key gap]"
""",
run_in_background=True,
max_turns=25
)
Task(
description="Review backend code",
subagent_type="backend-system-architect",
prompt="""BACKEND REVIEW for PR $PR_NUMBER
Review backend code:
1. API design and REST conventions
2. Async/await patterns and error handling
3. Database query efficiency (N+1)
4. Transaction boundaries
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files. Focus your analysis on the diff.
SUMMARY: End with: "RESULT: [PASS|WARN|FAIL] - [N] issues: [key concern]"
""",
run_in_background=True,
max_turns=25
)
Task(
description="Review frontend code",
subagent_type="frontend-ui-developer",
prompt="""FRONTEND REVIEW for PR $PR_NUMBER
Review frontend code:
1. React 19 patterns (hooks, server components)
2. State management correctness
3. Accessibility (a11y) compliance
4. Performance (memoization, lazy loading)
Scope: ONLY review the following changed files:
${CHANGED_FILES}
Do NOT explore beyond these files. Focus your analysis on the diff.
SUMMARY: End with: "RESULT: [PASS|WARN|FAIL] - [N] issues: [key concern]"
""",
run_in_background=True,
max_turns=25
)

Incorrect — Sequential agents:
# 6 reviewers run one-by-one (slow)
Task(subagent_type="code-quality-reviewer", prompt="...")
# Wait for completion
Task(subagent_type="security-auditor", prompt="...")
# Wait again...

Correct — Parallel agents:
# All 6 agents in ONE message (fast)
Task(subagent_type="code-quality-reviewer", prompt="...", run_in_background=True)
Task(subagent_type="security-auditor", prompt="...", run_in_background=True)
Task(subagent_type="test-generator", prompt="...", run_in_background=True)
# All launch simultaneously

Configure an AI code review agent for prompt injection and token limit checks — MEDIUM
AI Code Review Agent (Optional)
If PR includes AI/ML code, add a 7th agent:
Task(
description="Review LLM integration",
subagent_type="llm-integrator",
prompt="""LLM CODE REVIEW for PR $PR_NUMBER
Review AI/LLM integration:
1. Prompt injection prevention
2. Token limit handling
3. Caching strategy
4. Error handling and fallbacks
SUMMARY: End with: "RESULT: [PASS|WARN|FAIL] - [N] LLM issues: [key concern]"
""",
run_in_background=True,
max_turns=25
)

Incorrect — Missing LLM review for AI code:
# PR modifies prompt.py but no LLM reviewer
Task(subagent_type="code-quality-reviewer", ...)
Task(subagent_type="security-auditor", ...)
# Missing: LLM-specific review

Correct — Add LLM reviewer for AI code:
# Detect AI/ML changes, add specialized reviewer
if pr_contains_llm_code:
Task(subagent_type="llm-integrator", prompt="LLM CODE REVIEW...", run_in_background=True)

References (5)
Orchestration Mode Selection
Choose Agent Teams (mesh -- reviewers cross-reference findings) or Task tool (star -- all report to lead):
- CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 -> Agent Teams mode
- Agent Teams unavailable -> Task tool mode (default)
- Otherwise: Full review with 6+ agents and cross-cutting concerns -> recommend Agent Teams; Quick/focused review -> Task tool
| Aspect | Task Tool | Agent Teams |
|---|---|---|
| Communication | All reviewers report to lead | Reviewers cross-reference findings |
| Security + quality overlap | Lead deduplicates | security-auditor messages code-quality-reviewer directly |
| Cost | ~200K tokens | ~500K tokens |
| Best for | Quick/focused reviews | Full reviews with cross-cutting concerns |
Fallback: If Agent Teams encounters issues, fall back to Task tool for remaining review.
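The decision rules above can be sketched as a helper (an illustrative function; the env var name is the one documented in this section):

```python
def select_orchestration_mode(env, full_review):
    """Agent Teams requires the experimental flag; otherwise fall back to Task tool."""
    teams_enabled = env.get("CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS") == "1"
    if not teams_enabled:
        return "task-tool"   # default when Agent Teams is unavailable
    # Full reviews with cross-cutting concerns benefit from the mesh;
    # quick/focused reviews stay on the cheaper star topology.
    return "agent-teams" if full_review else "task-tool"
```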
Review Report Template
Use this template when synthesizing agent feedback in Phase 5:
# PR Review: #$PR_NUMBER
## Summary
[1-2 sentence overview]
## Code Quality
| Area | Status | Notes |
|------|--------|-------|
| Readability | ✅/⚠️/❌ | [notes] |
| Type Safety | ✅/⚠️/❌ | [notes] |
## Test Adequacy
| Check | Status | Details |
|-------|--------|---------|
| Tests exist for changes | ✅/⚠️/❌ | [X changed files have tests, Y do not] |
| Test types match changes | ✅/⚠️/❌ | [e.g., API changes have integration tests] |
| Coverage gaps | ✅/⚠️/❌ | [N untested paths] |
| Test quality | ✅/⚠️/❌ | [meaningful assertions, no flaky patterns] |
**Verdict:** [ADEQUATE | GAPS (list) | MISSING (critical)]
## Security
| Check | Status |
|-------|--------|
| Secrets | ✅/❌ |
| Input Validation | ✅/❌ |
| Dependencies | ✅/❌ |
## Blockers (Must Fix)
- [if any]
## Suggestions (Non-Blocking)
- [improvements]

Review Template
PR Review Template
Review Output Format
# PR Review: #[NUMBER]
**Title**: [PR Title]
**Author**: [Author]
**Files Changed**: X | **Lines**: +Y / -Z
## Summary
[1-2 sentence overview of changes]
## ✅ Strengths
- [What's done well - from praise comments]
- [Good patterns observed]
## 🔍 Code Quality
| Area | Status | Notes |
|------|--------|-------|
| Readability | ✅/⚠️/❌ | [notes] |
| Type Safety | ✅/⚠️/❌ | [notes] |
| Test Coverage | ✅/⚠️/❌ | [X% coverage] |
| Error Handling | ✅/⚠️/❌ | [notes] |
## 🔒 Security
| Check | Status | Issues |
|-------|--------|--------|
| Secrets Scan | ✅/❌ | [count] |
| Input Validation | ✅/❌ | [issues] |
| Dependencies | ✅/❌ | [vulnerabilities] |
## ⚠️ Suggestions (Non-Blocking)
- [suggestion 1 with file:line reference]
- [suggestion 2]
## 🔴 Blockers (Must Fix Before Merge)
- [blocker 1 if any]
- [blocker 2 if any]
## 📋 CI Status
- Backend Lint: ✅/❌
- Backend Types: ✅/❌
- Backend Tests: ✅/❌
- Frontend Format: ✅/❌
- Frontend Lint: ✅/❌
- Frontend Types: ✅/❌
- Frontend Tests: ✅/❌

Approval Message
## ✅ Approved
Great work! Code quality is solid, tests pass, and security looks good.
### Highlights
- [specific positive feedback]
### Minor Suggestions (Non-Blocking)
- [optional improvements]
🤖 Reviewed with Claude Code (6 parallel agents)

Request Changes Message
## 🔄 Changes Requested
Good progress, but a few items need addressing before merge.
### Must Fix
1. [blocker 1]
2. [blocker 2]
### Suggestions
- [optional improvements]
🤖 Reviewed with Claude Code (6 parallel agents)

Conventional Comments
| Prefix | Usage |
|---|---|
| praise: | Highlight good patterns |
| nitpick: | Minor style preference |
| suggestion: | Non-blocking improvement |
| issue: | Must be addressed |
| question: | Needs clarification |
Example Comments
praise: Excellent use of the repository pattern here - clean separation of concerns.
nitpick: Consider using a more descriptive variable name than `d` - maybe `data` or `response`.
suggestion: This loop could be replaced with a list comprehension for better readability.
issue: This SQL query is vulnerable to injection - use parameterized queries instead.
question: Is there a reason we're not using the existing `UserService` here?
Task Metrics Template (CC 2.1.30)
Task tool results now include efficiency metrics. After parallel agents complete, report:
## Review Efficiency
| Agent | Tokens | Tools | Duration |
|-------|--------|-------|----------|
| code-quality-reviewer | 450 | 8 | 12s |
| security-auditor | 620 | 12 | 18s |
| test-generator | 380 | 6 | 10s |
**Total:** 1,450 tokens, 26 tool calls

Use metrics to:
- Identify slow or expensive agents
- Track review efficiency over time
- Optimize agent prompts based on token usage
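Aggregating the per-agent rows into report totals is straightforward (a sketch; the dict keys are illustrative, not a documented metrics schema):

```python
def summarize_metrics(agent_metrics):
    """Aggregate per-agent token/tool/duration metrics into report totals."""
    total_tokens = sum(m["tokens"] for m in agent_metrics)
    total_tools = sum(m["tools"] for m in agent_metrics)
    # The slowest agent is the first candidate for prompt optimization
    slowest = max(agent_metrics, key=lambda m: m["duration_s"])["agent"]
    return {"tokens": total_tokens, "tools": total_tools, "slowest": slowest}
```

Fed the example table above, this yields 1,450 total tokens, 26 tool calls, and flags security-auditor as the slowest agent.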
Validation Commands
Backend
cd backend
poetry run ruff format --check app/
poetry run ruff check app/
poetry run pytest tests/unit/ -v --tb=short
poetry run pytest tests/ -v --cov=app --cov-report=term-missing

Frontend
cd frontend
npm run format:check
npm run lint
npm run typecheck
npm run test
npm run test -- --coverage

Integration Tests (if infrastructure detected)
# Detect real service testing capability
ls **/docker-compose*.yml 2>/dev/null
ls **/testcontainers* 2>/dev/null
# If detected, run integration tests against real services
docker-compose -f docker-compose.test.yml up -d
poetry run pytest tests/integration/ -v
docker-compose -f docker-compose.test.yml down

Test Adequacy Check
# List changed files without corresponding test files
gh pr diff $PR_NUMBER --name-only | while read f; do
# Skip test files, configs, docs
case "$f" in
tests/*|*test*|*.md|*.json|*.yml) continue ;;
esac
# Check if a test file exists
test_file="tests/$(basename "$f" .py)_test.py"
if [ ! -f "$test_file" ]; then
echo "NO TEST: $f"
fi
done
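The same heuristic can be written in Python for easier unit testing (a sketch; the tests/&lt;name&gt;_test.py convention is the one assumed by the shell version above):

```python
import os

SKIP_SUFFIXES = (".md", ".json", ".yml")

def files_missing_tests(changed_files, existing_files):
    """Flag changed source files with no matching tests/<name>_test.py file."""
    existing = set(existing_files)
    missing = []
    for f in changed_files:
        name = os.path.basename(f)
        # Skip test files, configs, docs (mirrors the shell case pattern)
        if f.startswith("tests/") or "test" in name or f.endswith(SKIP_SUFFIXES):
            continue
        stem, _ = os.path.splitext(name)
        if f"tests/{stem}_test.py" not in existing:
            missing.append(f)
    return missing
```

Like the shell version, this only knows the one test-naming convention; adjust the lookup for your repo's layout.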