OrchestKit v7.43.0 — 104 skills, 36 agents, 173 hooks · Claude Code 2.1.105+

explore — Deep codebase exploration with parallel agents. Use when exploring a repo, discovering architecture, finding files, or analyzing design patterns.

Type: Command · Priority: high
Invoke: /ork:explore

Codebase Exploration

Multi-angle codebase exploration using 3-5 parallel agents.

Quick Start

/ork:explore authentication

Opus 4.6: Exploration agents use native adaptive thinking for deeper pattern recognition across large codebases.


STEP 0: Verify User Intent with AskUserQuestion

BEFORE creating tasks, clarify what the user wants to explore:

AskUserQuestion(
  questions=[{
    "question": "What aspect do you want to explore?",
    "header": "Focus",
    "options": [
      {"label": "Full exploration (Recommended)", "description": "Code structure + data flow + architecture + health assessment", "markdown": "```\nFull Exploration (8 phases)\n───────────────────────────\n  4 parallel explorer agents:\n  ┌──────────┐ ┌──────────┐\n  │ Structure│ │ Data     │\n  │ Explorer │ │ Flow     │\n  ├──────────┤ ├──────────┤\n  │ Pattern  │ │ Product  │\n  │ Analyst  │ │ Context  │\n  └──────────┘ └──────────┘\n\n  ┌──────────────────────┐\n  │ Code Health    N/10  │\n  │ Dep Hotspots   map   │\n  │ Architecture   diag  │\n  └──────────────────────┘\n  Output: Full exploration report\n```"},
      {"label": "Code structure only", "description": "Find files, classes, functions related to topic", "markdown": "```\nCode Structure\n──────────────\n  Grep ──▶ Glob ──▶ Map\n\n  Output:\n  ├── File tree (relevant)\n  ├── Key classes/functions\n  ├── Import graph\n  └── Entry points\n  No agents — direct search\n```"},
      {"label": "Data flow", "description": "Trace how data moves through the system", "markdown": "```\nData Flow Trace\n───────────────\n  Input ──▶ Transform ──▶ Output\n    │          │            │\n    ▼          ▼            ▼\n  [API]    [Service]    [DB/Cache]\n\n  Traces: request lifecycle,\n  state mutations, side effects\n  Agent: 1 data-flow explorer\n```"},
      {"label": "Architecture patterns", "description": "Identify design patterns and integrations", "markdown": "```\nArchitecture Analysis\n─────────────────────\n  ┌─────────────────────┐\n  │ Detected Patterns    │\n  │ ├── MVC / Hexagonal  │\n  │ ├── Event-driven?    │\n  │ ├── Service layers   │\n  │ └── External APIs    │\n  ├─────────────────────┤\n  │ Integration Map      │\n  │ DB ←→ Cache ←→ Queue │\n  └─────────────────────┘\n  Agent: backend-system-architect\n```"},
      {"label": "Quick search", "description": "Just find relevant files, skip deep analysis", "markdown": "```\nQuick Search (~30s)\n───────────────────\n  Grep + Glob ──▶ File list\n\n  Output:\n  ├── Matching files\n  ├── Line references\n  └── Brief summary\n  No agents, no health check,\n  no report generation\n```"}
    ],
    "multiSelect": false
  }]
)

Based on the answer, adjust the workflow:

  • Full exploration: All phases, all parallel agents
  • Code structure only: Skip phases 5-7 (health, dependencies, product)
  • Data flow: Focus phase 3 agents on data tracing
  • Architecture patterns: Focus on backend-system-architect agent
  • Quick search: Skip to phases 1-2 only, return file list
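The branching above can be sketched as a dispatch table. This is an illustrative sketch, not an OrchestKit API: the dictionary shape, `plan_for` helper, and `"focus"` field are invented for the example; phase numbers follow the Workflow Overview table.

```python
# Hypothetical mapping from the AskUserQuestion answer to the phases to run
# and an optional agent focus. Structure and names are illustrative only.
WORKFLOW_BY_FOCUS = {
    "Full exploration":      {"phases": list(range(1, 9)), "focus": None},
    "Code structure only":   {"phases": [1, 2, 3, 4, 8],   "focus": None},  # skip 5-7
    "Data flow":             {"phases": list(range(1, 9)), "focus": "data tracing"},
    "Architecture patterns": {"phases": list(range(1, 9)), "focus": "backend-system-architect"},
    "Quick search":          {"phases": [1, 2],            "focus": None},  # file list only
}

def plan_for(answer: str) -> dict:
    # Default to full exploration when the answer is unrecognized.
    return WORKFLOW_BY_FOCUS.get(answer, WORKFLOW_BY_FOCUS["Full exploration"])
```

A table like this keeps the answer-to-workflow mapping in one place instead of scattering conditionals across phases.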

STEP 0b: Select Orchestration Mode

MCP Probe

ToolSearch(query="select:mcp__memory__search_nodes")
Write(".claude/chain/capabilities.json", { memory, timestamp })

if capabilities.memory:
  mcp__memory__search_nodes({ query: "architecture decisions for {path}" })
  # Enrich exploration with past decisions

Exploration Handoff

After exploration completes, write results for downstream skills:

Write(".claude/chain/exploration.json", JSON.stringify({
  "phase": "explore", "skill": "explore",
  "timestamp": now(), "status": "completed",
  "outputs": {
    "architecture_map": { ... },
    "patterns_found": ["repository", "service-layer"],
    "complexity_hotspots": ["src/auth/", "src/payments/"]
  }
}))

Choose Agent Teams (mesh) or Task tool (star):

  1. Agent Teams mode (GA since CC 2.1.33) → recommended for 4+ agents
  2. Task tool mode → for quick/single-focus exploration
  3. ORCHESTKIT_FORCE_TASK_TOOL=1 → Task tool (override)

| Aspect | Task Tool | Agent Teams |
|--------|-----------|-------------|
| Discovery sharing | Lead synthesizes after all complete | Explorers share discoveries as they go |
| Cross-referencing | Lead connects dots | Data flow explorer alerts architecture explorer |
| Cost | ~150K tokens | ~400K tokens |
| Best for | Quick/focused searches | Deep full-codebase exploration |

Fallback: If Agent Teams encounters issues, fall back to Task tool for remaining exploration.


Task Management (MANDATORY)

BEFORE doing ANYTHING else, create tasks to show progress:

# 1. Create main task IMMEDIATELY
TaskCreate(subject="Explore: {topic}", description="Deep codebase exploration for {topic}", activeForm="Exploring {topic}")  # id=1

# 2. Create subtasks for each phase
TaskCreate(subject="Initial file search", activeForm="Searching files")                # id=2
TaskCreate(subject="Check knowledge graph", activeForm="Checking memory")              # id=3
TaskCreate(subject="Launch exploration agents", activeForm="Dispatching explorers")     # id=4
TaskCreate(subject="Assess code health (0-10)", activeForm="Assessing code health")    # id=5
TaskCreate(subject="Map dependency hotspots", activeForm="Mapping dependencies")       # id=6
TaskCreate(subject="Add product perspective", activeForm="Adding product context")     # id=7
TaskCreate(subject="Generate exploration report", activeForm="Generating report")      # id=8

# 3. Set dependencies for sequential phases
TaskUpdate(taskId="3", addBlockedBy=["2"])  # Memory check needs file search first
TaskUpdate(taskId="4", addBlockedBy=["3"])  # Agents need memory context
TaskUpdate(taskId="5", addBlockedBy=["4"])  # Health needs exploration done
TaskUpdate(taskId="6", addBlockedBy=["4"])  # Hotspots need exploration done
TaskUpdate(taskId="7", addBlockedBy=["4"])  # Product needs exploration done
TaskUpdate(taskId="8", addBlockedBy=["5", "6", "7"])  # Report needs all analysis done

# 4. Before starting each task, verify it's unblocked
task = TaskGet(taskId="2")  # Verify blockedBy is empty

# 5. Update status as you progress
TaskUpdate(taskId="2", status="in_progress")  # When starting
TaskUpdate(taskId="2", status="completed")    # When done — repeat for each subtask

Workflow Overview

| Phase | Activities | Output |
|-------|------------|--------|
| 1. Initial Search | Grep, Glob for matches | File locations |
| 2. Memory Check | Search knowledge graph | Prior context |
| 3. Deep Exploration | 4 parallel explorers | Multi-angle analysis |
| 4. AI System (if applicable) | LangGraph, prompts, RAG | AI-specific findings |
| 5. Code Health | Rate code 0-10 | Quality scores |
| 6. Dependency Hotspots | Identify coupling | Hotspot visualization |
| 7. Product Perspective | Business context | Findability suggestions |
| 8. Report Generation | Compile findings | Actionable report |

Progressive Output (CC 2.1.76)

Output findings incrementally as each phase completes — don't batch until the report:

| After Phase | Show User |
|-------------|-----------|
| 1. Initial Search | File matches, grep results |
| 2. Memory Check | Prior decisions and relevant context |
| 3. Deep Exploration | Each explorer agent's findings as they return |
| 5. Code Health | Health score with dimension breakdown |

For Phase 3 parallel agents, output each agent's findings as soon as it returns — don't wait for all 4 explorers. Early findings from one agent may answer the user's question before remaining agents complete, allowing early termination.
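The as-they-return pattern can be sketched with `concurrent.futures`. Everything here is a stand-in: the explorer names and the fake `run_explorer` function are illustrative, not OrchestKit or Claude Code APIs.

```python
# Sketch: surface each explorer's findings as soon as it finishes,
# with optional early termination once the question is answered.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

EXPLORERS = ["structure", "data-flow", "backend", "frontend"]

def run_explorer(name: str) -> str:
    # Stand-in for a real agent dispatch; sleep simulates variable runtime.
    time.sleep(0.01 * len(name))
    return f"{name}: findings"

def explore_progressively(question_answered=lambda finding: False):
    findings = []
    with ThreadPoolExecutor(max_workers=len(EXPLORERS)) as pool:
        futures = {pool.submit(run_explorer, n): n for n in EXPLORERS}
        for fut in as_completed(futures):
            finding = fut.result()
            findings.append(finding)      # show the user immediately
            if question_answered(finding):
                break                     # early termination
    return findings
```

The key point is iterating with `as_completed` rather than collecting all futures first, which is what lets one early result short-circuit the rest.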


Phase 1: Initial Search

# PARALLEL - Quick searches
Grep(pattern="$ARGUMENTS[0]", output_mode="files_with_matches")
Glob(pattern="**/*$ARGUMENTS[0]*")

Phase 2: Memory Check

mcp__memory__search_nodes(query="$ARGUMENTS[0]")
mcp__memory__search_nodes(query="architecture")

Phase 3: Parallel Deep Exploration (4 Agents)

Load Read("${CLAUDE_SKILL_DIR}/rules/exploration-agents.md") for Task tool mode prompts.

Load Read("${CLAUDE_SKILL_DIR}/rules/agent-teams-mode.md") for the Agent Teams alternative.

Phase 4: AI System Exploration (If Applicable)

For AI/ML topics, add exploration of: LangGraph workflows, prompt templates, RAG pipeline, caching strategies.

Phase 5: Code Health Assessment

Load Read("${CLAUDE_SKILL_DIR}/rules/code-health-assessment.md") for the agent prompt. Load Read("${CLAUDE_SKILL_DIR}/references/code-health-rubric.md") for scoring criteria.

Phase 6: Dependency Hotspot Map

Load Read("${CLAUDE_SKILL_DIR}/rules/dependency-hotspot-analysis.md") for the agent prompt. Load Read("${CLAUDE_SKILL_DIR}/references/dependency-analysis.md") for metrics.

Phase 7: Product Perspective

Load Read("${CLAUDE_SKILL_DIR}/rules/product-perspective.md") for the agent prompt. Load Read("${CLAUDE_SKILL_DIR}/references/findability-patterns.md") for best practices.

Phase 8: Generate Report

Load Read("${CLAUDE_SKILL_DIR}/references/exploration-report-template.md").

Common Exploration Queries

  • "How does authentication work?"
  • "Where are API endpoints defined?"
  • "Find all usages of EventBroadcaster"
  • "What's the workflow for content analysis?"
Related Skills

  • ork:implement: Implement after exploration

Version: 2.4.0 (April 2026) — Fork-eligible agents for 30-50% cost reduction (#1227)


Rules (5)

Coordinate multi-agent exploration teams with real-time discovery sharing — HIGH

Agent Teams Mode

In Agent Teams mode, form an exploration team where explorers share discoveries in real-time:

TeamCreate(team_name="explore-{topic}", description="Explore {topic}")

Agent(subagent_type="Explore", name="structure-explorer",
     team_name="explore-{topic}",
     prompt="""Find all files, classes, and functions related to: {topic}
     When you discover key entry points, message data-flow-explorer so they
     can trace data paths from those points.
     When you find backend patterns, message backend-explorer.
     When you find frontend components, message frontend-explorer.""")

Agent(subagent_type="Explore", name="data-flow-explorer",
     team_name="explore-{topic}",
     prompt="""Trace entry points, processing, and storage for: {topic}
     When structure-explorer shares entry points, start tracing from those.
     When you discover cross-boundary data flows (frontend→backend or vice versa),
     message both backend-explorer and frontend-explorer.""")

Agent(subagent_type="backend-system-architect", name="backend-explorer",
     team_name="explore-{topic}",
     prompt="""Analyze backend architecture patterns for: {topic}
     When structure-explorer or data-flow-explorer share backend findings,
     investigate deeper — API design, database schema, service patterns.
     Share integration points with frontend-explorer for consistency.""")

Agent(subagent_type="frontend-ui-developer", name="frontend-explorer",
     team_name="explore-{topic}",
     prompt="""Analyze frontend components, state, and routes for: {topic}
     When structure-explorer shares component locations, investigate deeper.
     When backend-explorer shares API patterns, verify frontend alignment.
     Share component hierarchy with data-flow-explorer.""")

Team Teardown

After report generation:

SendMessage(type="shutdown_request", recipient="structure-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="data-flow-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="backend-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="frontend-explorer", content="Exploration complete")
TeamDelete()

# Worktree cleanup (CC 2.1.72)
ExitWorktree(action="keep")

Fallback: If team formation fails, use standard Task tool spawns. See exploration-agents.md.

Incorrect — Sequential exploration without coordination:

Agent(subagent_type="Explore", prompt="Find auth files")
# Wait for result...
Agent(subagent_type="Explore", prompt="Trace auth data flow")
# Sequential, no sharing between agents

Correct — Team mode with real-time discovery sharing:

TeamCreate(team_name="explore-auth")
Agent(subagent_type="Explore", name="structure-explorer",
     team_name="explore-auth",
     prompt="Find auth files. Message data-flow-explorer with entry points.")
Agent(subagent_type="Explore", name="data-flow-explorer",
     team_name="explore-auth",
     prompt="When structure-explorer shares entry points, trace data flows.")
# Parallel execution, coordinated via messages

Score code health across five quality dimensions with structured assessment criteria — MEDIUM

Code Health Assessment

Rate found code quality 0-10 with specific dimensions. See code-health-rubric.md for scoring criteria.

Agent(
  subagent_type="code-quality-reviewer",
  prompt="""CODE HEALTH ASSESSMENT for files related to: $ARGUMENTS

  Rate each dimension 0-10:

  1. READABILITY (0-10)
     - Clear naming conventions?
     - Appropriate comments?
     - Logical organization?

  2. MAINTAINABILITY (0-10)
     - Single responsibility?
     - Low coupling?
     - Easy to modify?

  3. TESTABILITY (0-10)
     - Pure functions where possible?
     - Dependency injection?
     - Existing test coverage?

  4. COMPLEXITY (0-10, inverted: 10=simple, 0=complex)
     - Cyclomatic complexity?
     - Nesting depth?
     - Function length?

  5. DOCUMENTATION (0-10)
     - API docs present?
     - Usage examples?
     - Architecture notes?

  Output:
  {
    "overall_score": N.N,
    "dimensions": {
      "readability": N,
      "maintainability": N,
      "testability": N,
      "complexity": N,
      "documentation": N
    },
    "hotspots": ["file:line - issue"],
    "recommendations": ["improvement suggestion"]
  }

  SUMMARY: End with: "HEALTH: [N.N]/10 - [best dimension] strong, [worst dimension] needs work"
  """,
  run_in_background=True,
  max_turns=25
)

Incorrect — Vague code quality feedback:

Code Review: The code looks okay. Some parts are complex.
Maybe add more tests.

Correct — Structured health assessment with scores:

{
  "overall_score": 6.2,
  "dimensions": {
    "readability": 8,
    "maintainability": 5,
    "testability": 4,
    "complexity": 6,
    "documentation": 8
  },
  "hotspots": [
    "auth.ts:45 - nested if/else 5 levels deep",
    "utils.ts:120 - 200-line function, no SRP"
  ],
  "recommendations": [
    "Extract auth.ts:45-80 to separate validation functions",
    "Add unit tests for utils.ts edge cases"
  ]
}

Identify highly-coupled code and dependency bottlenecks to reduce change risk — MEDIUM

Dependency Hotspot Analysis

Identify highly-coupled code and dependency bottlenecks. See dependency-analysis.md for metrics and formulas.

Agent(
  subagent_type="backend-system-architect",
  prompt="""DEPENDENCY HOTSPOT ANALYSIS for: $ARGUMENTS

  Analyze coupling and dependencies:

  1. IMPORT ANALYSIS
     - Which files import this code?
     - What does this code import?
     - Circular dependencies?

  2. COUPLING SCORE (0-10, 10=highly coupled)
     - How many files would break if this changes?
     - Fan-in (incoming dependencies)
     - Fan-out (outgoing dependencies)

  3. CHANGE IMPACT
     - Blast radius of modifications
     - Files that always change together

  4. HOTSPOT VISUALIZATION

[Module A] --depends--> [Target] <--depends-- [Module B]
                           |
                           v
                      [Module C]


  Output:
  {
    "coupling_score": N,
    "fan_in": N,
    "fan_out": N,
    "circular_deps": [],
    "change_impact": ["file - reason"],
    "hotspot_diagram": "ASCII diagram"
  }

  SUMMARY: End with: "COUPLING: [N]/10 - [N] incoming, [M] outgoing deps - [key concern]"
  """,
  run_in_background=True,
  max_turns=25
)

Incorrect — Listing imports without analysis:

auth.ts imports:
- utils.ts
- config.ts
- db.ts

Correct — Hotspot analysis with coupling score:

{
  "coupling_score": 8,
  "fan_in": 12,
  "fan_out": 5,
  "circular_deps": ["auth.ts → user.ts → auth.ts"],
  "change_impact": [
    "auth.ts change breaks 12 files",
    "utils.ts and auth.ts always change together"
  ],
  "hotspot_diagram": "
    [12 files] --depend on--> [auth.ts]
                                  |
                              depends on
                                  v
                      [utils, config, db, user, session]
  "
}

Spawn parallel exploration agents using Task tool for concurrent codebase analysis — HIGH

Exploration Agents (Task Tool Mode)

Launch 4 specialized explorers in ONE message with run_in_background: true:

# PARALLEL - All 4 in ONE message
Agent(
  subagent_type="Explore",
  prompt="""Code Structure: Find all files, classes, functions related to: $ARGUMENTS

  Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.

  SUMMARY: End with: "RESULT: [N] files, [M] classes - [key location, e.g., 'src/auth/']"
  """,
  run_in_background=True,
  max_turns=25
)
Agent(
  subagent_type="Explore",
  prompt="""Data Flow: Trace entry points, processing, storage for: $ARGUMENTS

  Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.

  SUMMARY: End with: "RESULT: [entry] → [processing] → [storage] - [N] hop flow"
  """,
  run_in_background=True,
  max_turns=25
)
Agent(
  subagent_type="backend-system-architect",
  prompt="""Backend Patterns: Analyze architecture patterns, integrations, dependencies for: $ARGUMENTS

  Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.

  SUMMARY: End with: "RESULT: [pattern name] - [N] integrations, [M] dependencies"
  """,
  run_in_background=True,
  max_turns=25
)
Agent(
  subagent_type="frontend-ui-developer",
  prompt="""Frontend Analysis: Find components, state management, routes for: $ARGUMENTS

  Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.

  SUMMARY: End with: "RESULT: [N] components, [state lib] - [key route]"
  """,
  run_in_background=True,
  max_turns=25
)

Fork Pattern (CC 2.1.89 — #1227)

These agents are fork-eligible: short prompts (<500 words), no custom model, no worktree isolation. CC automatically shares the parent's cached API prefix across all 4 forks, reducing cost by ~60%.

See chain-patterns/references/fork-pattern.md for full details.

Do NOT add model= or isolation="worktree" to these agents — it breaks cache sharing.

Explorer Roles

  1. Code Structure Explorer - Files, classes, functions
  2. Data Flow Explorer - Entry points, processing, storage
  3. Backend Architect - Patterns, integration, dependencies
  4. Frontend Developer - Components, state, routes

Incorrect — Sequential exploration:

Agent(subagent_type="Explore", prompt="Find auth files")
# Wait...
Agent(subagent_type="Explore", prompt="Trace auth flow")
# Wait...
Agent(subagent_type="backend-system-architect", prompt="Analyze patterns")
# Slow, sequential

Correct — Parallel exploration in one message:

# All 4 in ONE message with run_in_background: true
Agent(subagent_type="Explore", prompt="Code Structure: Find all files related to auth",
     run_in_background=True, max_turns=25)
Agent(subagent_type="Explore", prompt="Data Flow: Trace auth entry→storage",
     run_in_background=True, max_turns=25)
Agent(subagent_type="backend-system-architect", prompt="Backend Patterns: Analyze auth architecture",
     run_in_background=True, max_turns=25)
Agent(subagent_type="frontend-ui-developer", prompt="Frontend: Find auth components",
     run_in_background=True, max_turns=25)
# Parallel execution

Add business context and findability analysis to technical codebase exploration — MEDIUM

Product Perspective

Add business context and findability suggestions. See findability-patterns.md for discoverability best practices.

Agent(
  subagent_type="product-strategist",
  prompt="""PRODUCT PERSPECTIVE for: $ARGUMENTS

  Analyze from a product/business viewpoint:

  1. BUSINESS CONTEXT
     - What user problem does this code solve?
     - What feature/capability does it enable?
     - Who are the users of this code?

  2. FINDABILITY SUGGESTIONS
     - Better naming for discoverability?
     - Missing documentation entry points?
     - Where should someone look first?

  3. KNOWLEDGE GAPS
     - What context is missing for new developers?
     - What tribal knowledge exists?
     - What should be documented?

  4. SEARCH OPTIMIZATION
     - Keywords someone might use to find this
     - Alternative terms for the same concept
     - Related concepts to cross-reference

  Output:
  {
    "business_purpose": "description",
    "primary_users": ["user type"],
    "findability_issues": ["issue - suggestion"],
    "recommended_entry_points": ["file - why start here"],
    "search_keywords": ["keyword"],
    "documentation_gaps": ["gap"]
  }

  SUMMARY: End with: "FINDABILITY: [N] issues - start at [recommended entry point]"
  """,
  run_in_background=True,
  max_turns=25)

Incorrect — Technical analysis without business context:

Found auth.ts, user.ts, session.ts
Uses JWT tokens, bcrypt hashing
Database: PostgreSQL users table

Correct — Product perspective with findability:

{
  "business_purpose": "Secure user authentication and session management",
  "primary_users": ["End users logging in", "Developers integrating auth"],
  "findability_issues": [
    "auth.ts - generic name, try auth/core.ts",
    "Missing README in auth/ - devs don't know where to start"
  ],
  "recommended_entry_points": [
    "auth/README.md (missing - create this!)",
    "auth/core.ts - main authentication flow"
  ],
  "search_keywords": ["login", "authentication", "session", "JWT", "security"],
  "documentation_gaps": [
    "No auth flow diagram",
    "Token refresh logic undocumented"
  ]
}

References (4)

Code Health Rubric


Standardized 0-10 scoring criteria for assessing code quality across five dimensions.

Scoring Scale

| Score | Rating | Description |
|-------|--------|-------------|
| 9-10 | Excellent | Production-ready, exemplary code |
| 7-8 | Good | Minor improvements possible |
| 5-6 | Adequate | Functional but needs attention |
| 3-4 | Poor | Significant issues, refactor recommended |
| 0-2 | Critical | Major problems, immediate action required |

1. Readability (0-10)

| Score | Criteria |
|-------|----------|
| 10 | Self-documenting, intuitive naming, perfect structure |
| 7-8 | Clear names, logical flow, minimal cognitive load |
| 5-6 | Understandable with effort, some unclear sections |
| 3-4 | Confusing logic, poor naming, requires context |
| 0-2 | Incomprehensible, magic numbers, no conventions |

2. Maintainability (0-10)

| Score | Criteria |
|-------|----------|
| 10 | SRP adherence, loose coupling, DRY, easy to modify |
| 7-8 | Good separation, minor duplication, clear boundaries |
| 5-6 | Some coupling, moderate duplication, changes ripple |
| 3-4 | High coupling, significant duplication, fragile |
| 0-2 | Spaghetti code, any change breaks multiple areas |

3. Testability (0-10)

| Score | Criteria |
|-------|----------|
| 10 | Pure functions, DI, 90%+ coverage, mocks easy |
| 7-8 | Most logic testable, some DI, 70%+ coverage |
| 5-6 | Testable with effort, some hidden dependencies |
| 3-4 | Hard to isolate, global state, 30% coverage |
| 0-2 | Untestable, tightly coupled, no test infrastructure |

4. Complexity (0-10, inverted: 10=simple)

| Score | Criteria |
|-------|----------|
| 10 | Cyclomatic <5, max 2 nesting, <20 line functions |
| 7-8 | Cyclomatic 5-10, 3 nesting, <40 line functions |
| 5-6 | Cyclomatic 10-15, 4 nesting, some long functions |
| 3-4 | Cyclomatic 15-25, deep nesting, 100+ line functions |
| 0-2 | Cyclomatic >25, 6+ nesting, god functions |

5. Documentation (0-10)

| Score | Criteria |
|-------|----------|
| 10 | Complete API docs, examples, architecture notes |
| 7-8 | Public API documented, inline comments where needed |
| 5-6 | Some docstrings, missing edge cases |
| 3-4 | Sparse comments, outdated documentation |
| 0-2 | No documentation, misleading comments |

Overall Score Calculation

overall = (readability + maintainability + testability + complexity + documentation) / 5

Score Interpretation:

  • 8.0+: Ship it
  • 6.0-7.9: Acceptable, plan improvements
  • 4.0-5.9: Technical debt, prioritize refactoring
  • <4.0: Stop and fix before proceeding
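The calculation and interpretation bands above can be written as a small runnable sketch (dimension names follow the rubric; the `overall_health` helper is invented for illustration):

```python
# Sketch of the overall-score formula and its interpretation bands.
def overall_health(dimensions: dict[str, int]) -> tuple[float, str]:
    score = sum(dimensions.values()) / len(dimensions)
    if score >= 8.0:
        verdict = "Ship it"
    elif score >= 6.0:
        verdict = "Acceptable, plan improvements"
    elif score >= 4.0:
        verdict = "Technical debt, prioritize refactoring"
    else:
        verdict = "Stop and fix before proceeding"
    return round(score, 1), verdict
```

The worked example from the Code Health Assessment rule (readability 8, maintainability 5, testability 4, complexity 6, documentation 8) averages to 6.2, landing in the "Acceptable, plan improvements" band.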

Dependency Analysis


Identify coupling hotspots and dependency patterns in codebases.

Fan-In / Fan-Out Metrics

| Metric | Definition | Implication |
|--------|------------|-------------|
| Fan-In | Files that import this module | High = many dependents, changes risky |
| Fan-Out | Modules this file imports | High = many dependencies, fragile |
| Instability | Fan-Out / (Fan-In + Fan-Out) | 0 = stable, 1 = unstable |

Ideal Patterns:

  • Core utilities: High fan-in, low fan-out (stable)
  • Feature modules: Low fan-in, moderate fan-out
  • Entry points: Low fan-in, high fan-out
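The Instability metric from the table reduces to a one-liner; a minimal sketch (the function name is invented):

```python
# Instability = Fan-Out / (Fan-In + Fan-Out); 0 = stable, 1 = unstable.
def instability(fan_in: int, fan_out: int) -> float:
    total = fan_in + fan_out
    return fan_out / total if total else 0.0  # treat an isolated module as stable
```

For example, a core utility with fan-in 12 and fan-out 1 scores about 0.08 (stable), while an entry point with fan-in 0 scores 1.0, which is expected for entry points.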

Hotspot Identification

High-Risk Indicators

| Pattern | Risk | Action |
|---------|------|--------|
| Fan-in > 10 | Blast radius large | Add interface/abstraction |
| Fan-out > 8 | Too many dependencies | Extract facades |
| Instability = 1, Fan-in > 5 | Unstable core | Stabilize or decouple |

Coupling Score Formula

coupling_score = min(10, (fan_in + fan_out) / 3)

  • 0-3: Low coupling (healthy)
  • 4-6: Moderate coupling (monitor)
  • 7-10: High coupling (refactor)
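A literal reading of the formula and bands above, as a sketch (helper names are invented for illustration):

```python
# Sketch of the coupling-score formula and its bands.
def coupling_score(fan_in: int, fan_out: int) -> float:
    return min(10.0, (fan_in + fan_out) / 3)

def coupling_band(score: float) -> str:
    if score <= 3:
        return "Low coupling (healthy)"
    if score <= 6:
        return "Moderate coupling (monitor)"
    return "High coupling (refactor)"
```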

Circular Dependency Detection

Signs of Circular Dependencies:

  1. Import errors at runtime
  2. Mysterious None values
  3. Files that always change together
  4. Cannot extract to separate package

Detection Approach:

A imports B
B imports C
C imports A  <- CIRCULAR

Resolution Strategies:

  1. Extract shared interface
  2. Dependency inversion (depend on abstractions)
  3. Merge tightly coupled modules
  4. Event-driven decoupling

Change Impact Analysis

Questions to Answer:

  1. If I modify this file, what breaks?
  2. Which files always change together?
  3. What is the blast radius of a refactor?

Measuring Impact:

  • Direct Impact: Files importing the changed module
  • Transitive Impact: Files importing those files
  • Co-Change Frequency: Git history of files changed together
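Direct and transitive impact can be computed with a breadth-first walk over a reverse-dependency map. The `dependents` graph shape (module -> modules that import it) and the `blast_radius` name are assumptions for illustration:

```python
# BFS over reverse dependencies: direct impact plus full transitive blast radius.
from collections import deque

def blast_radius(dependents: dict[str, list[str]], target: str):
    direct = set(dependents.get(target, []))
    seen, queue = set(direct), deque(direct)
    while queue:
        mod = queue.popleft()
        for dep in dependents.get(mod, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return direct, seen  # (direct impact, transitive impact)
```

For example, with `{"auth": ["api", "session"], "session": ["api"], "api": ["app"]}`, changing `auth` directly impacts `api` and `session`, and transitively reaches `app` as well.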

High Impact Indicators:

  • >5 direct dependents
  • >20 transitive dependents
  • >80% co-change frequency with another file
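Co-change counts can be derived from version-control history, e.g. the output of `git log --name-only`. The parser below is a simplified sketch: it assumes a pre-fetched log string where each commit is a hash line followed by the files it touched, with commits separated by blank lines (real `git log` output may need format tweaks).

```python
# Count how often other files appear in the same commits as `target`.
from collections import Counter

def parse_co_changes(log_text: str, target: str) -> Counter:
    counts: Counter = Counter()
    for block in log_text.strip().split("\n\n"):   # one block per commit
        files = block.splitlines()[1:]             # drop the hash line
        if target in files:
            counts.update(f for f in files if f != target)
    return counts
```

Dividing `counts[f]` by the number of commits touching `target` gives the co-change frequency used in the indicator list above.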

Exploration Report Template


Use this template for Phase 8 report generation.

# Exploration Report: $ARGUMENTS

## Quick Answer
[1-2 sentence summary]

## File Locations
| File | Purpose | Health Score |
|------|---------|--------------|
| `path/to/file.py` | [description] | [N.N/10] |

## Code Health Summary
| Dimension | Score | Notes |
|-----------|-------|-------|
| Readability | [N/10] | [note] |
| Maintainability | [N/10] | [note] |
| Testability | [N/10] | [note] |
| Complexity | [N/10] | [note] |
| Documentation | [N/10] | [note] |
| **Overall** | **[N.N/10]** | |

## Architecture Overview
[ASCII diagram]

## Dependency Hotspot Map

[Incoming deps] → [TARGET] → [Outgoing deps]

- **Coupling Score:** [N/10]
- **Fan-in:** [N] files depend on this
- **Fan-out:** [M] dependencies
- **Circular Dependencies:** [list or "None"]

## Data Flow
1. [Entry] → 2. [Processing] → 3. [Storage]

## Findability & Entry Points
| Entry Point | Why Start Here |
|-------------|----------------|
| `path/to/file.py` | [reason] |

**Search Keywords:** [keyword1], [keyword2], [keyword3]

## Product Context
- **Business Purpose:** [what problem this solves]
- **Primary Users:** [who uses this]
- **Documentation Gaps:** [what's missing]

## How to Modify
1. [Step 1]
2. [Step 2]

## Recommendations
1. [Health improvement]
2. [Findability improvement]
3. [Documentation improvement]

Findability Patterns


Improve code discoverability for developers exploring the codebase.

Naming Conventions for Searchability

| Pattern | Example | Searchability |
|---------|---------|---------------|
| Domain prefix | auth_login(), auth_logout() | Grep "auth_" finds all |
| Feature suffix | UserService, UserRepository | Grep "User" finds related |
| Action verbs | create_user, delete_order | Grep "create_" finds patterns |
| Consistent pluralization | users/, orders/ | Predictable directory names |

Anti-Patterns:

  • Abbreviations: usr, mgr, svc (hard to search)
  • Generic names: utils.py, helpers.js (too broad)
  • Inconsistent casing: getUserData, get_user_data

Documentation Placement

| Location | Purpose | Findability |
|----------|---------|-------------|
| README.md in directory | Module overview | First thing developers see |
| Inline docstrings | Function behavior | IDE tooltips, grep |
| docs/architecture/ | System design | High-level understanding |
| CLAUDE.md / CONTRIBUTING.md | Development guide | Onboarding entry |

Entry Point Strategy:

  1. Every directory should have a README or index
  2. Complex modules need architecture diagrams
  3. Public APIs need usage examples
  4. Workflows need sequence diagrams

Module Organization

Vertical Slice Architecture

features/
  auth/
    api.py          # Entry point
    service.py      # Business logic
    repository.py   # Data access
    models.py       # Domain models
    tests/          # Co-located tests

Benefits:

  • Related code together
  • Easy to find all auth-related files
  • Clear boundaries

Horizontal Layer Architecture

api/
  auth.py
  users.py
services/
  auth.py
  users.py

Benefits:

  • Technical cohesion
  • Easier cross-cutting concerns

Improving Discoverability

Quick Wins

  1. Add index files: Export public API from __init__.py or index.ts
  2. Use consistent prefixes: handle_, on_, create_, get_
  3. Create README per directory: Brief purpose + key files
  4. Tag with keywords: Add searchable comments for concepts
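The first quick win (index files) can be illustrated with a hypothetical `features/auth/__init__.py`; the submodule and symbol names (`api`, `service`, `login`, `AuthService`) are invented for the example:

```python
# Hypothetical features/auth/__init__.py re-exporting the package's public API,
# so `from features.auth import login` works and grep finds one entry point.
from .api import login, logout
from .service import AuthService

__all__ = ["login", "logout", "AuthService"]
```

This is a fragment, not a runnable module: it only works inside a package that actually provides those submodules.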

Search Optimization

# Keywords: authentication, login, JWT, OAuth, session
# See also: user_service.py, token_handler.py

Metadata in Files:

  • Related files cross-reference
  • Alternative terms for the concept
  • Links to documentation