Explore
Multi-angle codebase exploration spawning 3-5 parallel agents for code structure, data flow, architecture patterns, and health assessment. Generates ASCII visualizations, import graphs, and design pattern detection with cross-session memory storage. Use when exploring a repo, discovering architecture, onboarding to a new codebase, or analyzing design patterns.
/ork:explore
Codebase Exploration
Multi-angle codebase exploration using 3-5 parallel agents.
Quick Start
/ork:explore authentication
Opus 4.6: Exploration agents use native adaptive thinking for deeper pattern recognition across large codebases.
STEP 0: Verify User Intent with AskUserQuestion
BEFORE creating tasks, clarify what the user wants to explore:
AskUserQuestion(
questions=[{
"question": "What aspect do you want to explore?",
"header": "Focus",
"options": [
{"label": "Full exploration (Recommended)", "description": "Code structure + data flow + architecture + health assessment", "markdown": "```\nFull Exploration (8 phases)\n───────────────────────────\n 4 parallel explorer agents:\n ┌──────────┐ ┌──────────┐\n │ Structure│ │ Data │\n │ Explorer │ │ Flow │\n ├──────────┤ ├──────────┤\n │ Pattern │ │ Product │\n │ Analyst │ │ Context │\n └──────────┘ └──────────┘\n ▼\n ┌──────────────────────┐\n │ Code Health N/10 │\n │ Dep Hotspots map │\n │ Architecture diag │\n └──────────────────────┘\n Output: Full exploration report\n```"},
{"label": "Code structure only", "description": "Find files, classes, functions related to topic", "markdown": "```\nCode Structure\n──────────────\n Grep ──▶ Glob ──▶ Map\n\n Output:\n ├── File tree (relevant)\n ├── Key classes/functions\n ├── Import graph\n └── Entry points\n No agents — direct search\n```"},
{"label": "Data flow", "description": "Trace how data moves through the system", "markdown": "```\nData Flow Trace\n───────────────\n Input ──▶ Transform ──▶ Output\n │ │ │\n ▼ ▼ ▼\n [API] [Service] [DB/Cache]\n\n Traces: request lifecycle,\n state mutations, side effects\n Agent: 1 data-flow explorer\n```"},
{"label": "Architecture patterns", "description": "Identify design patterns and integrations", "markdown": "```\nArchitecture Analysis\n─────────────────────\n ┌─────────────────────┐\n │ Detected Patterns │\n │ ├── MVC / Hexagonal │\n │ ├── Event-driven? │\n │ ├── Service layers │\n │ └── External APIs │\n ├─────────────────────┤\n │ Integration Map │\n │ DB ←→ Cache ←→ Queue │\n └─────────────────────┘\n Agent: backend-system-architect\n```"},
{"label": "Quick search", "description": "Just find relevant files, skip deep analysis", "markdown": "```\nQuick Search (~30s)\n───────────────────\n Grep + Glob ──▶ File list\n\n Output:\n ├── Matching files\n ├── Line references\n └── Brief summary\n No agents, no health check,\n no report generation\n```"}
],
"multiSelect": false
}]
)
Based on the answer, adjust the workflow:
- Full exploration: All phases, all parallel agents
- Code structure only: Skip phases 5-7 (health, dependencies, product)
- Data flow: Focus phase 3 agents on data tracing
- Architecture patterns: Focus on backend-system-architect agent
- Quick search: Skip to phases 1-2 only, return file list
STEP 0b: Select Orchestration Mode
MCP Probe
ToolSearch(query="select:mcp__memory__search_nodes")
Write(".claude/chain/capabilities.json", { memory, timestamp })
if capabilities.memory:
mcp__memory__search_nodes({ query: "architecture decisions for {path}" })
# Enrich exploration with past decisions
Exploration Handoff
After exploration completes, write results for downstream skills:
Write(".claude/chain/exploration.json", JSON.stringify({
"phase": "explore", "skill": "explore",
"timestamp": now(), "status": "completed",
"outputs": {
"architecture_map": { ... },
"patterns_found": ["repository", "service-layer"],
"complexity_hotspots": ["src/auth/", "src/payments/"]
}
}))
Choose Agent Teams (mesh) or Task tool (star):
- Agent Teams mode (GA since CC 2.1.33) → recommended for 4+ agents
- Task tool mode → for quick/single-focus exploration
ORCHESTKIT_FORCE_TASK_TOOL=1 → Task tool (override)
| Aspect | Task Tool | Agent Teams |
|---|---|---|
| Discovery sharing | Lead synthesizes after all complete | Explorers share discoveries as they go |
| Cross-referencing | Lead connects dots | Data flow explorer alerts architecture explorer |
| Cost | ~150K tokens | ~400K tokens |
| Best for | Quick/focused searches | Deep full-codebase exploration |
Fallback: If Agent Teams encounters issues, fall back to Task tool for remaining exploration.
Task Management (MANDATORY)
BEFORE doing ANYTHING else, create tasks to show progress:
# 1. Create main task IMMEDIATELY
TaskCreate(subject="Explore: {topic}", description="Deep codebase exploration for {topic}", activeForm="Exploring {topic}")
# 2. Create subtasks for each phase
TaskCreate(subject="Initial file search", activeForm="Searching files") # id=2
TaskCreate(subject="Check knowledge graph", activeForm="Checking memory") # id=3
TaskCreate(subject="Launch exploration agents", activeForm="Dispatching explorers") # id=4
TaskCreate(subject="Assess code health (0-10)", activeForm="Assessing code health") # id=5
TaskCreate(subject="Map dependency hotspots", activeForm="Mapping dependencies") # id=6
TaskCreate(subject="Add product perspective", activeForm="Adding product context") # id=7
TaskCreate(subject="Generate exploration report", activeForm="Generating report") # id=8
# 3. Set dependencies for sequential phases
TaskUpdate(taskId="3", addBlockedBy=["2"]) # Memory check needs file search first
TaskUpdate(taskId="4", addBlockedBy=["3"]) # Agents need memory context
TaskUpdate(taskId="5", addBlockedBy=["4"]) # Health needs exploration done
TaskUpdate(taskId="6", addBlockedBy=["4"]) # Hotspots need exploration done
TaskUpdate(taskId="7", addBlockedBy=["4"]) # Product needs exploration done
TaskUpdate(taskId="8", addBlockedBy=["5", "6", "7"]) # Report needs all analysis done
# 4. Before starting each task, verify it's unblocked
task = TaskGet(taskId="2") # Verify blockedBy is empty
# 5. Update status as you progress
TaskUpdate(taskId="2", status="in_progress") # When starting
TaskUpdate(taskId="2", status="completed")  # When done — repeat for each subtask
Workflow Overview
| Phase | Activities | Output |
|---|---|---|
| 1. Initial Search | Grep, Glob for matches | File locations |
| 2. Memory Check | Search knowledge graph | Prior context |
| 3. Deep Exploration | 4 parallel explorers | Multi-angle analysis |
| 4. AI System (if applicable) | LangGraph, prompts, RAG | AI-specific findings |
| 5. Code Health | Rate code 0-10 | Quality scores |
| 6. Dependency Hotspots | Identify coupling | Hotspot visualization |
| 7. Product Perspective | Business context | Findability suggestions |
| 8. Report Generation | Compile findings | Actionable report |
Progressive Output (CC 2.1.76)
Output findings incrementally as each phase completes — don't batch until the report:
| After Phase | Show User |
|---|---|
| 1. Initial Search | File matches, grep results |
| 2. Memory Check | Prior decisions and relevant context |
| 3. Deep Exploration | Each explorer agent's findings as they return |
| 5. Code Health | Health score with dimension breakdown |
For Phase 3 parallel agents, output each agent's findings as soon as it returns — don't wait for all 4 explorers. Early findings from one agent may answer the user's question before remaining agents complete, allowing early termination.
Phase 1: Initial Search
# PARALLEL - Quick searches
Grep(pattern="$ARGUMENTS[0]", output_mode="files_with_matches")
Glob(pattern="**/*$ARGUMENTS[0]*")
Phase 2: Memory Check
mcp__memory__search_nodes(query="$ARGUMENTS[0]")
mcp__memory__search_nodes(query="architecture")
Phase 3: Parallel Deep Exploration (4 Agents)
Load Read("${CLAUDE_SKILL_DIR}/rules/exploration-agents.md") for Task tool mode prompts.
Load Read("${CLAUDE_SKILL_DIR}/rules/agent-teams-mode.md") for the Agent Teams alternative.
Phase 4: AI System Exploration (If Applicable)
For AI/ML topics, add exploration of: LangGraph workflows, prompt templates, RAG pipeline, caching strategies.
Phase 5: Code Health Assessment
Load Read("${CLAUDE_SKILL_DIR}/rules/code-health-assessment.md") for the agent prompt. Load Read("${CLAUDE_SKILL_DIR}/references/code-health-rubric.md") for scoring criteria.
Phase 6: Dependency Hotspot Map
Load Read("${CLAUDE_SKILL_DIR}/rules/dependency-hotspot-analysis.md") for the agent prompt. Load Read("${CLAUDE_SKILL_DIR}/references/dependency-analysis.md") for metrics.
Phase 7: Product Perspective
Load Read("${CLAUDE_SKILL_DIR}/rules/product-perspective.md") for the agent prompt. Load Read("${CLAUDE_SKILL_DIR}/references/findability-patterns.md") for best practices.
Phase 8: Generate Report
Load Read("${CLAUDE_SKILL_DIR}/references/exploration-report-template.md").
Common Exploration Queries
- "How does authentication work?"
- "Where are API endpoints defined?"
- "Find all usages of EventBroadcaster"
- "What's the workflow for content analysis?"
Related Skills
ork:implement: Implement after exploration
Version: 2.4.0 (April 2026) — Fork-eligible agents for 30-50% cost reduction (#1227)
Rules (5)
Coordinate multi-agent exploration teams with real-time discovery sharing — HIGH
Agent Teams Mode
In Agent Teams mode, form an exploration team where explorers share discoveries in real-time:
TeamCreate(team_name="explore-{topic}", description="Explore {topic}")
Agent(subagent_type="Explore", name="structure-explorer",
team_name="explore-{topic}",
prompt="""Find all files, classes, and functions related to: {topic}
When you discover key entry points, message data-flow-explorer so they
can trace data paths from those points.
When you find backend patterns, message backend-explorer.
When you find frontend components, message frontend-explorer.""")
Agent(subagent_type="Explore", name="data-flow-explorer",
team_name="explore-{topic}",
prompt="""Trace entry points, processing, and storage for: {topic}
When structure-explorer shares entry points, start tracing from those.
When you discover cross-boundary data flows (frontend→backend or vice versa),
message both backend-explorer and frontend-explorer.""")
Agent(subagent_type="backend-system-architect", name="backend-explorer",
team_name="explore-{topic}",
prompt="""Analyze backend architecture patterns for: {topic}
When structure-explorer or data-flow-explorer share backend findings,
investigate deeper — API design, database schema, service patterns.
Share integration points with frontend-explorer for consistency.""")
Agent(subagent_type="frontend-ui-developer", name="frontend-explorer",
team_name="explore-{topic}",
prompt="""Analyze frontend components, state, and routes for: {topic}
When structure-explorer shares component locations, investigate deeper.
When backend-explorer shares API patterns, verify frontend alignment.
Share component hierarchy with data-flow-explorer.""")
Team Teardown
After report generation:
SendMessage(type="shutdown_request", recipient="structure-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="data-flow-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="backend-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="frontend-explorer", content="Exploration complete")
TeamDelete()
# Worktree cleanup (CC 2.1.72)
ExitWorktree(action="keep")
Fallback: If team formation fails, use standard Task tool spawns. See exploration-agents.md.
Incorrect — Sequential exploration without coordination:
Agent(subagent_type="Explore", prompt="Find auth files")
# Wait for result...
Agent(subagent_type="Explore", prompt="Trace auth data flow")
# Sequential, no sharing between agents
Correct — Team mode with real-time discovery sharing:
TeamCreate(team_name="explore-auth")
Agent(subagent_type="Explore", name="structure-explorer",
team_name="explore-auth",
prompt="Find auth files. Message data-flow-explorer with entry points.")
Agent(subagent_type="Explore", name="data-flow-explorer",
team_name="explore-auth",
prompt="When structure-explorer shares entry points, trace data flows.")
# Parallel execution, coordinated via messages
Score code health across five quality dimensions with structured assessment criteria — MEDIUM
Code Health Assessment
Rate found code quality 0-10 with specific dimensions. See code-health-rubric.md for scoring criteria.
Agent(
subagent_type="code-quality-reviewer",
prompt="""CODE HEALTH ASSESSMENT for files related to: $ARGUMENTS
Rate each dimension 0-10:
1. READABILITY (0-10)
- Clear naming conventions?
- Appropriate comments?
- Logical organization?
2. MAINTAINABILITY (0-10)
- Single responsibility?
- Low coupling?
- Easy to modify?
3. TESTABILITY (0-10)
- Pure functions where possible?
- Dependency injection?
- Existing test coverage?
4. COMPLEXITY (0-10, inverted: 10=simple, 0=complex)
- Cyclomatic complexity?
- Nesting depth?
- Function length?
5. DOCUMENTATION (0-10)
- API docs present?
- Usage examples?
- Architecture notes?
Output:
{
"overall_score": N.N,
"dimensions": {
"readability": N,
"maintainability": N,
"testability": N,
"complexity": N,
"documentation": N
},
"hotspots": ["file:line - issue"],
"recommendations": ["improvement suggestion"]
}
SUMMARY: End with: "HEALTH: [N.N]/10 - [best dimension] strong, [worst dimension] needs work"
""",
run_in_background=True,
max_turns=25
)
Incorrect — Vague code quality feedback:
Code Review: The code looks okay. Some parts are complex.
Maybe add more tests.
Correct — Structured health assessment with scores:
{
"overall_score": 6.2,
"dimensions": {
"readability": 8,
"maintainability": 5,
"testability": 4,
"complexity": 6,
"documentation": 8
},
"hotspots": [
"auth.ts:45 - nested if/else 5 levels deep",
"utils.ts:120 - 200-line function, no SRP"
],
"recommendations": [
"Extract auth.ts:45-80 to separate validation functions",
"Add unit tests for utils.ts edge cases"
]
}
Identify highly-coupled code and dependency bottlenecks to reduce change risk — MEDIUM
Dependency Hotspot Analysis
Identify highly-coupled code and dependency bottlenecks. See dependency-analysis.md for metrics and formulas.
Agent(
subagent_type="backend-system-architect",
prompt="""DEPENDENCY HOTSPOT ANALYSIS for: $ARGUMENTS
Analyze coupling and dependencies:
1. IMPORT ANALYSIS
- Which files import this code?
- What does this code import?
- Circular dependencies?
2. COUPLING SCORE (0-10, 10=highly coupled)
- How many files would break if this changes?
- Fan-in (incoming dependencies)
- Fan-out (outgoing dependencies)
3. CHANGE IMPACT
- Blast radius of modifications
- Files that always change together
4. HOTSPOT VISUALIZATION
   [Module A] --depends--> [Target] <--depends-- [Module B]
                              |
                              v
                         [Module C]
Output:
{
"coupling_score": N,
"fan_in": N,
"fan_out": N,
"circular_deps": [],
"change_impact": ["file - reason"],
"hotspot_diagram": "ASCII diagram"
}
SUMMARY: End with: "COUPLING: [N]/10 - [N] incoming, [M] outgoing deps - [key concern]"
""",
run_in_background=True,
max_turns=25
)
Incorrect — Listing imports without analysis:
auth.ts imports:
- utils.ts
- config.ts
- db.ts
Correct — Hotspot analysis with coupling score:
{
"coupling_score": 8,
"fan_in": 12,
"fan_out": 5,
"circular_deps": ["auth.ts → user.ts → auth.ts"],
"change_impact": [
"auth.ts change breaks 12 files",
"utils.ts and auth.ts always change together"
],
"hotspot_diagram": "
[12 files] --depend on--> [auth.ts]
|
depends on
v
[utils, config, db, user, session]
"
}
Spawn parallel exploration agents using Task tool for concurrent codebase analysis — HIGH
Exploration Agents (Task Tool Mode)
Launch 4 specialized explorers in ONE message with run_in_background: true:
# PARALLEL - All 4 in ONE message
Agent(
subagent_type="Explore",
prompt="""Code Structure: Find all files, classes, functions related to: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [N] files, [M] classes - [key location, e.g., 'src/auth/']"
""",
run_in_background=True,
max_turns=25
)
Agent(
subagent_type="Explore",
prompt="""Data Flow: Trace entry points, processing, storage for: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [entry] → [processing] → [storage] - [N] hop flow"
""",
run_in_background=True,
max_turns=25
)
Agent(
subagent_type="backend-system-architect",
prompt="""Backend Patterns: Analyze architecture patterns, integrations, dependencies for: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [pattern name] - [N] integrations, [M] dependencies"
""",
run_in_background=True,
max_turns=25
)
Agent(
subagent_type="frontend-ui-developer",
prompt="""Frontend Analysis: Find components, state management, routes for: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [N] components, [state lib] - [key route]"
""",
run_in_background=True,
max_turns=25
)
Fork Pattern (CC 2.1.89 — #1227)
These agents are fork-eligible: short prompts (<500 words), no custom model, no worktree isolation. CC automatically shares the parent's cached API prefix across all 4 forks, reducing cost by ~60%.
See chain-patterns/references/fork-pattern.md for full details.
Do NOT add model= or isolation="worktree" to these agents — it breaks cache sharing.
Explorer Roles
- Code Structure Explorer - Files, classes, functions
- Data Flow Explorer - Entry points, processing, storage
- Backend Architect - Patterns, integration, dependencies
- Frontend Developer - Components, state, routes
Incorrect — Sequential exploration:
Agent(subagent_type="Explore", prompt="Find auth files")
# Wait...
Agent(subagent_type="Explore", prompt="Trace auth flow")
# Wait...
Agent(subagent_type="backend-system-architect", prompt="Analyze patterns")
# Slow, sequential
Correct — Parallel exploration in one message:
# All 4 in ONE message with run_in_background: true
Agent(subagent_type="Explore", prompt="Code Structure: Find all files related to auth",
run_in_background=True, max_turns=25)
Agent(subagent_type="Explore", prompt="Data Flow: Trace auth entry→storage",
run_in_background=True, max_turns=25)
Agent(subagent_type="backend-system-architect", prompt="Backend Patterns: Analyze auth architecture",
run_in_background=True, max_turns=25)
Agent(subagent_type="frontend-ui-developer", prompt="Frontend: Find auth components",
run_in_background=True, max_turns=25)
# Parallel execution
Add business context and findability analysis to technical codebase exploration — MEDIUM
Product Perspective
Add business context and findability suggestions. See findability-patterns.md for discoverability best practices.
Agent(
subagent_type="product-strategist",
prompt="""PRODUCT PERSPECTIVE for: $ARGUMENTS
Analyze from a product/business viewpoint:
1. BUSINESS CONTEXT
- What user problem does this code solve?
- What feature/capability does it enable?
- Who are the users of this code?
2. FINDABILITY SUGGESTIONS
- Better naming for discoverability?
- Missing documentation entry points?
- Where should someone look first?
3. KNOWLEDGE GAPS
- What context is missing for new developers?
- What tribal knowledge exists?
- What should be documented?
4. SEARCH OPTIMIZATION
- Keywords someone might use to find this
- Alternative terms for the same concept
- Related concepts to cross-reference
Output:
{
"business_purpose": "description",
"primary_users": ["user type"],
"findability_issues": ["issue - suggestion"],
"recommended_entry_points": ["file - why start here"],
"search_keywords": ["keyword"],
"documentation_gaps": ["gap"]
}
SUMMARY: End with: "FINDABILITY: [N] issues - start at [recommended entry point]"
""",
run_in_background=True,
max_turns=25)
Incorrect — Technical analysis without business context:
Found auth.ts, user.ts, session.ts
Uses JWT tokens, bcrypt hashing
Database: PostgreSQL users table
Correct — Product perspective with findability:
{
"business_purpose": "Secure user authentication and session management",
"primary_users": ["End users logging in", "Developers integrating auth"],
"findability_issues": [
"auth.ts - generic name, try auth/core.ts",
"Missing README in auth/ - devs don't know where to start"
],
"recommended_entry_points": [
"auth/README.md (missing - create this!)",
"auth/core.ts - main authentication flow"
],
"search_keywords": ["login", "authentication", "session", "JWT", "security"],
"documentation_gaps": [
"No auth flow diagram",
"Token refresh logic undocumented"
]
}
References (4)
Code Health Rubric
Standardized 0-10 scoring criteria for assessing code quality across five dimensions.
Scoring Scale
| Score | Rating | Description |
|---|---|---|
| 9-10 | Excellent | Production-ready, exemplary code |
| 7-8 | Good | Minor improvements possible |
| 5-6 | Adequate | Functional but needs attention |
| 3-4 | Poor | Significant issues, refactor recommended |
| 0-2 | Critical | Major problems, immediate action required |
1. Readability (0-10)
| Score | Criteria |
|---|---|
| 10 | Self-documenting, intuitive naming, perfect structure |
| 7-8 | Clear names, logical flow, minimal cognitive load |
| 5-6 | Understandable with effort, some unclear sections |
| 3-4 | Confusing logic, poor naming, requires context |
| 0-2 | Incomprehensible, magic numbers, no conventions |
2. Maintainability (0-10)
| Score | Criteria |
|---|---|
| 10 | SRP adherence, loose coupling, DRY, easy to modify |
| 7-8 | Good separation, minor duplication, clear boundaries |
| 5-6 | Some coupling, moderate duplication, changes ripple |
| 3-4 | High coupling, significant duplication, fragile |
| 0-2 | Spaghetti code, any change breaks multiple areas |
3. Testability (0-10)
| Score | Criteria |
|---|---|
| 10 | Pure functions, DI, 90%+ coverage, mocks easy |
| 7-8 | Most logic testable, some DI, 70%+ coverage |
| 5-6 | Testable with effort, some hidden dependencies |
| 3-4 | Hard to isolate, global state, 30% coverage |
| 0-2 | Untestable, tightly coupled, no test infrastructure |
4. Complexity (0-10, inverted: 10=simple)
| Score | Criteria |
|---|---|
| 10 | Cyclomatic <5, max 2 nesting, <20 line functions |
| 7-8 | Cyclomatic 5-10, 3 nesting, <40 line functions |
| 5-6 | Cyclomatic 10-15, 4 nesting, some long functions |
| 3-4 | Cyclomatic 15-25, deep nesting, 100+ line functions |
| 0-2 | Cyclomatic >25, 6+ nesting, god functions |
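The cyclomatic thresholds above can be estimated mechanically. A minimal sketch using Python's `ast` module — a rough approximation of the McCabe metric, not the full definition (it counts only the common branch points, and assumes no nested function definitions):

```python
import ast

# Branch points counted toward the approximate McCabe score.
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def approx_complexity(source: str) -> dict:
    """Rough per-function cyclomatic complexity: 1 + number of branch points."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            count = 1
            for child in ast.walk(node):
                if isinstance(child, BRANCH_NODES):
                    count += 1
                elif isinstance(child, ast.BoolOp):
                    # `a and b and c` adds len(values) - 1 decision points
                    count += len(child.values) - 1
            scores[node.name] = count
    return scores
```

A function scoring above ~15 here is a candidate for the 3-4 band in the table above; treat the number as a triage signal, not a verdict.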
5. Documentation (0-10)
| Score | Criteria |
|---|---|
| 10 | Complete API docs, examples, architecture notes |
| 7-8 | Public API documented, inline comments where needed |
| 5-6 | Some docstrings, missing edge cases |
| 3-4 | Sparse comments, outdated documentation |
| 0-2 | No documentation, misleading comments |
Overall Score Calculation
overall = (readability + maintainability + testability + complexity + documentation) / 5
Score Interpretation:
- 8.0+: Ship it
- 6.0-7.9: Acceptable, plan improvements
- 4.0-5.9: Technical debt, prioritize refactoring
- <4.0: Stop and fix before proceeding
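The averaging and the interpretation bands can be expressed directly — a minimal sketch (the function name is illustrative):

```python
def overall_health(dimensions: dict) -> tuple:
    """Average the five 0-10 dimension scores and map to an action band."""
    score = sum(dimensions.values()) / len(dimensions)
    if score >= 8.0:
        action = "Ship it"
    elif score >= 6.0:
        action = "Acceptable, plan improvements"
    elif score >= 4.0:
        action = "Technical debt, prioritize refactoring"
    else:
        action = "Stop and fix before proceeding"
    return round(score, 1), action
```

For the sample assessment shown earlier in this skill (8, 5, 4, 6, 8), this yields 6.2 and "Acceptable, plan improvements".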
Dependency Analysis
Identify coupling hotspots and dependency patterns in codebases.
Fan-In / Fan-Out Metrics
| Metric | Definition | Implication |
|---|---|---|
| Fan-In | Files that import this module | High = many dependents, changes risky |
| Fan-Out | Modules this file imports | High = many dependencies, fragile |
| Instability | Fan-Out / (Fan-In + Fan-Out) | 0 = stable, 1 = unstable |
Ideal Patterns:
- Core utilities: High fan-in, low fan-out (stable)
- Feature modules: Low fan-in, moderate fan-out
- Entry points: Low fan-in, high fan-out
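The fan metrics and instability can be computed from a plain import map (module → list of modules it imports); a minimal sketch:

```python
from collections import defaultdict

def fan_metrics(imports: dict) -> dict:
    """Per-module fan-in, fan-out, and instability I = fan_out / (fan_in + fan_out)."""
    fan_in = defaultdict(int)
    for deps in imports.values():
        for dep in deps:
            fan_in[dep] += 1
    metrics = {}
    for mod in set(imports) | set(fan_in):
        fi, fo = fan_in[mod], len(imports.get(mod, []))
        total = fi + fo
        metrics[mod] = {
            "fan_in": fi,
            "fan_out": fo,
            # 0 = maximally stable, 1 = maximally unstable
            "instability": fo / total if total else 0.0,
        }
    return metrics
```

On this view, a healthy core utility shows high fan-in with instability near 0, and an entry point shows instability near 1.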
Hotspot Identification
High-Risk Indicators
| Pattern | Risk | Action |
|---|---|---|
| Fan-in > 10 | Blast radius large | Add interface/abstraction |
| Fan-out > 8 | Too many dependencies | Extract facades |
| Instability = 1, Fan-in > 5 | Unstable core | Stabilize or decouple |
Coupling Score Formula
coupling_score = min(10, (fan_in + fan_out) / 3)
- 0-3: Low coupling (healthy)
- 4-6: Moderate coupling (monitor)
- 7-10: High coupling (refactor)
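One literal reading of the formula, rounded to an integer for reporting and mapped to the bands above (a sketch):

```python
def coupling_score(fan_in: int, fan_out: int) -> tuple:
    """coupling_score = min(10, (fan_in + fan_out) / 3), rounded for reporting."""
    score = min(10, round((fan_in + fan_out) / 3))
    if score <= 3:
        band = "Low coupling (healthy)"
    elif score <= 6:
        band = "Moderate coupling (monitor)"
    else:
        band = "High coupling (refactor)"
    return score, band
```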
Circular Dependency Detection
Signs of Circular Dependencies:
- Import errors at runtime
- Mysterious `None` values
- Files that always change together
- Cannot extract to a separate package
Detection Approach:
A imports B
B imports C
C imports A  <- CIRCULAR
Resolution Strategies:
- Extract shared interface
- Dependency inversion (depend on abstractions)
- Merge tightly coupled modules
- Event-driven decoupling
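The detection approach above amounts to a depth-first search over the import graph. A minimal sketch, fine for small graphs (each distinct cycle is reported once):

```python
def find_cycles(imports: dict) -> list:
    """DFS over an import map; returns each cycle as a path ending at its start."""
    cycles, seen = [], set()

    def dfs(node, path):
        if node in path:
            cycle = path[path.index(node):] + [node]
            key = frozenset(cycle)
            if key not in seen:  # report each cycle only once
                seen.add(key)
                cycles.append(cycle)
            return
        for dep in imports.get(node, []):
            dfs(dep, path + [node])

    for start in imports:
        dfs(start, [])
    return cycles
```

For the A → B → C → A example above, this returns the single three-module cycle.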
Change Impact Analysis
Questions to Answer:
- If I modify this file, what breaks?
- Which files always change together?
- What is the blast radius of a refactor?
Measuring Impact:
- Direct Impact: Files importing the changed module
- Transitive Impact: Files importing those files
- Co-Change Frequency: Git history of files changed together
High Impact Indicators:
- >5 direct dependents
- >20 transitive dependents
- >80% co-change frequency with another file
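Co-change frequency can be mined from git history. A sketch (the `__COMMIT__` separator and function names are illustrative; parsing is split from the `git` call so it can be tested without a repository):

```python
import subprocess
from collections import Counter
from itertools import combinations

def parse_co_changes(log_text: str) -> Counter:
    """Count how often each file pair appears in the same commit."""
    pairs = Counter()
    for block in log_text.split("__COMMIT__"):
        files = sorted({line for line in block.splitlines() if line.strip()})
        for a, b in combinations(files, 2):
            pairs[(a, b)] += 1
    return pairs

def co_change_counts(repo_path=".", max_commits=500) -> Counter:
    """Run `git log --name-only` and tally co-changed file pairs."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{max_commits}",
         "--name-only", "--pretty=format:__COMMIT__"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_co_changes(out)
```

Pairs with a high count relative to total commits touching either file are the "always change together" candidates.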
Exploration Report Template
Use this template for Phase 8 report generation.
# Exploration Report: $ARGUMENTS
## Quick Answer
[1-2 sentence summary]
## File Locations
| File | Purpose | Health Score |
|------|---------|--------------|
| `path/to/file.py` | [description] | [N.N/10] |
## Code Health Summary
| Dimension | Score | Notes |
|-----------|-------|-------|
| Readability | [N/10] | [note] |
| Maintainability | [N/10] | [note] |
| Testability | [N/10] | [note] |
| Complexity | [N/10] | [note] |
| Documentation | [N/10] | [note] |
| **Overall** | **[N.N/10]** | |
## Architecture Overview
[ASCII diagram]
## Dependency Hotspot Map
[Incoming deps] → [TARGET] → [Outgoing deps]
- **Coupling Score:** [N/10]
- **Fan-in:** [N] files depend on this
- **Fan-out:** [M] dependencies
- **Circular Dependencies:** [list or "None"]
## Data Flow
1. [Entry] → 2. [Processing] → 3. [Storage]
## Findability & Entry Points
| Entry Point | Why Start Here |
|-------------|----------------|
| `path/to/file.py` | [reason] |
**Search Keywords:** [keyword1], [keyword2], [keyword3]
## Product Context
- **Business Purpose:** [what problem this solves]
- **Primary Users:** [who uses this]
- **Documentation Gaps:** [what's missing]
## How to Modify
1. [Step 1]
2. [Step 2]
## Recommendations
1. [Health improvement]
2. [Findability improvement]
3. [Documentation improvement]
Findability Patterns
Improve code discoverability for developers exploring the codebase.
Naming Conventions for Searchability
| Pattern | Example | Searchability |
|---|---|---|
| Domain prefix | auth_login(), auth_logout() | Grep "auth_" finds all |
| Feature suffix | UserService, UserRepository | Grep "User" finds related |
| Action verbs | create_user, delete_order | Grep "create_" finds patterns |
| Consistent pluralization | users/, orders/ | Predictable directory names |
Anti-Patterns:
- Abbreviations: `usr`, `mgr`, `svc` (hard to search)
- Generic names: `utils.py`, `helpers.js` (too broad)
- Inconsistent casing: mixing `getUserData` and `get_user_data`
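The domain-prefix convention can be checked mechanically — a minimal sketch (Python files only, regex-based; the function name is illustrative):

```python
import re
from pathlib import Path

def find_by_prefix(root: str, prefix: str) -> list:
    """List (path, line, name) for every `def <prefix>...` definition under root."""
    pattern = re.compile(rf"def\s+({re.escape(prefix)}\w*)\s*\(")
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), 1):
            match = pattern.search(line)
            if match:
                hits.append((str(path), lineno, match.group(1)))
    return hits
```

If `find_by_prefix(".", "auth_")` returns nothing for a codebase that clearly has auth logic, the naming convention is failing its searchability purpose.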
Documentation Placement
| Location | Purpose | Findability |
|---|---|---|
| `README.md` in directory | Module overview | First thing developers see |
| Inline docstrings | Function behavior | IDE tooltips, grep |
| `docs/architecture/` | System design | High-level understanding |
| `CLAUDE.md` / `CONTRIBUTING.md` | Development guide | Onboarding entry |
Entry Point Strategy:
- Every directory should have a README or index
- Complex modules need architecture diagrams
- Public APIs need usage examples
- Workflows need sequence diagrams
Module Organization
Vertical Slice Architecture
features/
auth/
api.py # Entry point
service.py # Business logic
repository.py # Data access
models.py # Domain models
tests/ # Co-located tests
Benefits:
- Related code together
- Easy to find all auth-related files
- Clear boundaries
Horizontal Layer Architecture
api/
auth.py
users.py
services/
auth.py
users.py
Benefits:
- Technical cohesion
- Easier cross-cutting concerns
Improving Discoverability
Quick Wins
- Add index files: export the public API from `__init__.py` or `index.ts`
- Use consistent prefixes: `handle_`, `on_`, `create_`, `get_`
- Create README per directory: brief purpose + key files
- Tag with keywords: Add searchable comments for concepts
Search Optimization
# Keywords: authentication, login, JWT, OAuth, session
# See also: user_service.py, token_handler.py
Metadata in Files:
- Related files cross-reference
- Alternative terms for the concept
- Links to documentation