Explore
explore — Deep codebase exploration with parallel agents. Use when exploring a repo, discovering architecture, finding files, or analyzing design patterns.
Codebase Exploration
Multi-angle codebase exploration using 3-5 parallel agents.
Quick Start
/ork:explore authentication

Opus 4.6: Exploration agents use native adaptive thinking for deeper pattern recognition across large codebases.
STEP 0: Verify User Intent with AskUserQuestion
BEFORE creating tasks, clarify what the user wants to explore:
AskUserQuestion(
questions=[{
"question": "What aspect do you want to explore?",
"header": "Focus",
"options": [
{"label": "Full exploration (Recommended)", "description": "Code structure + data flow + architecture + health assessment"},
{"label": "Code structure only", "description": "Find files, classes, functions related to topic"},
{"label": "Data flow", "description": "Trace how data moves through the system"},
{"label": "Architecture patterns", "description": "Identify design patterns and integrations"},
{"label": "Quick search", "description": "Just find relevant files, skip deep analysis"}
],
"multiSelect": false
}]
)

Based on the answer, adjust the workflow:
- Full exploration: All phases, all parallel agents
- Code structure only: Skip phases 5-7 (health, dependencies, product)
- Data flow: Focus phase 3 agents on data tracing
- Architecture patterns: Focus on backend-system-architect agent
- Quick search: Skip to phases 1-2 only, return file list
STEP 0b: Select Orchestration Mode
Choose Agent Teams (mesh) or Task tool (star):
- `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` → Agent Teams mode
- Agent Teams unavailable → Task tool mode (default)
- Full exploration with 4+ agents → recommend Agent Teams; Quick/single-focus → Task tool
| Aspect | Task Tool | Agent Teams |
|---|---|---|
| Discovery sharing | Lead synthesizes after all complete | Explorers share discoveries as they go |
| Cross-referencing | Lead connects dots | Data flow explorer alerts architecture explorer |
| Cost | ~150K tokens | ~400K tokens |
| Best for | Quick/focused searches | Deep full-codebase exploration |
Fallback: If Agent Teams encounters issues, fall back to Task tool for remaining exploration.
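The selection rules above can be sketched as a small helper (illustrative only; the function name and return values are assumptions, not part of any tool API — only the `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` flag comes from this document):

```python
import os

def select_orchestration_mode(agent_count: int, full_exploration: bool) -> str:
    """Pick Agent Teams (mesh) or Task tool (star) per the rules above."""
    teams_enabled = os.environ.get("CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS") == "1"
    if not teams_enabled:
        return "task-tool"       # default when Agent Teams is unavailable
    if full_exploration and agent_count >= 4:
        return "agent-teams"     # deep, full-codebase exploration
    return "task-tool"           # quick or single-focus searches
```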
Task Management (MANDATORY)
BEFORE doing ANYTHING else, create tasks to show progress:
TaskCreate(subject="Explore: {topic}", description="Deep codebase exploration for {topic}", activeForm="Exploring {topic}")
TaskCreate(subject="Initial file search", activeForm="Searching files")
TaskCreate(subject="Check knowledge graph", activeForm="Checking memory")
TaskCreate(subject="Launch exploration agents", activeForm="Dispatching explorers")
TaskCreate(subject="Assess code health (0-10)", activeForm="Assessing code health")
TaskCreate(subject="Map dependency hotspots", activeForm="Mapping dependencies")
TaskCreate(subject="Add product perspective", activeForm="Adding product context")
TaskCreate(subject="Generate exploration report", activeForm="Generating report")

Workflow Overview
| Phase | Activities | Output |
|---|---|---|
| 1. Initial Search | Grep, Glob for matches | File locations |
| 2. Memory Check | Search knowledge graph | Prior context |
| 3. Deep Exploration | 4 parallel explorers | Multi-angle analysis |
| 4. AI System (if applicable) | LangGraph, prompts, RAG | AI-specific findings |
| 5. Code Health | Rate code 0-10 | Quality scores |
| 6. Dependency Hotspots | Identify coupling | Hotspot visualization |
| 7. Product Perspective | Business context | Findability suggestions |
| 8. Report Generation | Compile findings | Actionable report |
Phase 1: Initial Search
# PARALLEL - Quick searches
Grep(pattern="$ARGUMENTS", output_mode="files_with_matches")
Glob(pattern="**/*$ARGUMENTS*")

Phase 2: Memory Check
mcp__memory__search_nodes(query="$ARGUMENTS")
mcp__memory__search_nodes(query="architecture")

Phase 3: Parallel Deep Exploration (4 Agents)
See Exploration Agents for Task tool mode prompts.
See Agent Teams Mode for Agent Teams alternative.
Phase 4: AI System Exploration (If Applicable)
For AI/ML topics, add exploration of: LangGraph workflows, prompt templates, RAG pipeline, caching strategies.
Phase 5: Code Health Assessment
See Code Health Assessment for agent prompt. See Code Health Rubric for scoring criteria.
Phase 6: Dependency Hotspot Map
See Dependency Hotspot Analysis for agent prompt. See Dependency Analysis for metrics.
Phase 7: Product Perspective
See Product Perspective for agent prompt. See Findability Patterns for best practices.
Phase 8: Generate Report
See Exploration Report Template.
Common Exploration Queries
- "How does authentication work?"
- "Where are API endpoints defined?"
- "Find all usages of EventBroadcaster"
- "What's the workflow for content analysis?"
Related Skills
ork:implement: Implement after exploration
Version: 2.1.0 (February 2026)
Rules (5)
Coordinate multi-agent exploration teams with real-time discovery sharing — HIGH
Agent Teams Mode
In Agent Teams mode, form an exploration team where explorers share discoveries in real-time:
TeamCreate(team_name="explore-{topic}", description="Explore {topic}")
Task(subagent_type="Explore", name="structure-explorer",
team_name="explore-{topic}",
prompt="""Find all files, classes, and functions related to: {topic}
When you discover key entry points, message data-flow-explorer so they
can trace data paths from those points.
When you find backend patterns, message backend-explorer.
When you find frontend components, message frontend-explorer.""")
Task(subagent_type="Explore", name="data-flow-explorer",
team_name="explore-{topic}",
prompt="""Trace entry points, processing, and storage for: {topic}
When structure-explorer shares entry points, start tracing from those.
When you discover cross-boundary data flows (frontend→backend or vice versa),
message both backend-explorer and frontend-explorer.""")
Task(subagent_type="backend-system-architect", name="backend-explorer",
team_name="explore-{topic}",
prompt="""Analyze backend architecture patterns for: {topic}
When structure-explorer or data-flow-explorer share backend findings,
investigate deeper — API design, database schema, service patterns.
Share integration points with frontend-explorer for consistency.""")
Task(subagent_type="frontend-ui-developer", name="frontend-explorer",
team_name="explore-{topic}",
prompt="""Analyze frontend components, state, and routes for: {topic}
When structure-explorer shares component locations, investigate deeper.
When backend-explorer shares API patterns, verify frontend alignment.
Share component hierarchy with data-flow-explorer.""")

Team Teardown
After report generation:
SendMessage(type="shutdown_request", recipient="structure-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="data-flow-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="backend-explorer", content="Exploration complete")
SendMessage(type="shutdown_request", recipient="frontend-explorer", content="Exploration complete")
TeamDelete()

Fallback: If team formation fails, use standard Task tool spawns. See exploration-agents.md.
Incorrect — Sequential exploration without coordination:
Task(subagent_type="Explore", prompt="Find auth files")
# Wait for result...
Task(subagent_type="Explore", prompt="Trace auth data flow")
# Sequential, no sharing between agents

Correct — Team mode with real-time discovery sharing:
TeamCreate(team_name="explore-auth")
Task(subagent_type="Explore", name="structure-explorer",
team_name="explore-auth",
prompt="Find auth files. Message data-flow-explorer with entry points.")
Task(subagent_type="Explore", name="data-flow-explorer",
team_name="explore-auth",
prompt="When structure-explorer shares entry points, trace data flows.")
# Parallel execution, coordinated via messages

Score code health across five quality dimensions with structured assessment criteria — MEDIUM
Code Health Assessment
Rate found code quality 0-10 with specific dimensions. See code-health-rubric.md for scoring criteria.
Task(
subagent_type="code-quality-reviewer",
prompt="""CODE HEALTH ASSESSMENT for files related to: $ARGUMENTS
Rate each dimension 0-10:
1. READABILITY (0-10)
- Clear naming conventions?
- Appropriate comments?
- Logical organization?
2. MAINTAINABILITY (0-10)
- Single responsibility?
- Low coupling?
- Easy to modify?
3. TESTABILITY (0-10)
- Pure functions where possible?
- Dependency injection?
- Existing test coverage?
4. COMPLEXITY (0-10, inverted: 10=simple, 0=complex)
- Cyclomatic complexity?
- Nesting depth?
- Function length?
5. DOCUMENTATION (0-10)
- API docs present?
- Usage examples?
- Architecture notes?
Output:
{
"overall_score": N.N,
"dimensions": {
"readability": N,
"maintainability": N,
"testability": N,
"complexity": N,
"documentation": N
},
"hotspots": ["file:line - issue"],
"recommendations": ["improvement suggestion"]
}
SUMMARY: End with: "HEALTH: [N.N]/10 - [best dimension] strong, [worst dimension] needs work"
""",
run_in_background=True,
max_turns=25
)

Incorrect — Vague code quality feedback:
Code Review: The code looks okay. Some parts are complex.
Maybe add more tests.

Correct — Structured health assessment with scores:
{
"overall_score": 6.2,
"dimensions": {
"readability": 8,
"maintainability": 5,
"testability": 4,
"complexity": 6,
"documentation": 8
},
"hotspots": [
"auth.ts:45 - nested if/else 5 levels deep",
"utils.ts:120 - 200-line function, no SRP"
],
"recommendations": [
"Extract auth.ts:45-80 to separate validation functions",
"Add unit tests for utils.ts edge cases"
]
}

Identify highly-coupled code and dependency bottlenecks to reduce change risk — MEDIUM
Dependency Hotspot Analysis
Identify highly-coupled code and dependency bottlenecks. See dependency-analysis.md for metrics and formulas.
Task(
subagent_type="backend-system-architect",
prompt="""DEPENDENCY HOTSPOT ANALYSIS for: $ARGUMENTS
Analyze coupling and dependencies:
1. IMPORT ANALYSIS
- Which files import this code?
- What does this code import?
- Circular dependencies?
2. COUPLING SCORE (0-10, 10=highly coupled)
- How many files would break if this changes?
- Fan-in (incoming dependencies)
- Fan-out (outgoing dependencies)
3. CHANGE IMPACT
- Blast radius of modifications
- Files that always change together
4. HOTSPOT VISUALIZATION

[Module A] --depends--> [Target] <--depends-- [Module B]
                           |
                           v
                      [Module C]
Output:
{
"coupling_score": N,
"fan_in": N,
"fan_out": N,
"circular_deps": [],
"change_impact": ["file - reason"],
"hotspot_diagram": "ASCII diagram"
}
SUMMARY: End with: "COUPLING: [N]/10 - [N] incoming, [M] outgoing deps - [key concern]"
""",
run_in_background=True,
max_turns=25
)

Incorrect — Listing imports without analysis:
auth.ts imports:
- utils.ts
- config.ts
- db.ts

Correct — Hotspot analysis with coupling score:
{
"coupling_score": 8,
"fan_in": 12,
"fan_out": 5,
"circular_deps": ["auth.ts → user.ts → auth.ts"],
"change_impact": [
"auth.ts change breaks 12 files",
"utils.ts and auth.ts always change together"
],
"hotspot_diagram": "
[12 files] --depend on--> [auth.ts]
|
depends on
v
[utils, config, db, user, session]
"
}

Spawn parallel exploration agents using Task tool for concurrent codebase analysis — HIGH
Exploration Agents (Task Tool Mode)
Launch 4 specialized explorers in ONE message with run_in_background: true:
# PARALLEL - All 4 in ONE message
Task(
subagent_type="Explore",
prompt="""Code Structure: Find all files, classes, functions related to: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [N] files, [M] classes - [key location, e.g., 'src/auth/']"
""",
run_in_background=True,
max_turns=25
)
Task(
subagent_type="Explore",
prompt="""Data Flow: Trace entry points, processing, storage for: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [entry] → [processing] → [storage] - [N] hop flow"
""",
run_in_background=True,
max_turns=25
)
Task(
subagent_type="backend-system-architect",
prompt="""Backend Patterns: Analyze architecture patterns, integrations, dependencies for: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [pattern name] - [N] integrations, [M] dependencies"
""",
run_in_background=True,
max_turns=25
)
Task(
subagent_type="frontend-ui-developer",
prompt="""Frontend Analysis: Find components, state management, routes for: $ARGUMENTS
Scope: ONLY read files directly relevant to the topic. Do NOT explore the entire codebase.
SUMMARY: End with: "RESULT: [N] components, [state lib] - [key route]"
""",
run_in_background=True,
max_turns=25
)

Explorer Roles
- Code Structure Explorer - Files, classes, functions
- Data Flow Explorer - Entry points, processing, storage
- Backend Architect - Patterns, integration, dependencies
- Frontend Developer - Components, state, routes
Incorrect — Sequential exploration:
Task(subagent_type="Explore", prompt="Find auth files")
# Wait...
Task(subagent_type="Explore", prompt="Trace auth flow")
# Wait...
Task(subagent_type="backend-system-architect", prompt="Analyze patterns")
# Slow, sequential

Correct — Parallel exploration in one message:
# All 4 in ONE message with run_in_background: true
Task(subagent_type="Explore", prompt="Code Structure: Find all files related to auth",
run_in_background=True, max_turns=25)
Task(subagent_type="Explore", prompt="Data Flow: Trace auth entry→storage",
run_in_background=True, max_turns=25)
Task(subagent_type="backend-system-architect", prompt="Backend Patterns: Analyze auth architecture",
run_in_background=True, max_turns=25)
Task(subagent_type="frontend-ui-developer", prompt="Frontend: Find auth components",
run_in_background=True, max_turns=25)
# Parallel execution

Add business context and findability analysis to technical codebase exploration — MEDIUM
Product Perspective
Add business context and findability suggestions. See findability-patterns.md for discoverability best practices.
Task(
subagent_type="product-strategist",
prompt="""PRODUCT PERSPECTIVE for: $ARGUMENTS
Analyze from a product/business viewpoint:
1. BUSINESS CONTEXT
- What user problem does this code solve?
- What feature/capability does it enable?
- Who are the users of this code?
2. FINDABILITY SUGGESTIONS
- Better naming for discoverability?
- Missing documentation entry points?
- Where should someone look first?
3. KNOWLEDGE GAPS
- What context is missing for new developers?
- What tribal knowledge exists?
- What should be documented?
4. SEARCH OPTIMIZATION
- Keywords someone might use to find this
- Alternative terms for the same concept
- Related concepts to cross-reference
Output:
{
"business_purpose": "description",
"primary_users": ["user type"],
"findability_issues": ["issue - suggestion"],
"recommended_entry_points": ["file - why start here"],
"search_keywords": ["keyword"],
"documentation_gaps": ["gap"]
}
SUMMARY: End with: "FINDABILITY: [N] issues - start at [recommended entry point]"
""",
run_in_background=True,
max_turns=25
)

Incorrect — Technical analysis without business context:
Found auth.ts, user.ts, session.ts
Uses JWT tokens, bcrypt hashing
Database: PostgreSQL users table

Correct — Product perspective with findability:
{
"business_purpose": "Secure user authentication and session management",
"primary_users": ["End users logging in", "Developers integrating auth"],
"findability_issues": [
"auth.ts - generic name, try auth/core.ts",
"Missing README in auth/ - devs don't know where to start"
],
"recommended_entry_points": [
"auth/README.md (missing - create this!)",
"auth/core.ts - main authentication flow"
],
"search_keywords": ["login", "authentication", "session", "JWT", "security"],
"documentation_gaps": [
"No auth flow diagram",
"Token refresh logic undocumented"
]
}

References (4)
Code Health Rubric
Standardized 0-10 scoring criteria for assessing code quality across five dimensions.
Scoring Scale
| Score | Rating | Description |
|---|---|---|
| 9-10 | Excellent | Production-ready, exemplary code |
| 7-8 | Good | Minor improvements possible |
| 5-6 | Adequate | Functional but needs attention |
| 3-4 | Poor | Significant issues, refactor recommended |
| 0-2 | Critical | Major problems, immediate action required |
1. Readability (0-10)
| Score | Criteria |
|---|---|
| 10 | Self-documenting, intuitive naming, perfect structure |
| 7-8 | Clear names, logical flow, minimal cognitive load |
| 5-6 | Understandable with effort, some unclear sections |
| 3-4 | Confusing logic, poor naming, requires context |
| 0-2 | Incomprehensible, magic numbers, no conventions |
2. Maintainability (0-10)
| Score | Criteria |
|---|---|
| 10 | SRP adherence, loose coupling, DRY, easy to modify |
| 7-8 | Good separation, minor duplication, clear boundaries |
| 5-6 | Some coupling, moderate duplication, changes ripple |
| 3-4 | High coupling, significant duplication, fragile |
| 0-2 | Spaghetti code, any change breaks multiple areas |
3. Testability (0-10)
| Score | Criteria |
|---|---|
| 10 | Pure functions, DI, 90%+ coverage, mocks easy |
| 7-8 | Most logic testable, some DI, 70%+ coverage |
| 5-6 | Testable with effort, some hidden dependencies |
| 3-4 | Hard to isolate, global state, 30% coverage |
| 0-2 | Untestable, tightly coupled, no test infrastructure |
4. Complexity (0-10, inverted: 10=simple)
| Score | Criteria |
|---|---|
| 10 | Cyclomatic <5, max 2 nesting, <20 line functions |
| 7-8 | Cyclomatic 5-10, 3 nesting, <40 line functions |
| 5-6 | Cyclomatic 10-15, 4 nesting, some long functions |
| 3-4 | Cyclomatic 15-25, deep nesting, 100+ line functions |
| 0-2 | Cyclomatic >25, 6+ nesting, god functions |
5. Documentation (0-10)
| Score | Criteria |
|---|---|
| 10 | Complete API docs, examples, architecture notes |
| 7-8 | Public API documented, inline comments where needed |
| 5-6 | Some docstrings, missing edge cases |
| 3-4 | Sparse comments, outdated documentation |
| 0-2 | No documentation, misleading comments |
Overall Score Calculation
overall = (readability + maintainability + testability + complexity + documentation) / 5

Score Interpretation:
- 8.0+: Ship it
- 6.0-7.9: Acceptable, plan improvements
- 4.0-5.9: Technical debt, prioritize refactoring
- <4.0: Stop and fix before proceeding
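The averaging and interpretation bands above can be computed directly. A minimal sketch (function names are illustrative, not part of the rubric); the sample scores reuse the structured health assessment example from the Code Health rule, which averages to 6.2:

```python
def overall_health(dimensions: dict) -> float:
    """Average the five 0-10 dimension scores into an overall score."""
    keys = ["readability", "maintainability", "testability",
            "complexity", "documentation"]
    return round(sum(dimensions[k] for k in keys) / len(keys), 1)

def interpret(score: float) -> str:
    """Map an overall score onto the interpretation bands above."""
    if score >= 8.0:
        return "Ship it"
    if score >= 6.0:
        return "Acceptable, plan improvements"
    if score >= 4.0:
        return "Technical debt, prioritize refactoring"
    return "Stop and fix before proceeding"

# Sample assessment from the Code Health rule:
scores = {"readability": 8, "maintainability": 5, "testability": 4,
          "complexity": 6, "documentation": 8}
```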
Dependency Analysis
Identify coupling hotspots and dependency patterns in codebases.
Fan-In / Fan-Out Metrics
| Metric | Definition | Implication |
|---|---|---|
| Fan-In | Files that import this module | High = many dependents, changes risky |
| Fan-Out | Modules this file imports | High = many dependencies, fragile |
| Instability | Fan-Out / (Fan-In + Fan-Out) | 0 = stable, 1 = unstable |
Ideal Patterns:
- Core utilities: High fan-in, low fan-out (stable)
- Feature modules: Low fan-in, moderate fan-out
- Entry points: Low fan-in, high fan-out
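These three metrics can be derived from a plain import graph. A minimal sketch (module names are hypothetical):

```python
from collections import defaultdict

def fan_metrics(imports: dict) -> dict:
    """Compute fan-in, fan-out, and instability per module.

    `imports` maps each module to the modules it imports.
    Instability = fan_out / (fan_in + fan_out); 0 = stable, 1 = unstable.
    """
    fan_in = defaultdict(int)
    for mod, deps in imports.items():
        for dep in deps:
            fan_in[dep] += 1
    metrics = {}
    for mod, deps in imports.items():
        fo, fi = len(deps), fan_in[mod]
        total = fi + fo
        metrics[mod] = {
            "fan_in": fi,
            "fan_out": fo,
            "instability": round(fo / total, 2) if total else 0.0,
        }
    return metrics

# Hypothetical graph: utils is a stable core, main is an entry point.
graph = {
    "main": ["auth", "utils"],
    "auth": ["utils", "db"],
    "db": ["utils"],
    "utils": [],
}
```

In this example `utils` shows the ideal core-utility profile (fan-in 3, fan-out 0, instability 0.0) while `main` shows the entry-point profile (instability 1.0).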
Hotspot Identification
High-Risk Indicators
| Pattern | Risk | Action |
|---|---|---|
| Fan-in > 10 | Blast radius large | Add interface/abstraction |
| Fan-out > 8 | Too many dependencies | Extract facades |
| Instability = 1, Fan-in > 5 | Unstable core | Stabilize or decouple |
Coupling Score Formula
coupling_score = min(10, (fan_in + fan_out) / 3)

- 0-3: Low coupling (healthy)
- 4-6: Moderate coupling (monitor)
- 7-10: High coupling (refactor)
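A direct transcription of the formula and bands above (the rounding behavior is an assumption — the formula does not specify whether to round):

```python
def coupling_score(fan_in: int, fan_out: int) -> int:
    """coupling_score = min(10, (fan_in + fan_out) / 3), rounded to an int."""
    return min(10, round((fan_in + fan_out) / 3))

def coupling_band(score: int) -> str:
    """Map a coupling score onto the bands above."""
    if score <= 3:
        return "low (healthy)"
    if score <= 6:
        return "moderate (monitor)"
    return "high (refactor)"
```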
Circular Dependency Detection
Signs of Circular Dependencies:
- Import errors at runtime
- Mysterious `None` values
- Files that always change together
- Cannot extract to separate package
Detection Approach:
A imports B
B imports C
C imports A <- CIRCULAR

Resolution Strategies:
- Extract shared interface
- Dependency inversion (depend on abstractions)
- Merge tightly coupled modules
- Event-driven decoupling
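The A → B → C → A pattern above can be detected with a standard DFS cycle search over the import graph. A sketch (module names hypothetical):

```python
def find_cycle(imports: dict):
    """Return one import cycle as a list of modules, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {m: WHITE for m in imports}

    def dfs(mod, path):
        color[mod] = GRAY
        path.append(mod)
        for dep in imports.get(mod, []):
            if color.get(dep, WHITE) == GRAY:      # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        path.pop()
        color[mod] = BLACK
        return None

    for mod in list(imports):
        if color[mod] == WHITE:
            cycle = dfs(mod, [])
            if cycle:
                return cycle
    return None
```

Running it on the detection example above (`A` imports `B`, `B` imports `C`, `C` imports `A`) returns the cycle `["A", "B", "C", "A"]`.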
Change Impact Analysis
Questions to Answer:
- If I modify this file, what breaks?
- Which files always change together?
- What is the blast radius of a refactor?
Measuring Impact:
- Direct Impact: Files importing the changed module
- Transitive Impact: Files importing those files
- Co-Change Frequency: Git history of files changed together
High Impact Indicators:
- >5 direct dependents
- >20 transitive dependents
- >80% co-change frequency with another file
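Direct versus transitive impact, as defined above, can be measured by walking the reverse dependency graph. A minimal sketch (file names hypothetical):

```python
from collections import defaultdict, deque

def impact(imports: dict, target: str) -> dict:
    """Direct impact = files importing target; transitive = their importers, recursively."""
    reverse = defaultdict(set)
    for mod, deps in imports.items():
        for dep in deps:
            reverse[dep].add(mod)
    direct = set(reverse[target])
    seen, queue = set(direct), deque(direct)
    while queue:                      # BFS outward through importers
        mod = queue.popleft()
        for importer in reverse[mod]:
            if importer not in seen:
                seen.add(importer)
                queue.append(importer)
    return {"direct": sorted(direct), "transitive": sorted(seen - direct)}

# Hypothetical: handlers and admin import auth; routes imports handlers.
deps = {
    "handlers.py": ["auth.py"],
    "admin.py": ["auth.py"],
    "routes.py": ["handlers.py"],
}
```

Here a change to `auth.py` directly impacts `handlers.py` and `admin.py`, and transitively impacts `routes.py`. Co-change frequency, by contrast, comes from version-control history rather than the import graph.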
Exploration Report Template
Use this template for Phase 8 report generation.
# Exploration Report: $ARGUMENTS
## Quick Answer
[1-2 sentence summary]
## File Locations
| File | Purpose | Health Score |
|------|---------|--------------|
| `path/to/file.py` | [description] | [N.N/10] |
## Code Health Summary
| Dimension | Score | Notes |
|-----------|-------|-------|
| Readability | [N/10] | [note] |
| Maintainability | [N/10] | [note] |
| Testability | [N/10] | [note] |
| Complexity | [N/10] | [note] |
| Documentation | [N/10] | [note] |
| **Overall** | **[N.N/10]** | |
## Architecture Overview
[ASCII diagram]
## Dependency Hotspot Map

[Incoming deps] → [TARGET] → [Outgoing deps]
- **Coupling Score:** [N/10]
- **Fan-in:** [N] files depend on this
- **Fan-out:** [M] dependencies
- **Circular Dependencies:** [list or "None"]
## Data Flow
1. [Entry] → 2. [Processing] → 3. [Storage]
## Findability & Entry Points
| Entry Point | Why Start Here |
|-------------|----------------|
| `path/to/file.py` | [reason] |
**Search Keywords:** [keyword1], [keyword2], [keyword3]
## Product Context
- **Business Purpose:** [what problem this solves]
- **Primary Users:** [who uses this]
- **Documentation Gaps:** [what's missing]
## How to Modify
1. [Step 1]
2. [Step 2]
## Recommendations
1. [Health improvement]
2. [Findability improvement]
3. [Documentation improvement]
Findability Patterns
Improve code discoverability for developers exploring the codebase.
Naming Conventions for Searchability
| Pattern | Example | Searchability |
|---|---|---|
| Domain prefix | auth_login(), auth_logout() | Grep "auth_" finds all |
| Feature suffix | UserService, UserRepository | Grep "User" finds related |
| Action verbs | create_user, delete_order | Grep "create_" finds patterns |
| Consistent pluralization | users/, orders/ | Predictable directory names |
Anti-Patterns:
- Abbreviations: `usr`, `mgr`, `svc` (hard to search)
- Generic names: `utils.py`, `helpers.js` (too broad)
- Inconsistent casing: mixing `getUserData` and `get_user_data`
Documentation Placement
| Location | Purpose | Findability |
|---|---|---|
| `README.md` in directory | Module overview | First thing developers see |
| Inline docstrings | Function behavior | IDE tooltips, grep |
| `docs/architecture/` | System design | High-level understanding |
| `CLAUDE.md` / `CONTRIBUTING.md` | Development guide | Onboarding entry |
Entry Point Strategy:
- Every directory should have a README or index
- Complex modules need architecture diagrams
- Public APIs need usage examples
- Workflows need sequence diagrams
Module Organization
Vertical Slice Architecture
features/
auth/
api.py # Entry point
service.py # Business logic
repository.py # Data access
models.py # Domain models
tests/ # Co-located tests

Benefits:
- Related code together
- Easy to find all auth-related files
- Clear boundaries
Horizontal Layer Architecture
api/
auth.py
users.py
services/
auth.py
users.py

Benefits:
- Technical cohesion
- Easier cross-cutting concerns
Improving Discoverability
Quick Wins
- Add index files: export the public API from `__init__.py` or `index.ts`
- Use consistent prefixes: `handle_`, `on_`, `create_`, `get_`
- Create README per directory: brief purpose + key files
- Tag with keywords: Add searchable comments for concepts
Search Optimization
# Keywords: authentication, login, JWT, OAuth, session
# See also: user_service.py, token_handler.py

Metadata in Files:
- Related files cross-reference
- Alternative terms for the concept
- Links to documentation