OrchestKit v7.43.0 — 104 skills, 36 agents, 173 hooks · Claude Code 2.1.105+

Audit Full

Full-codebase audit using 1M context window. Security, architecture, and dependency analysis in a single pass. Use when you need whole-project analysis.

Reference max

Auto-activated — this skill loads automatically when Claude detects matching context.

Full-Codebase Audit

Single-pass whole-project analysis leveraging Opus 4.6's extended context window. Loads entire codebases (~50K LOC) into context for cross-file vulnerability detection, architecture review, and dependency analysis.

Quick Start

/ork:audit-full                          # Full audit (all modes)
/ork:audit-full security                 # Security-focused audit
/ork:audit-full architecture             # Architecture review
/ork:audit-full dependencies             # Dependency audit

Opus 4.6: Uses complexity: max for extended thinking across entire codebases. 1M context (GA) enables cross-file reasoning that chunked approaches miss.

1M Context Required: If CLAUDE_CODE_DISABLE_1M_CONTEXT is set, audit-full cannot perform full-codebase analysis. Check: echo $CLAUDE_CODE_DISABLE_1M_CONTEXT — if non-empty, either unset it (unset CLAUDE_CODE_DISABLE_1M_CONTEXT) or use /ork:verify for chunked analysis instead.


STEP 0: Verify User Intent with AskUserQuestion

BEFORE creating tasks, clarify audit scope using the interactive dialog.

Load: Read("${CLAUDE_SKILL_DIR}/references/audit-scope-dialog.md") for the full AskUserQuestion dialog with mode options (Full/Security/Architecture/Dependencies) and scope options (Entire codebase/Specific directory/Changed files).


CRITICAL: Task Management is MANDATORY

# 1. Create main task IMMEDIATELY
TaskCreate(
  subject="Full-codebase audit",
  description="Single-pass audit using extended context",
  activeForm="Running full-codebase audit"
)

# 2. Create subtasks for each phase
TaskCreate(subject="Estimate token budget and plan loading", activeForm="Estimating token budget")  # id=2
TaskCreate(subject="Load codebase into context", activeForm="Loading codebase")                    # id=3
TaskCreate(subject="Run audit analysis", activeForm="Analyzing codebase")                          # id=4
TaskCreate(subject="Generate audit report", activeForm="Generating report")                        # id=5

# 3. Set dependencies for sequential phases
TaskUpdate(taskId="3", addBlockedBy=["2"])  # Loading needs budget estimate
TaskUpdate(taskId="4", addBlockedBy=["3"])  # Analysis needs codebase loaded
TaskUpdate(taskId="5", addBlockedBy=["4"])  # Report needs analysis done

# 4. Before starting each task, verify it's unblocked
task = TaskGet(taskId="2")  # Verify blockedBy is empty

# 5. Update status as you progress
TaskUpdate(taskId="2", status="in_progress")  # When starting
TaskUpdate(taskId="2", status="completed")    # When done — repeat for each subtask

STEP 1: Estimate Token Budget

Before loading files, estimate whether the codebase fits in context.

Load: Read("${CLAUDE_SKILL_DIR}/references/token-budget-planning.md") for estimation rules (tokens/line by file type), budget allocation tables, auto-exclusion list, and fallback dialog when codebase exceeds budget.

Run estimation: bash ${CLAUDE_SKILL_DIR}/scripts/estimate-tokens.sh /path/to/project


STEP 2: Load Codebase into Context

Load: Read("${CLAUDE_SKILL_DIR}/references/report-structure.md") for loading strategy, inclusion patterns by language (TS/JS, Python, Config), and batch reading patterns.


STEP 3: Audit Analysis

With codebase loaded, perform the selected audit mode(s).

Security Audit

Load: Read("${CLAUDE_SKILL_DIR}/references/security-audit-guide.md") for the full checklist.

Key cross-file analysis patterns:

  1. Data flow tracing: Track user input from entry point → processing → storage
  2. Auth boundary verification: Ensure all protected routes check auth
  3. Secret detection: Scan for hardcoded credentials, API keys, tokens
  4. Injection surfaces: SQL, command, template injection across file boundaries
  5. OWASP Top 10 mapping: Classify findings by OWASP category

Architecture Review

Load: Read("${CLAUDE_SKILL_DIR}/references/architecture-review-guide.md") for the full guide.

Key analysis patterns:

  1. Dependency direction: Verify imports flow inward (clean architecture)
  2. Circular dependencies: Detect import cycles across modules
  3. Layer violations: Business logic in controllers, DB in routes, etc.
  4. Pattern consistency: Same problem solved differently across codebase
  5. Coupling analysis: Count cross-module imports, identify tight coupling

Dependency Audit

Load: Read("${CLAUDE_SKILL_DIR}/references/dependency-audit-guide.md") for the full guide.

Key analysis patterns:

  1. Known CVEs: Check versions against known vulnerabilities
  2. License compliance: Identify copyleft licenses in proprietary code
  3. Version currency: Flag significantly outdated dependencies
  4. Transitive risk: Identify deep dependency chains
  5. Unused dependencies: Detect installed but never imported packages

Progressive Output (CC 2.1.76)

Output findings incrementally as each audit mode completes — don't hold them all back for the final report:

  1. Security findings first — show critical/high vulnerabilities immediately, don't wait for architecture review
  2. Architecture findings — show dependency direction violations, circular deps as they surface
  3. Dependency findings — show CVE matches, license compliance issues

For multi-mode audits (Full), each mode's findings appear as they complete. This lets users act on critical security findings while architecture analysis is still running.


STEP 4: Generate Report

Load the report template: Read("${CLAUDE_SKILL_DIR}/assets/audit-report-template.md").

Report structure and severity classification: Read("${CLAUDE_SKILL_DIR}/references/report-structure.md") for finding table format, severity breakdown (CRITICAL/HIGH/MEDIUM/LOW with timelines), and architecture diagram conventions.

Severity matrix: Read("${CLAUDE_SKILL_DIR}/assets/severity-matrix.md") for classification criteria.

Completion Checklist

Before finalizing the report, verify with Read("${CLAUDE_SKILL_DIR}/checklists/audit-completion.md").


When NOT to Use

| Situation | Use Instead |
|---|---|
| Small targeted check (1-5 files) | Direct Read + analysis |
| CI/CD automated scanning | security-scanning skill |
| Multi-agent graded verification | /ork:verify |
| Exploring unfamiliar codebase | /ork:explore |
| Codebase > 125K LOC (exceeds 1M) | /ork:verify (chunked approach) |

Related Skills

  • security-scanning — Automated scanner integration (npm audit, Semgrep, etc.)
  • ork:security-patterns — Security architecture patterns and OWASP vulnerability classification
  • ork:architecture-patterns — Architectural pattern reference
  • ork:quality-gates — Quality assessment criteria
  • ork:verify — Multi-agent verification (fallback for codebases exceeding 1M context)

References

Load on demand with Read("${CLAUDE_SKILL_DIR}/references/<file>"):

| File | Content |
|---|---|
| references/security-audit-guide.md | Cross-file vulnerability patterns |
| references/architecture-review-guide.md | Pattern and coupling analysis |
| references/dependency-audit-guide.md | CVE, license, currency checks |
| references/token-estimation.md | File type ratios and budget planning |
| assets/audit-report-template.md | Structured output format |
| assets/severity-matrix.md | Finding classification criteria |
| checklists/audit-completion.md | Pre-report verification |
| scripts/estimate-tokens.sh | Automated LOC to token estimation |

Rules (2)

Classify audit findings by severity with evidence from actual code locations — HIGH

Classify Findings by Severity with Evidence

Why

Without evidence-backed severity classification, findings are either all "CRITICAL" (causing alert fatigue) or uniformly "MEDIUM" (hiding real risks). Both patterns erode trust in audit reports.

Rule

Every finding must include:

  1. Severity level (CRITICAL / HIGH / MEDIUM / LOW)
  2. File path and line number
  3. Code snippet showing the vulnerability
  4. Exploitation scenario or impact statement
  5. OWASP/CWE classification where applicable

Incorrect — vague findings without evidence

## Findings

| # | Severity | Finding |
|---|----------|---------|
| 1 | HIGH | SQL injection possible |
| 2 | MEDIUM | Auth might be missing |
| 3 | HIGH | Dependencies outdated |

Problems:

  • No file paths — developer cannot locate the issue
  • No code evidence — finding cannot be verified
  • No exploitation scenario — severity is arbitrary
  • "might be missing" is not a finding, it is speculation

Correct — evidence-backed severity classification

## Findings

| # | Severity | Category | File(s) | Finding |
|---|----------|----------|---------|---------|
| 1 | CRITICAL | Injection (CWE-89) | src/api/users.ts:42 | SQL injection via string interpolation |

### Finding 1: SQL Injection (CRITICAL)

**Location:** `src/api/users.ts:42`
**OWASP:** A03:2021 Injection | **CWE:** CWE-89

**Vulnerable code:**
  ```typescript
  const query = `SELECT * FROM users WHERE id = ${req.params.id}`;
  await db.execute(query);
  ```

Exploitation: Attacker sends id=1; DROP TABLE users-- via GET /api/users/:id. No parameterization or input validation exists between the route handler (line 38) and the query execution (line 42).

Remediation:

const query = "SELECT * FROM users WHERE id = $1";
await db.execute(query, [req.params.id]);

## Severity Classification Criteria

| Severity | Criteria | Example |
|----------|----------|---------|
| CRITICAL | Exploitable without auth, data loss/breach | SQL injection, RCE, auth bypass |
| HIGH | Exploitable with auth, significant impact | IDOR, privilege escalation, SSRF |
| MEDIUM | Requires specific conditions to exploit | CSRF, info disclosure, weak crypto |
| LOW | Minimal impact, defense-in-depth | Missing headers, verbose errors |


### Declare audit scope upfront before loading files to avoid context window exhaustion — HIGH


# Declare Audit Scope Before Loading

## Why

The 1M context window is large but finite. Loading every file without a scope declaration means generated code, test fixtures, and vendor files consume tokens that should go to critical source files.

## Rule

Before loading any files, produce a scope declaration that includes:
1. Audit mode (security / architecture / dependency / full)
2. Directory inclusion list
3. File exclusion patterns
4. Estimated token budget vs available budget

## Incorrect — audit everything without scoping

```markdown
## Audit Plan
1. Load all files in the repository
2. Analyze everything
3. Generate report
```

```bash
# Loads everything including generated files
find . -name "*.ts" -o -name "*.js" | xargs cat
```

Problems:

  • Generated files (dist/, plugins/) consume 40%+ of context
  • Test fixtures and snapshots add noise
  • No priority ordering means entry points may be truncated

## Correct — declare scope with budget allocation

```markdown
## Audit Scope Declaration

**Mode:** Security audit
**Target directories:** src/api/, src/auth/, src/middleware/
**Exclusions:** dist/, node_modules/, *.test.ts, *.spec.ts, __snapshots__/
**Token budget:** ~950K available (1M GA), estimated usage: ~85K (9%)
**Priority order:**
  1. Entry points (src/index.ts, src/app.ts)
  2. Auth boundary (src/auth/*, src/middleware/auth*)
  3. Data access layer (src/db/*, src/repositories/*)
  4. API routes (src/api/*)
```

```bash
# Scoped file discovery with exclusions
find src/api src/auth src/middleware \
  -name "*.ts" \
  ! -name "*.test.ts" \
  ! -name "*.spec.ts" \
  ! -path "*/__snapshots__/*"
```

## Checklist

| Check | Required |
|---|---|
| Audit mode declared | Yes |
| Target directories listed | Yes |
| Exclusion patterns defined | Yes |
| Token budget estimated | Yes |
| Priority loading order set | Yes |

References (7)

Architecture Review Guide


Pattern consistency, coupling analysis, and structural health assessment.

Dependency Direction Analysis

Clean Architecture Layers

┌─────────────────────────────┐
│  Presentation (routes, UI)  │  ← Outermost
├─────────────────────────────┤
│  Application (use cases)    │
├─────────────────────────────┤
│  Domain (entities, rules)   │
├─────────────────────────────┤
│  Infrastructure (DB, APIs)  │  ← Outermost
└─────────────────────────────┘

Rule: Dependencies point INWARD only.
Violation: Domain importing from Infrastructure.

Detection Method

  1. Map each file to a layer based on directory structure
  2. Parse imports/requires in each file
  3. Flag imports that point outward (wrong direction)
// VIOLATION EXAMPLE:
// src/domain/user.ts imports from src/infrastructure/db.ts
import { dbPool } from '../infrastructure/db'  // Wrong direction!

// CORRECT:
// src/domain/user.ts defines the interface;
// src/infrastructure/db.ts implements it.
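The detection steps above can be sketched as a single grep over the domain layer. This is a minimal, hedged sketch: the fixture tree is throwaway, and the `src/domain` / `src/infrastructure` paths are assumptions from the layout above — point the grep at your real source tree instead.

```shell
# Build a tiny fixture tree with one wrong-direction import.
demo=$(mktemp -d)
mkdir -p "$demo/src/domain" "$demo/src/infrastructure"
printf "import { dbPool } from '../infrastructure/db'\n" > "$demo/src/domain/user.ts"

# Any hit here is a dependency-direction violation (domain -> infrastructure).
violations=$(grep -rn "from '\.\./infrastructure" "$demo/src/domain")
echo "$violations"
```

Each reported line is a candidate finding; confirm manually that the import is not an interface re-export before flagging it.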

Circular Dependency Detection

What to Look For

// File A imports File B, File B imports File A
// src/auth/service.ts
import { UserRepo } from '../users/repo'

// src/users/repo.ts
import { AuthService } from '../auth/service'  // Circular!
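A minimal sketch of detecting the two-file cycle above. This only catches direct A↔B cycles; a real audit should walk the whole import graph with a dedicated tool (e.g. `npx madge --circular src/`). The fixture paths are illustrative.

```shell
# Recreate the auth <-> users cycle from the example above.
demo=$(mktemp -d)
mkdir -p "$demo/auth" "$demo/users"
printf "import { UserRepo } from '../users/repo'\n"      > "$demo/auth/service.ts"
printf "import { AuthService } from '../auth/service'\n" > "$demo/users/repo.ts"

# Cycle exists if each module imports from the other.
result="ok"
if grep -q "'\.\./users/" "$demo/auth/service.ts" && \
   grep -q "'\.\./auth/" "$demo/users/repo.ts"; then
  result="circular: auth <-> users"
fi
echo "$result"
```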

Resolution Patterns

| Pattern | When to Use |
|---|---|
| Extract interface | Both modules depend on abstraction |
| Merge modules | Modules are conceptually one unit |
| Event-based | Decouple with pub/sub or event emitter |
| Dependency injection | Inject at runtime, not import time |

Pattern Consistency Check

Look for the same problem solved differently across the codebase:

| Area | Inconsistency Example |
|---|---|
| Error handling | Some files throw, others return Result, others use callbacks |
| Validation | Zod in some files, Joi in others, manual checks elsewhere |
| Data access | Raw SQL in some, ORM in others, mixed in same file |
| Logging | console.log, winston, pino, custom logger all present |
| Config | Env vars, config files, hardcoded, mixed approaches |
| HTTP clients | fetch, axios, got, node-fetch all imported |

Scoring

| Consistency | Score |
|---|---|
| Single pattern everywhere | 10/10 |
| Primary + 1 legacy pattern | 7/10 |
| 2-3 competing patterns | 4/10 |
| No discernible pattern | 1/10 |

Coupling Analysis

Metrics to Calculate

| Metric | Formula | Healthy Range |
|---|---|---|
| Afferent coupling (Ca) | Modules that depend ON this module | < 10 |
| Efferent coupling (Ce) | Modules this module depends ON | < 8 |
| Instability (I) | Ce / (Ca + Ce) | Varies by layer |
| Abstractness (A) | Interfaces / Total types | > 0.3 for core |
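Efferent coupling can be approximated with a grep over relative imports. A hedged sketch, assuming sibling modules are imported via `'../<module>'` paths; the fixture file is illustrative, not a real project.

```shell
# One module (src/auth) importing from two sibling modules.
demo=$(mktemp -d)
mkdir -p "$demo/src/auth"
cat > "$demo/src/auth/service.ts" <<'EOF'
import { UserRepo } from '../users/repo'
import { Mailer } from '../notifications/mailer'
import { UserDto } from '../users/dto'
EOF

# Ce = number of DISTINCT '../<module>' targets (users, notifications => 2).
ce=$(grep -rhoE "from '\.\./[a-z]+" "$demo/src/auth" | sort -u | wc -l)
echo "Ce for src/auth = $ce"
```

With Ca counted the same way from the importing side, instability follows as I = Ce / (Ca + Ce).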

Module Boundary Health

# Count imports between directories:
src/auth/  → src/users/    :  5 imports (acceptable)
src/auth/  → src/payments/ : 12 imports (high coupling!)
src/utils/ → src/auth/     :  0 imports (good, utils is generic)

Red Flags

  • Module with > 15 external dependents (God module)
  • Utility file with > 500 lines (needs splitting)
  • Circular import chains > 2 files deep
  • Config/env imported in > 20 files (use DI instead)

Layering Violations

| Violation | Example | Fix |
|---|---|---|
| DB in route handler | `router.get('/', async (req, res) => { db.query(...) })` | Extract to service layer |
| Business logic in middleware | Auth middleware doing role-based access with complex rules | Move to use-case layer |
| HTTP in domain | Domain entity calling external API | Inject via port/adapter |
| UI logic in API | API returning HTML-formatted strings | Return data, format in frontend |

Architecture Diagram Output

Generate ASCII diagram showing module dependencies:

┌──────────┐     ┌──────────┐     ┌──────────┐
│  routes  │────▶│ services │────▶│  repos   │
└──────────┘     └──────────┘     └──────────┘
      │                │                │
      ▼                ▼                ▼
┌──────────┐     ┌──────────┐     ┌──────────┐
│middleware│     │  domain  │     │    db    │
└──────────┘     └──────────┘     └──────────┘

Legend: ──▶ = imports from
Violations marked with ✗

Audit Scope Dialog


STEP 0: Verify User Intent with AskUserQuestion

BEFORE creating tasks, clarify audit scope:

AskUserQuestion(
  questions=[
    {
      "question": "What type of audit do you want to run?",
      "header": "Audit mode",
      "options": [
        {"label": "Full audit (Recommended)", "description": "Security + architecture + dependencies in one pass", "markdown": "```\nFull Audit (1M context)\n───────────────────────\n  Load entire codebase ──▶\n  ┌────────────────────────┐\n  │ Security    OWASP Top10│\n  │ Architecture  patterns │\n  │ Dependencies  CVEs     │\n  │ Cross-file   data flow │\n  └────────────────────────┘\n  Single pass: Opus 4.6 sees\n  ALL files simultaneously\n  Output: Prioritized findings\n```"},
        {"label": "Security audit", "description": "Cross-file vulnerability analysis, data flow tracing, OWASP mapping", "markdown": "```\nSecurity Audit\n──────────────\n  ┌──────────────────────┐\n  │ OWASP mapping        │\n  │ Data flow tracing    │\n  │   input ──▶ DB ──▶ output\n  │ Cross-file vulns     │\n  │ Auth/AuthZ review    │\n  │ Secret detection     │\n  └──────────────────────┘\n  Finds vulns that chunked\n  analysis misses\n```"},
        {"label": "Architecture review", "description": "Pattern consistency, coupling analysis, dependency violations", "markdown": "```\nArchitecture Review\n───────────────────\n  ┌──────────────────────┐\n  │ Pattern consistency  │\n  │ Coupling metrics     │\n  │   A ←→ B  (tight)   │\n  │   C ──▶ D  (clean)  │\n  │ Dependency violations│\n  │ Layer enforcement    │\n  └──────────────────────┘\n  Cross-file analysis of\n  architectural integrity\n```"},
        {"label": "Dependency audit", "description": "License compliance, CVE checking, version currency", "markdown": "```\nDependency Audit\n────────────────\n  ┌──────────────────────┐\n  │ CVE scan       N vuls│\n  │ License check  ✓/✗   │\n  │ Version drift  N old │\n  │ Unused deps    N     │\n  │ Transitive risk      │\n  └──────────────────────┘\n  npm audit + pip-audit +\n  license compatibility\n```"}
      ],
      "multiSelect": true
    },
    {
      "question": "What should be audited?",
      "header": "Scope",
      "options": [
        {"label": "Entire codebase", "description": "Load all source files into context", "markdown": "```\nEntire Codebase\n───────────────\n  Load ALL source files\n  into 1M context window\n\n  Best for: first audit,\n  full security review,\n  architecture assessment\n  ⚠ Requires Tier 4+ API\n```"},
        {"label": "Specific directory", "description": "Focus on a subdirectory (e.g., src/api/)", "markdown": "```\nSpecific Directory\n──────────────────\n  Load one subtree:\n  src/api/ or src/auth/\n\n  Best for: targeted review,\n  post-change validation,\n  smaller context budget\n```"},
        {"label": "Changed files only", "description": "Audit only files changed vs main branch", "markdown": "```\nChanged Files Only\n──────────────────\n  git diff main...HEAD\n  Load only modified files\n\n  Best for: pre-merge check,\n  PR-scoped audit,\n  incremental review\n```"}
      ],
      "multiSelect": false
    }
  ]
)

Based on answers, adjust workflow:

  • Full audit: All 3 domains, maximum context usage
  • Security only: Focus token budget on source + config files
  • Architecture only: Focus on module boundaries, imports, interfaces
  • Dependency only: Focus on lock files, manifests, import maps
  • Changed files only: Use git diff --name-only main...HEAD to scope

Dependency Audit Guide


License compliance, CVE checking, and version currency analysis.

CVE Checking

Automated Scanners

# JavaScript/TypeScript
npm audit --json
npx better-npm-audit audit

# Python
pip-audit --format=json
safety check --json

# Go
govulncheck ./...

# Rust
cargo audit

Manual CVE Check

For dependencies without scanner coverage:

  1. Check version in package.json / pyproject.toml / go.mod
  2. Search NVD or OSV for package name
  3. Compare installed version against affected version ranges
  4. Classify by CVSS score
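Step 3 can be sketched with GNU `sort -V`, which orders dotted version strings numerically. The version numbers here are hypothetical stand-ins for an advisory's "first fixed" version, and real audits should prefer scanner output (npm audit, OSV) over manual comparison.

```shell
# Is the installed version older than the first patched version?
installed="4.17.1"    # hypothetical installed version
patched="4.17.21"     # hypothetical first fixed version from an advisory

lowest=$(printf '%s\n' "$installed" "$patched" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$patched" ]; then
  verdict="VULNERABLE: $installed < first fixed $patched"
else
  verdict="OK"
fi
echo "$verdict"
```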

| CVSS Score | Severity | Action |
|---|---|---|
| 9.0-10.0 | CRITICAL | Update immediately |
| 7.0-8.9 | HIGH | Update within 1 week |
| 4.0-6.9 | MEDIUM | Update within 1 month |
| 0.1-3.9 | LOW | Track in backlog |

License Compliance

License Risk Tiers

| Tier | Licenses | Risk in Proprietary Code |
|---|---|---|
| Permissive | MIT, BSD-2, BSD-3, ISC, Apache-2.0 | Safe |
| Weak copyleft | LGPL-2.1, LGPL-3.0, MPL-2.0 | Safe if dynamically linked |
| Strong copyleft | GPL-2.0, GPL-3.0, AGPL-3.0 | Requires source disclosure |
| Unknown | UNLICENSED, custom | Review manually |

Detection Method

# JavaScript
npx license-checker --json --production

# Python
pip-licenses --format=json --with-urls

# Check for problematic licenses
npx license-checker --failOn "GPL-2.0;GPL-3.0;AGPL-3.0"

What to Flag

  • Any GPL/AGPL dependency in proprietary codebase → CRITICAL
  • UNLICENSED dependencies → HIGH (legal risk)
  • Dependencies with no license file → MEDIUM
  • License changed between versions → LOW (track)

Version Currency

Currency Classification

| Status | Definition | Example |
|---|---|---|
| Current | Within 1 minor version of latest | react 19.1 when 19.2 is latest |
| Stale | 1+ major version behind | react 18.x when 19.x is latest |
| Outdated | 2+ major versions behind | react 17.x when 19.x is latest |
| EOL | No longer maintained | moment.js, request |

High-Risk Outdated Patterns

| Pattern | Risk |
|---|---|
| Framework 2+ majors behind | Missing security patches |
| Auth library outdated | Known vulnerabilities |
| TLS/crypto library outdated | Weak algorithms |
| ORM/DB driver outdated | SQL injection patches missing |

Transitive Dependency Risk

Deep Chains

your-app
  └── package-a@1.0.0
       └── package-b@2.0.0
            └── package-c@3.0.0  ← vulnerability here

Risk: You don't control package-c, and updating package-a may not update it.

Detection

# Show dependency tree
npm ls --all --json | jq '.dependencies'

# Find deep chains (>4 levels)
npm ls --all 2>/dev/null | grep -E "^.{16,}" | head -20

Mitigation

| Strategy | When |
|---|---|
| overrides (npm) / resolutions (yarn) | Force specific version |
| Replace parent package | If parent is unmaintained |
| Vendor and patch | Last resort for critical fixes |

Unused Dependencies

Detection

# JavaScript
npx depcheck

# Python
pip-extra-reqs --ignore-module=tests .

What to Flag

  • Installed but never imported → MEDIUM (bloat, attack surface)
  • Dev dependency in production deps → LOW (no runtime risk)
  • Multiple packages for same purpose → LOW (e.g., both lodash and underscore)

Report Structure

Audit Report Structure

Report Format

# Audit Report: {project-name}
**Date:** {date} | **Mode:** {mode} | **Files loaded:** {count} | **LOC:** {loc}

## Executive Summary
{1-3 sentences: overall health, critical findings count}

## Findings

| # | Severity | Category | File(s) | Finding | Remediation |
|---|----------|----------|---------|---------|-------------|
| 1 | CRITICAL | Security | src/auth.ts:42 | ... | ... |

## Severity Breakdown
- CRITICAL: {n} (must fix before deploy)
- HIGH: {n} (fix within sprint)
- MEDIUM: {n} (fix within quarter)
- LOW: {n} (track and address)

## Architecture Diagram
{ASCII diagram of module dependencies}

## Recommendations
{Prioritized action items}

Severity Classification

| Level | Criteria | Timeline |
|---|---|---|
| CRITICAL | Exploitable vulnerability, data loss risk, auth bypass | Must fix before deploy |
| HIGH | Security weakness, major arch violation, EOL dependency | Fix within sprint |
| MEDIUM | Code smell, minor arch inconsistency, stale dependency | Fix within quarter |
| LOW | Style issue, minor improvement, documentation gap | Track and address |

Codebase Loading Strategy

  1. Glob all source files matching inclusion patterns
  2. Sort by priority: entry points -> core modules -> utilities -> config
  3. Read files in parallel using multiple Read tool calls per message
  4. Track loaded tokens to stay within budget

Inclusion Patterns (by language)

# TypeScript/JavaScript
**/*.ts **/*.tsx **/*.js **/*.jsx
**/package.json **/tsconfig.json

# Python
**/*.py
**/pyproject.toml **/setup.cfg **/requirements*.txt

# Config
**/.env.example **/docker-compose*.yml **/Dockerfile
**/*.yaml **/*.yml (non-lock)

Reading Pattern

Read files in batches of 10-15 per message for efficiency:

# Batch 1: Entry points and config
Read("src/index.ts")
Read("src/app.ts")
Read("package.json")
Read("tsconfig.json")
# ... up to 15 files

# Batch 2: Core modules
Read("src/api/routes.ts")
Read("src/db/connection.ts")
# ... next batch

Security Audit Guide


Cross-file vulnerability analysis patterns for whole-codebase audits.

Data Flow Tracing

Trace user input from entry to storage across file boundaries:

Entry Point → Validation → Processing → Storage
(route.ts)    (middleware)   (service.ts)  (repo.ts)

What to Check at Each Stage

| Stage | Check | Severity if Missing |
|---|---|---|
| Entry | Input validation, type coercion | HIGH |
| Validation | Schema validation, sanitization | CRITICAL |
| Processing | Business logic auth checks | HIGH |
| Storage | Parameterized queries, encoding | CRITICAL |

Cross-File Vulnerability Patterns

1. Auth Bypass via Missing Middleware

// PATTERN: Route defined without auth middleware
router.get('/admin/users', getUsersHandler)  // No authMiddleware!

// Compare against protected routes:
router.get('/admin/settings', authMiddleware, getSettingsHandler)

Detection: Glob all route files, check each handler has auth middleware.

2. SQL Injection via String Interpolation

// PATTERN: Variable in SQL string (any file)
query(`SELECT * FROM users WHERE id = '${userId}'`)

// Safe pattern:
query('SELECT * FROM users WHERE id = $1', [userId])

Detection: Grep for template literals containing SQL keywords.
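The detection step can be sketched as one grep: a backtick template literal that contains a SQL keyword and an interpolation on the same line. The regex and fixture file are illustrative assumptions, and the pattern will miss multi-line queries.

```shell
# Fixture: one vulnerable template literal, one safe parameterized call.
demo=$(mktemp -d)
cat > "$demo/repo.ts" <<'EOF'
const q = `SELECT * FROM users WHERE id = '${userId}'`
const safe = query('SELECT * FROM users WHERE id = $1', [userId])
EOF

# Backtick, SQL keyword, then a ${...} interpolation before the closing tick.
hits=$(grep -rnE '`(SELECT|INSERT|UPDATE|DELETE)[^`]*\$\{' "$demo")
echo "$hits"
```

Only the template-literal line is reported; the parameterized call does not match.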

3. Command Injection via Shell Exec

// PATTERN: User input in exec/spawn
exec(`git log --author="${username}"`)

// Safe pattern:
execFile('git', ['log', `--author=${username}`])

Detection: Grep for exec(, execSync(, spawn( with template literals.

4. Secret Leakage

// PATTERN: Hardcoded secrets
const API_KEY = 'sk-live-abc123...'
const password = 'admin123'

// PATTERN: Secrets in error messages
throw new Error(`Auth failed for ${password}`)

// PATTERN: Secrets in logs
console.log(`Connecting with key: ${apiKey}`)

Detection: Grep for common secret patterns (sk-, ghp_, Bearer , password assignments).
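A hedged sketch of that grep, combining the prefix patterns listed above with a password-assignment check. The exact prefixes and the fixture file are illustrative; tune the pattern list to your providers and expect false positives on test data.

```shell
# Fixture: two secret-looking lines and one benign line.
demo=$(mktemp -d)
cat > "$demo/config.ts" <<'EOF'
const API_KEY = 'sk-live-abc123'
const password = 'admin123'
const name = 'not a secret'
EOF

# sk-live-/sk-test- keys, GitHub ghp_ tokens, or password = '...' assignments.
hits=$(grep -rnE "sk-(live|test)-[A-Za-z0-9]+|ghp_[A-Za-z0-9]+|password[[:space:]]*=[[:space:]]*['\"]" "$demo")
echo "$hits"
```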

5. SSRF via Unvalidated URLs

// PATTERN: User-controlled URL in fetch/axios
const response = await fetch(req.body.url)

// Safe pattern:
const url = new URL(req.body.url)
if (!ALLOWED_HOSTS.includes(url.hostname)) throw new Error('Blocked')

6. Path Traversal

// PATTERN: User input in file path
const filePath = path.join(uploadDir, req.params.filename)
// filename could be '../../etc/passwd'

// Safe pattern (note path.sep: a bare prefix check would let
// '/uploads-evil' pass a check against '/uploads'):
const resolved = path.resolve(uploadDir, req.params.filename)
if (!resolved.startsWith(uploadDir + path.sep)) throw new Error('Blocked')

OWASP Top 10 Mapping

| OWASP | What to Look For |
|---|---|
| A01 Broken Access Control | Missing auth middleware, IDOR, privilege escalation |
| A02 Cryptographic Failures | Weak hashing, HTTP for sensitive data, hardcoded keys |
| A03 Injection | SQL, command, template injection across boundaries |
| A04 Insecure Design | Missing rate limiting, no abuse prevention |
| A05 Security Misconfiguration | Debug mode in prod, default credentials, CORS * |
| A06 Vulnerable Components | Outdated deps with known CVEs |
| A07 Auth Failures | Weak passwords, no MFA, session fixation |
| A08 Data Integrity | Unsigned updates, CI/CD without verification |
| A09 Logging Failures | Missing audit logs, secrets in logs |
| A10 SSRF | Unvalidated URLs in server-side requests |

Severity Classification

| Severity | Criteria |
|---|---|
| CRITICAL | Exploitable without authentication, data breach risk |
| HIGH | Exploitable with low-privilege access, system compromise |
| MEDIUM | Requires specific conditions, limited impact |
| LOW | Informational, defense-in-depth improvement |

Token Budget Planning


Run Token Estimation

# Use the estimation script
bash ${CLAUDE_SKILL_DIR}/scripts/estimate-tokens.sh /path/to/project

Manual Estimation Rules

| File Type | Tokens per Line (approx) |
|---|---|
| TypeScript/JavaScript | ~8 tokens/line |
| Python | ~7 tokens/line |
| JSON/YAML config | ~5 tokens/line |
| Markdown docs | ~6 tokens/line |
| CSS/SCSS | ~6 tokens/line |

Budget Allocation

| Context Size | Available for Code | Fits LOC (approx) |
|---|---|---|
| 200K | ~150K tokens | ~20K LOC |
| 1M (standard) | ~950K tokens | ~125K LOC |

Auto-Exclusion List

Always exclude from loading:

  • node_modules/, vendor/, .venv/, __pycache__/
  • dist/, build/, .next/, out/
  • *.min.js, *.map, *.lock (read lock files separately for deps audit)
  • Binary files, images, fonts
  • Test fixtures and snapshots (unless auditing tests)
  • Generated files (protobuf, graphql codegen)

If Codebase Exceeds Budget

  1. Priority loading: Entry points first, then imported modules
  2. Directory scoping: Ask user to narrow to specific directories
  3. Fallback: Recommend /ork:verify for multi-agent approach (only needed for codebases > 125K LOC)
# Fallback suggestion
AskUserQuestion(
  questions=[{
    "question": "Codebase exceeds context window. How to proceed?",
    "header": "Too large",
    "options": [
      {"label": "Narrow scope", "description": "Audit specific directories only"},
      {"label": "Use /ork:verify instead", "description": "Chunked multi-agent approach (works with any context size)"},
      {"label": "Priority loading", "description": "Load entry points + critical paths only"}
    ],
    "multiSelect": false
  }]
)

Token Estimation

Token Estimation Guide

Planning context budget for whole-codebase loading.

Token Ratios by File Type

| File Type | Tokens/Line | Tokens/KB | Notes |
|---|---|---|---|
| TypeScript/JavaScript | ~8 | ~320 | Variable names inflate count |
| Python | ~7 | ~280 | Indentation is efficient |
| Go | ~7 | ~280 | Verbose but predictable |
| JSON | ~5 | ~200 | High repetition, low entropy |
| YAML | ~5 | ~200 | Similar to JSON |
| Markdown | ~6 | ~240 | Prose-heavy content |
| CSS/SCSS | ~6 | ~240 | Property-value pairs |
| SQL | ~6 | ~240 | Keyword-heavy |
| HTML/JSX | ~9 | ~360 | Attribute-heavy markup |
| Protobuf/GraphQL schema | ~5 | ~200 | Declarative, repetitive |

Quick Estimation Formula

Total tokens ≈ Total LOC × 7.5 (average)

For more precision:

Total tokens ≈ (TS lines × 8) + (Py lines × 7) + (Config lines × 5) + (Other × 7)
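The weighted formula above can be sketched on a throwaway tree; swap in your real project path and extensions. The line counts are synthetic fixtures, and the per-line ratios are the approximations from the table above, not measured values.

```shell
# Fixture: 1000 TypeScript lines and 500 Python lines.
demo=$(mktemp -d)
seq 1 1000 | sed 's/.*/const x = 1;/' > "$demo/app.ts"
seq 1 500  | sed 's/.*/x = 1/'        > "$demo/util.py"

ts_loc=$(cat "$demo"/*.ts | wc -l)
py_loc=$(cat "$demo"/*.py | wc -l)

# (TS lines x 8) + (Py lines x 7) = 8000 + 3500 = 11500
estimate=$(( ts_loc * 8 + py_loc * 7 ))
echo "estimated tokens: $estimate"
```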

Context Budget Planning

| Context Size | Total | Reserved for Prompt | Available for Code | Max LOC |
|---|---|---|---|---|
| 1M (standard) | 1,000,000 | ~50,000 | ~950,000 | ~125,000 |
| 200K (legacy) | 200,000 | ~50,000 | ~150,000 | ~20,000 |

Reserved for prompt includes: system prompt, skill content, analysis instructions, and output space.

Exclusion Priority

When codebase exceeds budget, exclude in this order:

  1. Always exclude: node_modules/, vendor/, .venv/, dist/, build/, .next/
  2. Exclude first: Test fixtures, snapshots, migration files, generated code
  3. Exclude second: Test files (unless auditing test quality)
  4. Exclude third: Documentation, README files
  5. Keep last: Source files, config, entry points

Loading Priority

When partially loading, prioritize in this order:

  1. Entry points: index.ts, main.py, app.ts, server.ts
  2. Route definitions: API routes, page routes
  3. Middleware/interceptors: Auth, validation, error handling
  4. Business logic: Services, use cases, domain models
  5. Data access: Repositories, ORM models, migrations
  6. Config: Environment config, feature flags, secrets management
  7. Utilities: Shared helpers, common functions

Real-World Sizing Examples

| Project Type | Typical LOC | Estimated Tokens | Fits in |
|---|---|---|---|
| Microservice | 5-15K | 40-120K | 1M (single-pass) |
| Small app | 15-30K | 120-240K | 1M (single-pass) |
| Medium app | 30-60K | 240-480K | 1M (single-pass) |
| Large app | 60-125K | 480K-950K | 1M (fits with scoping) |
| Large monolith | 125K+ | 950K+ | Directory-scoped or /ork:verify |

Checklists (1)

Audit Completion

Audit Completion Checklist

Verify before finalizing the audit report.

Pre-Report Verification

Coverage

  • All source files in scope were loaded (check file count vs glob count)
  • Entry points identified and traced
  • Configuration files reviewed (env, docker, CI)
  • Lock files checked (package-lock.json, poetry.lock, go.sum)

Security (if applicable)

  • All public endpoints checked for auth middleware
  • Data flow traced from input → storage for at least 3 critical paths
  • Secret detection scan completed (grep for API keys, passwords, tokens)
  • OWASP Top 10 categories all considered (mark N/A if not applicable)
  • Third-party integrations checked for SSRF risk
  • File upload/download paths checked for traversal

Architecture (if applicable)

  • Dependency direction verified (imports flow inward)
  • Circular dependencies checked
  • Pattern consistency evaluated (error handling, validation, data access)
  • Module coupling analyzed (cross-directory import counts)
  • Layer violations identified
  • ASCII architecture diagram generated

Dependencies (if applicable)

  • npm audit / pip-audit / equivalent run
  • License compliance checked (no GPL in proprietary)
  • Outdated packages identified with severity
  • Unused dependencies flagged
  • Transitive dependency risks assessed

Report Quality

  • Every finding has specific file:line references
  • Every finding has a remediation suggestion with code
  • Severity classifications match the severity matrix
  • No duplicate findings (same root cause reported once)
  • False positives verified and removed
  • Recommendations prioritized by impact and effort
  • Executive summary accurately reflects findings
  • Health score calculated correctly

Completeness

  • All selected audit modes completed
  • Context utilization reported (tokens used / available)
  • Files list in appendix matches loaded files
  • Report follows the template structure