Audit Full
Full-codebase audit using 1M context window. Security, architecture, and dependency analysis in a single pass. Use when you need whole-project analysis.
Auto-activated — this skill loads automatically when Claude detects matching context.
Full-Codebase Audit
Single-pass whole-project analysis leveraging Opus 4.6's extended context window. Loads entire codebases (~50K LOC) into context for cross-file vulnerability detection, architecture review, and dependency analysis.
Quick Start
/ork:audit-full # Full audit (all modes)
/ork:audit-full security # Security-focused audit
/ork:audit-full architecture # Architecture review
/ork:audit-full dependencies # Dependency audit

Opus 4.6: Uses `complexity: max` for extended thinking across entire codebases. 1M context (GA) enables cross-file reasoning that chunked approaches miss.
1M Context Required: If `CLAUDE_CODE_DISABLE_1M_CONTEXT` is set, audit-full cannot perform full-codebase analysis. Check with `echo $CLAUDE_CODE_DISABLE_1M_CONTEXT` — if the value is non-empty, either unset it (`unset CLAUDE_CODE_DISABLE_1M_CONTEXT`) or use `/ork:verify` for chunked analysis instead.
STEP 0: Verify User Intent with AskUserQuestion
BEFORE creating tasks, clarify audit scope using the interactive dialog.
Load: Read("${CLAUDE_SKILL_DIR}/references/audit-scope-dialog.md") for the full AskUserQuestion dialog with mode options (Full/Security/Architecture/Dependencies) and scope options (Entire codebase/Specific directory/Changed files).
CRITICAL: Task Management is MANDATORY
# 1. Create main task IMMEDIATELY
TaskCreate(
subject="Full-codebase audit",
description="Single-pass audit using extended context",
activeForm="Running full-codebase audit"
)
# 2. Create subtasks for each phase
TaskCreate(subject="Estimate token budget and plan loading", activeForm="Estimating token budget") # id=2
TaskCreate(subject="Load codebase into context", activeForm="Loading codebase") # id=3
TaskCreate(subject="Run audit analysis", activeForm="Analyzing codebase") # id=4
TaskCreate(subject="Generate audit report", activeForm="Generating report") # id=5
# 3. Set dependencies for sequential phases
TaskUpdate(taskId="3", addBlockedBy=["2"]) # Loading needs budget estimate
TaskUpdate(taskId="4", addBlockedBy=["3"]) # Analysis needs codebase loaded
TaskUpdate(taskId="5", addBlockedBy=["4"]) # Report needs analysis done
# 4. Before starting each task, verify it's unblocked
task = TaskGet(taskId="2") # Verify blockedBy is empty
# 5. Update status as you progress
TaskUpdate(taskId="2", status="in_progress") # When starting
TaskUpdate(taskId="2", status="completed") # When done — repeat for each subtask

STEP 1: Estimate Token Budget
Before loading files, estimate whether the codebase fits in context.
Load: Read("${CLAUDE_SKILL_DIR}/references/token-budget-planning.md") for estimation rules (tokens/line by file type), budget allocation tables, the auto-exclusion list, and the fallback dialog for when the codebase exceeds budget.
Run estimation: bash ${CLAUDE_SKILL_DIR}/scripts/estimate-tokens.sh /path/to/project
STEP 2: Load Codebase into Context
Load: Read("${CLAUDE_SKILL_DIR}/references/report-structure.md") for the loading strategy, inclusion patterns by language (TS/JS, Python, Config), and batch reading patterns.
STEP 3: Audit Analysis
With codebase loaded, perform the selected audit mode(s).
Security Audit
Load: Read("${CLAUDE_SKILL_DIR}/references/security-audit-guide.md") for the full checklist.
Key cross-file analysis patterns:
- Data flow tracing: Track user input from entry point → processing → storage
- Auth boundary verification: Ensure all protected routes check auth
- Secret detection: Scan for hardcoded credentials, API keys, tokens
- Injection surfaces: SQL, command, template injection across file boundaries
- OWASP Top 10 mapping: Classify findings by OWASP category
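The secret-detection pass above can be sketched as a small line scanner. This is a minimal illustration, not the skill's actual implementation; the pattern list is a deliberately short, hypothetical sample of the kinds of signatures to grep for:

```typescript
// Minimal secret-detection sketch. The regexes below are illustrative
// examples (Stripe-style keys, GitHub tokens, bearer tokens, hardcoded
// passwords), not an exhaustive ruleset.
const SECRET_PATTERNS: { name: string; re: RegExp }[] = [
  { name: "Stripe-style key", re: /sk-(live|test)-[A-Za-z0-9]{8,}/ },
  { name: "GitHub token", re: /ghp_[A-Za-z0-9]{20,}/ },
  { name: "Bearer token", re: /Bearer\s+[A-Za-z0-9._-]{16,}/ },
  { name: "Hardcoded password", re: /password\s*[:=]\s*['"][^'"]{4,}['"]/i },
];

function findSecrets(source: string): { name: string; line: number }[] {
  const findings: { name: string; line: number }[] = [];
  source.split("\n").forEach((text, i) => {
    for (const { name, re } of SECRET_PATTERNS) {
      // Report a 1-based line number so findings map to file:line refs.
      if (re.test(text)) findings.push({ name, line: i + 1 });
    }
  });
  return findings;
}
```

In a real audit, each hit would still be manually verified against the severity matrix before it becomes a finding.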
Architecture Review
Load: Read("${CLAUDE_SKILL_DIR}/references/architecture-review-guide.md") for the full guide.
Key analysis patterns:
- Dependency direction: Verify imports flow inward (clean architecture)
- Circular dependencies: Detect import cycles across modules
- Layer violations: Business logic in controllers, DB in routes, etc.
- Pattern consistency: Same problem solved differently across codebase
- Coupling analysis: Count cross-module imports, identify tight coupling
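Circular-dependency detection over the import graph is a depth-first search for a back-edge. The sketch below assumes the imports have already been parsed into a module-to-imports map; the module names in the test are hypothetical:

```typescript
// Find one import cycle (if any) with DFS over a module → imports map.
type ImportGraph = Record<string, string[]>;

function findCycle(graph: ImportGraph): string[] | null {
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // nodes fully explored, cycle-free
  const stack: string[] = [];

  function dfs(mod: string): string[] | null {
    if (visiting.has(mod)) {
      // Back-edge found: slice the current path into a closed cycle.
      return stack.slice(stack.indexOf(mod)).concat(mod);
    }
    if (done.has(mod)) return null;
    visiting.add(mod);
    stack.push(mod);
    for (const dep of graph[mod] ?? []) {
      const cycle = dfs(dep);
      if (cycle) return cycle;
    }
    stack.pop();
    visiting.delete(mod);
    done.add(mod);
    return null;
  }

  for (const mod of Object.keys(graph)) {
    const cycle = dfs(mod);
    if (cycle) return cycle;
  }
  return null;
}
```

The returned path (e.g. `auth/service → users/repo → auth/service`) feeds directly into the resolution-pattern table in the architecture guide.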
Dependency Audit
Load: Read("${CLAUDE_SKILL_DIR}/references/dependency-audit-guide.md") for the full guide.
Key analysis patterns:
- Known CVEs: Check versions against known vulnerabilities
- License compliance: Identify copyleft licenses in proprietary code
- Version currency: Flag significantly outdated dependencies
- Transitive risk: Identify deep dependency chains
- Unused dependencies: Detect installed but never imported packages
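The unused-dependency check reduces to comparing declared package names against package roots that actually appear in import statements. A minimal sketch, assuming sources have been loaded as strings (the package names in the test are hypothetical):

```typescript
// Flag declared dependencies that never appear in any import/require.
function unusedDependencies(declared: string[], sources: string[]): string[] {
  // Matches `from '<pkg>'` and `require('<pkg>')`, skipping relative paths.
  const importRe = /(?:from\s+|require\()\s*['"]([^'"./][^'"]*)['"]/g;
  const used = new Set<string>();
  for (const src of sources) {
    for (const m of src.matchAll(importRe)) {
      // Normalize subpath imports to the package root:
      // "lodash/merge" → "lodash", "@scope/pkg/x" → "@scope/pkg".
      const parts = m[1].split("/");
      used.add(m[1].startsWith("@") ? parts.slice(0, 2).join("/") : parts[0]);
    }
  }
  return declared.filter((dep) => !used.has(dep));
}
```

Tools like `depcheck` do this more robustly (dynamic imports, config-file references); this sketch only shows the core idea.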
Progressive Output (CC 2.1.76)
Output findings incrementally as each audit mode completes — don't batch until the report:
- Security findings first — show critical/high vulnerabilities immediately, don't wait for architecture review
- Architecture findings — show dependency direction violations, circular deps as they surface
- Dependency findings — show CVE matches, license compliance issues
For multi-mode audits (Full), each mode's findings appear as they complete. This lets users act on critical security findings while architecture analysis is still running.
STEP 4: Generate Report
Load the report template: Read("${CLAUDE_SKILL_DIR}/assets/audit-report-template.md").
Report structure and severity classification: Read("${CLAUDE_SKILL_DIR}/references/report-structure.md") for finding table format, severity breakdown (CRITICAL/HIGH/MEDIUM/LOW with timelines), and architecture diagram conventions.
Severity matrix: Read("${CLAUDE_SKILL_DIR}/assets/severity-matrix.md") for classification criteria.
Completion Checklist
Before finalizing the report, verify with Read("${CLAUDE_SKILL_DIR}/checklists/audit-completion.md").
When NOT to Use
| Situation | Use Instead |
|---|---|
| Small targeted check (1-5 files) | Direct Read + analysis |
| CI/CD automated scanning | security-scanning skill |
| Multi-agent graded verification | /ork:verify |
| Exploring unfamiliar codebase | /ork:explore |
| Codebase > 125K LOC (exceeds 1M) | /ork:verify (chunked approach) |
Related Skills
- security-scanning — Automated scanner integration (npm audit, Semgrep, etc.)
- ork:security-patterns — Security architecture patterns and OWASP vulnerability classification
- ork:architecture-patterns — Architectural pattern reference
- ork:quality-gates — Quality assessment criteria
- ork:verify — Multi-agent verification (fallback for codebases exceeding 1M context)
References
Load on demand with Read("${CLAUDE_SKILL_DIR}/references/<file>"):
| File | Content |
|---|---|
| references/security-audit-guide.md | Cross-file vulnerability patterns |
| references/architecture-review-guide.md | Pattern and coupling analysis |
| references/dependency-audit-guide.md | CVE, license, currency checks |
| references/token-budget-planning.md | Budget estimation and fallback dialog |
| references/token-estimation.md | File type ratios and budget planning |
| references/report-structure.md | Report format and codebase loading strategy |
| references/audit-scope-dialog.md | AskUserQuestion mode/scope dialog |
| assets/audit-report-template.md | Structured output format |
| assets/severity-matrix.md | Finding classification criteria |
| checklists/audit-completion.md | Pre-report verification |
| scripts/estimate-tokens.sh | Automated LOC-to-token estimation |
Rules (2)
Classify audit findings by severity with evidence from actual code locations — HIGH
Classify Findings by Severity with Evidence
Why
Without evidence-backed severity classification, findings are either all "CRITICAL" (causing alert fatigue) or uniformly "MEDIUM" (hiding real risks). Both patterns erode trust in audit reports.
Rule
Every finding must include:
- Severity level (CRITICAL / HIGH / MEDIUM / LOW)
- File path and line number
- Code snippet showing the vulnerability
- Exploitation scenario or impact statement
- OWASP/CWE classification where applicable
Incorrect — vague findings without evidence
## Findings
| # | Severity | Finding |
|---|----------|---------|
| 1 | HIGH | SQL injection possible |
| 2 | MEDIUM | Auth might be missing |
| 3 | HIGH | Dependencies outdated |

Problems:
- No file paths — developer cannot locate the issue
- No code evidence — finding cannot be verified
- No exploitation scenario — severity is arbitrary
- "might be missing" is not a finding, it is speculation
Correct — evidence-backed severity classification
## Findings
| # | Severity | Category | File(s) | Finding |
|---|----------|----------|---------|---------|
| 1 | CRITICAL | Injection (CWE-89) | src/api/users.ts:42 | SQL injection via string interpolation |
### Finding 1: SQL Injection (CRITICAL)
**Location:** `src/api/users.ts:42`
**OWASP:** A03:2021 Injection | **CWE:** CWE-89
**Vulnerable code:**
```typescript
const query = `SELECT * FROM users WHERE id = ${req.params.id}`;
await db.execute(query);
```

**Exploitation:** Attacker sends `id=1; DROP TABLE users--` via `GET /api/users/:id`. No parameterization or input validation exists between the route handler (line 38) and the query execution (line 42).

**Remediation:**

```typescript
const query = "SELECT * FROM users WHERE id = $1";
await db.execute(query, [req.params.id]);
```
## Severity Classification Criteria
| Severity | Criteria | Example |
|----------|----------|---------|
| CRITICAL | Exploitable without auth, data loss/breach | SQL injection, RCE, auth bypass |
| HIGH | Exploitable with auth, significant impact | IDOR, privilege escalation, SSRF |
| MEDIUM | Requires specific conditions to exploit | CSRF, info disclosure, weak crypto |
| LOW | Minimal impact, defense-in-depth | Missing headers, verbose errors |
### Declare audit scope upfront before loading files to avoid context window exhaustion — HIGH
# Declare Audit Scope Before Loading
## Why
The 1M context window is large but finite. Loading every file without a scope declaration means generated code, test fixtures, and vendor files consume tokens that should go to critical source files.
## Rule
Before loading any files, produce a scope declaration that includes:
1. Audit mode (security / architecture / dependency / full)
2. Directory inclusion list
3. File exclusion patterns
4. Estimated token budget vs available budget
## Incorrect — audit everything without scoping
```markdown
## Audit Plan
1. Load all files in the repository
2. Analyze everything
3. Generate report
```

```bash
# Loads everything including generated files
find . -name "*.ts" -o -name "*.js" | xargs cat
```

Problems:
- Generated files (dist/, plugins/) consume 40%+ of context
- Test fixtures and snapshots add noise
- No priority ordering means entry points may be truncated
Correct — declare scope with budget allocation
## Audit Scope Declaration
**Mode:** Security audit
**Target directories:** src/api/, src/auth/, src/middleware/
**Exclusions:** dist/, node_modules/, *.test.ts, *.spec.ts, __snapshots__/
**Token budget:** ~950K available (1M GA), estimated usage: ~85K (9%)
**Priority order:**
1. Entry points (src/index.ts, src/app.ts)
2. Auth boundary (src/auth/*, src/middleware/auth*)
3. Data access layer (src/db/*, src/repositories/*)
4. API routes (src/api/*)

```bash
# Scoped file discovery with exclusions
find src/api src/auth src/middleware \
  -name "*.ts" \
  ! -name "*.test.ts" \
  ! -name "*.spec.ts" \
  ! -path "*/__snapshots__/*"
```

Checklist
| Check | Required |
|---|---|
| Audit mode declared | Yes |
| Target directories listed | Yes |
| Exclusion patterns defined | Yes |
| Token budget estimated | Yes |
| Priority loading order set | Yes |
References (7)
Architecture Review Guide
Architecture Review Guide
Pattern consistency, coupling analysis, and structural health assessment.
Dependency Direction Analysis
Clean Architecture Layers
┌─────────────────────────────┐
│ Presentation (routes, UI) │ ← Outermost
├─────────────────────────────┤
│ Application (use cases) │
├─────────────────────────────┤
│ Domain (entities, rules) │
├─────────────────────────────┤
│ Infrastructure (DB, APIs) │ ← Outermost
└─────────────────────────────┘
Rule: Dependencies point INWARD only.
Violation: Domain importing from Infrastructure.Detection Method
- Map each file to a layer based on directory structure
- Parse imports/requires in each file
- Flag imports that point outward (wrong direction)
```typescript
// VIOLATION EXAMPLE:
// src/domain/user.ts imports from src/infrastructure/db.ts
import { dbPool } from '../infrastructure/db' // Wrong direction!

// CORRECT:
// src/domain/user.ts defines an interface;
// src/infrastructure/db.ts implements it
```

Circular Dependency Detection
What to Look For
```typescript
// File A imports File B, and File B imports File A:

// src/auth/service.ts
import { UserRepo } from '../users/repo'

// src/users/repo.ts
import { AuthService } from '../auth/service' // Circular!
```

Resolution Patterns
| Pattern | When to Use |
|---|---|
| Extract interface | Both modules depend on abstraction |
| Merge modules | Modules are conceptually one unit |
| Event-based | Decouple with pub/sub or event emitter |
| Dependency injection | Inject at runtime, not import time |
Pattern Consistency Check
Look for the same problem solved differently across the codebase:
| Area | Inconsistency Example |
|---|---|
| Error handling | Some files throw, others return Result, others use callbacks |
| Validation | Zod in some files, Joi in others, manual checks elsewhere |
| Data access | Raw SQL in some, ORM in others, mixed in same file |
| Logging | console.log, winston, pino, custom logger all present |
| Config | env vars, config files, hardcoded, mixed approaches |
| HTTP clients | fetch, axios, got, node-fetch all imported |
Scoring
| Consistency | Score |
|---|---|
| Single pattern everywhere | 10/10 |
| Primary + 1 legacy pattern | 7/10 |
| 2-3 competing patterns | 4/10 |
| No discernible pattern | 1/10 |
Coupling Analysis
Metrics to Calculate
| Metric | Formula | Healthy Range |
|---|---|---|
| Afferent coupling (Ca) | Modules that depend ON this module | < 10 |
| Efferent coupling (Ce) | Modules this module depends ON | < 8 |
| Instability (I) | Ce / (Ca + Ce) | Varies by layer |
| Abstractness (A) | Interfaces / Total types | > 0.3 for core |
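The instability and abstractness formulas in the table can be computed directly from import and type counts. A minimal sketch (the module counts in the test are hypothetical; the main-sequence distance is an addition from the same metric family, not part of the table above):

```typescript
// Coupling metrics for one module, per the table above.
interface ModuleMetrics {
  afferent: number;   // Ca: modules that depend ON this module
  efferent: number;   // Ce: modules this module depends ON
  interfaces: number; // exported abstract types
  totalTypes: number; // all exported types
}

// I = Ce / (Ca + Ce); 0 = maximally stable, 1 = maximally unstable.
function instability(m: ModuleMetrics): number {
  const denom = m.afferent + m.efferent;
  return denom === 0 ? 0 : m.efferent / denom;
}

// A = interfaces / total types; > 0.3 suggested for core modules.
function abstractness(m: ModuleMetrics): number {
  return m.totalTypes === 0 ? 0 : m.interfaces / m.totalTypes;
}

// Distance from the "main sequence" |A + I - 1|; 0 is the ideal balance.
function mainSequenceDistance(m: ModuleMetrics): number {
  return Math.abs(abstractness(m) + instability(m) - 1);
}
```

A stable, concrete module (high Ca, low A) and an unstable, abstract one both sit near the main sequence; modules far from it are the ones to flag.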
Module Boundary Health
# Count imports between directories:
src/auth/ → src/users/ : 5 imports (acceptable)
src/auth/ → src/payments/ : 12 imports (high coupling!)
src/utils/ → src/auth/ : 0 imports (good, utils is generic)

Red Flags
- Module with > 15 external dependents (God module)
- Utility file with > 500 lines (needs splitting)
- Circular import chains > 2 files deep
- Config/env imported in > 20 files (use DI instead)
Layering Violations
| Violation | Example | Fix |
|---|---|---|
| DB in route handler | router.get('/', async (req, res) => { db.query(...) }) | Extract to service layer |
| Business logic in middleware | Auth middleware doing role-based access with complex rules | Move to use-case layer |
| HTTP in domain | Domain entity calling external API | Inject via port/adapter |
| UI logic in API | API returning HTML-formatted strings | Return data, format in frontend |
Architecture Diagram Output
Generate ASCII diagram showing module dependencies:
┌──────────┐ ┌──────────┐ ┌──────────┐
│ routes │────▶│ services │────▶│ repos │
└──────────┘ └──────────┘ └──────────┘
│ │ │
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│middleware│ │ domain │ │ db │
└──────────┘ └──────────┘ └──────────┘
Legend: ──▶ = imports from
Violations marked with ✗

Audit Scope Dialog
Audit Scope Dialog
STEP 0: Verify User Intent with AskUserQuestion
BEFORE creating tasks, clarify audit scope:
AskUserQuestion(
questions=[
{
"question": "What type of audit do you want to run?",
"header": "Audit mode",
"options": [
{"label": "Full audit (Recommended)", "description": "Security + architecture + dependencies in one pass", "markdown": "```\nFull Audit (1M context)\n───────────────────────\n Load entire codebase ──▶\n ┌────────────────────────┐\n │ Security OWASP Top10│\n │ Architecture patterns │\n │ Dependencies CVEs │\n │ Cross-file data flow │\n └────────────────────────┘\n Single pass: Opus 4.6 sees\n ALL files simultaneously\n Output: Prioritized findings\n```"},
{"label": "Security audit", "description": "Cross-file vulnerability analysis, data flow tracing, OWASP mapping", "markdown": "```\nSecurity Audit\n──────────────\n ┌──────────────────────┐\n │ OWASP mapping │\n │ Data flow tracing │\n │ input ──▶ DB ──▶ output\n │ Cross-file vulns │\n │ Auth/AuthZ review │\n │ Secret detection │\n └──────────────────────┘\n Finds vulns that chunked\n analysis misses\n```"},
{"label": "Architecture review", "description": "Pattern consistency, coupling analysis, dependency violations", "markdown": "```\nArchitecture Review\n───────────────────\n ┌──────────────────────┐\n │ Pattern consistency │\n │ Coupling metrics │\n │ A ←→ B (tight) │\n │ C ──▶ D (clean) │\n │ Dependency violations│\n │ Layer enforcement │\n └──────────────────────┘\n Cross-file analysis of\n architectural integrity\n```"},
{"label": "Dependency audit", "description": "License compliance, CVE checking, version currency", "markdown": "```\nDependency Audit\n────────────────\n ┌──────────────────────┐\n │ CVE scan N vuls│\n │ License check ✓/✗ │\n │ Version drift N old │\n │ Unused deps N │\n │ Transitive risk │\n └──────────────────────┘\n npm audit + pip-audit +\n license compatibility\n```"}
],
"multiSelect": true
},
{
"question": "What should be audited?",
"header": "Scope",
"options": [
{"label": "Entire codebase", "description": "Load all source files into context", "markdown": "```\nEntire Codebase\n───────────────\n Load ALL source files\n into 1M context window\n\n Best for: first audit,\n full security review,\n architecture assessment\n ⚠ Requires Tier 4+ API\n```"},
{"label": "Specific directory", "description": "Focus on a subdirectory (e.g., src/api/)", "markdown": "```\nSpecific Directory\n──────────────────\n Load one subtree:\n src/api/ or src/auth/\n\n Best for: targeted review,\n post-change validation,\n smaller context budget\n```"},
{"label": "Changed files only", "description": "Audit only files changed vs main branch", "markdown": "```\nChanged Files Only\n──────────────────\n git diff main...HEAD\n Load only modified files\n\n Best for: pre-merge check,\n PR-scoped audit,\n incremental review\n```"}
],
"multiSelect": false
}
]
)

Based on answers, adjust workflow:
- Full audit: All 3 domains, maximum context usage
- Security only: Focus token budget on source + config files
- Architecture only: Focus on module boundaries, imports, interfaces
- Dependency only: Focus on lock files, manifests, import maps
- Changed files only: Use `git diff --name-only main...HEAD` to scope
Dependency Audit Guide
Dependency Audit Guide
License compliance, CVE checking, and version currency analysis.
CVE Checking
Automated Scanners
# JavaScript/TypeScript
npm audit --json
npx better-npm-audit audit
# Python
pip-audit --format=json
safety check --json
# Go
govulncheck ./...
# Rust
cargo audit

Manual CVE Check
For dependencies without scanner coverage:
- Check version in `package.json` / `pyproject.toml` / `go.mod`
- Search NVD or OSV for the package name
- Compare installed version against affected version ranges
- Classify by CVSS score
| CVSS Score | Severity | Action |
|---|---|---|
| 9.0-10.0 | CRITICAL | Update immediately |
| 7.0-8.9 | HIGH | Update within 1 week |
| 4.0-6.9 | MEDIUM | Update within 1 month |
| 0.1-3.9 | LOW | Track in backlog |
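The CVSS thresholds in the table map mechanically to severity tiers; a sketch of that mapping (scores of exactly 0 fall outside the table and are treated as NONE here):

```typescript
// Map a CVSS base score to the action tiers in the table above.
function cvssSeverity(score: number): "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "NONE" {
  if (score >= 9.0) return "CRITICAL"; // update immediately
  if (score >= 7.0) return "HIGH";     // update within 1 week
  if (score >= 4.0) return "MEDIUM";   // update within 1 month
  if (score > 0) return "LOW";         // track in backlog
  return "NONE";
}
```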
License Compliance
License Risk Tiers
| Tier | Licenses | Risk in Proprietary Code |
|---|---|---|
| Permissive | MIT, BSD-2, BSD-3, ISC, Apache-2.0 | Safe |
| Weak copyleft | LGPL-2.1, LGPL-3.0, MPL-2.0 | Safe if dynamically linked |
| Strong copyleft | GPL-2.0, GPL-3.0, AGPL-3.0 | Requires source disclosure |
| Unknown | UNLICENSED, custom | Review manually |
Detection Method
# JavaScript
npx license-checker --json --production
# Python
pip-licenses --format=json --with-urls
# Check for problematic licenses
npx license-checker --failOn "GPL-2.0;GPL-3.0;AGPL-3.0"

What to Flag
- Any GPL/AGPL dependency in proprietary codebase → CRITICAL
- UNLICENSED dependencies → HIGH (legal risk)
- Dependencies with no license file → MEDIUM
- License changed between versions → LOW (track)
Version Currency
Currency Classification
| Status | Definition | Example |
|---|---|---|
| Current | Within 1 minor version of latest | react 19.1 when 19.2 is latest |
| Stale | 1+ major version behind | react 18.x when 19.x is latest |
| Outdated | 2+ major versions behind | react 17.x when 19.x is latest |
| EOL | No longer maintained | moment.js, request |
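The currency tiers in the table can be derived from a semver comparison. A sketch under two assumptions not stated in the table: versions are plain `major.minor.patch` strings, and same-major drift beyond one minor is treated as Stale (a case the table leaves open). EOL detection needs registry metadata and is out of scope here:

```typescript
// Classify version currency per the table above.
function currency(installed: string, latest: string): "Current" | "Stale" | "Outdated" {
  const [iMaj, iMin] = installed.split(".").map(Number);
  const [lMaj, lMin] = latest.split(".").map(Number);
  const majorsBehind = lMaj - iMaj;
  if (majorsBehind >= 2) return "Outdated"; // 2+ majors behind
  if (majorsBehind >= 1) return "Stale";    // 1 major behind
  // Same major: "Current" means within one minor of latest.
  return lMin - iMin <= 1 ? "Current" : "Stale";
}
```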
High-Risk Outdated Patterns
| Pattern | Risk |
|---|---|
| Framework 2+ majors behind | Missing security patches |
| Auth library outdated | Known vulnerabilities |
| TLS/crypto library outdated | Weak algorithms |
| ORM/DB driver outdated | SQL injection patches missing |
Transitive Dependency Risk
Deep Chains
your-app
└── package-a@1.0.0
└── package-b@2.0.0
└── package-c@3.0.0 ← vulnerability here

Risk: You don't control package-c, and updating package-a may not update it.
Detection
# Show dependency tree
npm ls --all --json | jq '.dependencies'
# Find deep chains (>4 levels)
npm ls --all 2>/dev/null | grep -E "^.{16,}" | head -20

Mitigation
| Strategy | When |
|---|---|
| overrides (npm) / resolutions (yarn) | Force specific version |
| Replace parent package | If parent is unmaintained |
| Vendor and patch | Last resort for critical fixes |
Unused Dependencies
Detection
# JavaScript
npx depcheck
# Python
pip-extra-reqs --ignore-module=tests .

What to Flag
- Installed but never imported → MEDIUM (bloat, attack surface)
- Dev dependency in production deps → LOW (no runtime risk)
- Multiple packages for same purpose → LOW (e.g., both lodash and underscore)
Report Structure
Audit Report Structure
Report Format
# Audit Report: {project-name}
**Date:** {date} | **Mode:** {mode} | **Files loaded:** {count} | **LOC:** {loc}
## Executive Summary
{1-3 sentences: overall health, critical findings count}
## Findings
| # | Severity | Category | File(s) | Finding | Remediation |
|---|----------|----------|---------|---------|-------------|
| 1 | CRITICAL | Security | src/auth.ts:42 | ... | ... |
## Severity Breakdown
- CRITICAL: {n} (must fix before deploy)
- HIGH: {n} (fix within sprint)
- MEDIUM: {n} (fix within quarter)
- LOW: {n} (track and address)
## Architecture Diagram
{ASCII diagram of module dependencies}
## Recommendations
{Prioritized action items}

Severity Classification
| Level | Criteria | Timeline |
|---|---|---|
| CRITICAL | Exploitable vulnerability, data loss risk, auth bypass | Must fix before deploy |
| HIGH | Security weakness, major arch violation, EOL dependency | Fix within sprint |
| MEDIUM | Code smell, minor arch inconsistency, stale dependency | Fix within quarter |
| LOW | Style issue, minor improvement, documentation gap | Track and address |
Codebase Loading Strategy
- Glob all source files matching inclusion patterns
- Sort by priority: entry points -> core modules -> utilities -> config
- Read files in parallel using multiple Read tool calls per message
- Track loaded tokens to stay within budget
Inclusion Patterns (by language)
# TypeScript/JavaScript
**/*.ts **/*.tsx **/*.js **/*.jsx
**/package.json **/tsconfig.json
# Python
**/*.py
**/pyproject.toml **/setup.cfg **/requirements*.txt
# Config
**/.env.example **/docker-compose*.yml **/Dockerfile
**/*.yaml **/*.yml (non-lock)Reading Pattern
Read files in batches of 10-15 per message for efficiency:
# Batch 1: Entry points and config
Read("src/index.ts")
Read("src/app.ts")
Read("package.json")
Read("tsconfig.json")
# ... up to 15 files
# Batch 2: Core modules
Read("src/api/routes.ts")
Read("src/db/connection.ts")
# ... next batch

Security Audit Guide
Security Audit Guide
Cross-file vulnerability analysis patterns for whole-codebase audits.
Data Flow Tracing
Trace user input from entry to storage across file boundaries:
Entry Point → Validation → Processing → Storage
(route.ts) (middleware) (service.ts) (repo.ts)

What to Check at Each Stage
| Stage | Check | Severity if Missing |
|---|---|---|
| Entry | Input validation, type coercion | HIGH |
| Validation | Schema validation, sanitization | CRITICAL |
| Processing | Business logic auth checks | HIGH |
| Storage | Parameterized queries, encoding | CRITICAL |
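The entry, validation, and storage stages can be sketched as plain functions so each check is visible in isolation. The names (`validateId`, `buildQuery`) and the route shape are hypothetical, not part of any audited codebase:

```typescript
// Entry + validation stage: coerce and reject malformed input early,
// before it reaches processing or storage.
function validateId(raw: string): number {
  const id = Number(raw);
  if (!Number.isInteger(id) || id <= 0) throw new Error("invalid id");
  return id;
}

// Storage stage: parameterized query shape — SQL text and values stay
// separate, so user input can never alter the query structure.
function buildQuery(id: number): { text: string; values: number[] } {
  return { text: "SELECT * FROM users WHERE id = $1", values: [id] };
}
```

During the audit, the question at each stage is whether an equivalent check exists on every path from entry point to storage, not just the happy path.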
Cross-File Vulnerability Patterns
1. Auth Bypass via Missing Middleware
# PATTERN: Route defined without auth middleware
router.get('/admin/users', getUsersHandler) # No authMiddleware!
# Compare against protected routes:
router.get('/admin/settings', authMiddleware, getSettingsHandler)

Detection: Glob all route files, check each handler has auth middleware.
2. SQL Injection via String Interpolation
# PATTERN: Variable in SQL string (any file)
query(`SELECT * FROM users WHERE id = '${userId}'`)
# Safe pattern:
query('SELECT * FROM users WHERE id = $1', [userId])

Detection: Grep for template literals containing SQL keywords.
3. Command Injection via Shell Exec
# PATTERN: User input in exec/spawn
exec(`git log --author="${username}"`)
# Safe pattern:
execFile('git', ['log', `--author=${username}`])

Detection: Grep for exec(, execSync(, spawn( with template literals.
4. Secret Leakage
# PATTERN: Hardcoded secrets
const API_KEY = 'sk-live-abc123...'
const password = 'admin123'
# PATTERN: Secrets in error messages
throw new Error(`Auth failed for ${password}`)
# PATTERN: Secrets in logs
console.log(`Connecting with key: ${apiKey}`)

Detection: Grep for common secret patterns (sk-, ghp_, Bearer , password assignments).
5. SSRF via Unvalidated URLs
# PATTERN: User-controlled URL in fetch/axios
const response = await fetch(req.body.url)
# Safe pattern:
const url = new URL(req.body.url)
if (!ALLOWED_HOSTS.includes(url.hostname)) throw new Error('Blocked')

6. Path Traversal
# PATTERN: User input in file path
const filePath = path.join(uploadDir, req.params.filename)
// filename could be '../../etc/passwd'
# Safe pattern:
const resolved = path.resolve(uploadDir, req.params.filename)
if (!resolved.startsWith(uploadDir)) throw new Error('Blocked')

OWASP Top 10 Mapping
| OWASP | What to Look For |
|---|---|
| A01 Broken Access Control | Missing auth middleware, IDOR, privilege escalation |
| A02 Cryptographic Failures | Weak hashing, HTTP for sensitive data, hardcoded keys |
| A03 Injection | SQL, command, template injection across boundaries |
| A04 Insecure Design | Missing rate limiting, no abuse prevention |
| A05 Security Misconfiguration | Debug mode in prod, default credentials, CORS * |
| A06 Vulnerable Components | Outdated deps with known CVEs |
| A07 Auth Failures | Weak passwords, no MFA, session fixation |
| A08 Data Integrity | Unsigned updates, CI/CD without verification |
| A09 Logging Failures | Missing audit logs, secrets in logs |
| A10 SSRF | Unvalidated URLs in server-side requests |
Severity Classification
| Severity | Criteria |
|---|---|
| CRITICAL | Exploitable without authentication, data breach risk |
| HIGH | Exploitable with low-privilege access, system compromise |
| MEDIUM | Requires specific conditions, limited impact |
| LOW | Informational, defense-in-depth improvement |
Token Budget Planning
Token Budget Planning
Run Token Estimation
# Use the estimation script
bash ${CLAUDE_SKILL_DIR}/scripts/estimate-tokens.sh /path/to/project

Manual Estimation Rules
| File Type | Tokens per Line (approx) |
|---|---|
| TypeScript/JavaScript | ~8 tokens/line |
| Python | ~7 tokens/line |
| JSON/YAML config | ~5 tokens/line |
| Markdown docs | ~6 tokens/line |
| CSS/SCSS | ~6 tokens/line |
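The per-line ratios above can be applied mechanically. A minimal sketch; the extension map mirrors the table, unknown file types fall back to the ~7.5 average used elsewhere in this guide, and the 950K default budget assumes the 1M window minus prompt reserve:

```typescript
// Estimate token usage from line counts, using the ratios in the table above.
const TOKENS_PER_LINE: Record<string, number> = {
  ts: 8, tsx: 8, js: 8, jsx: 8, // TypeScript/JavaScript
  py: 7,                        // Python
  json: 5, yaml: 5, yml: 5,     // config
  md: 6, css: 6, scss: 6,       // docs and styles
};

function estimateTokens(files: { ext: string; lines: number }[]): number {
  return files.reduce(
    (sum, f) => sum + f.lines * (TOKENS_PER_LINE[f.ext] ?? 7.5),
    0,
  );
}

// Does the estimate fit the code budget (1M window minus prompt reserve)?
function fitsBudget(files: { ext: string; lines: number }[], budget = 950_000): boolean {
  return estimateTokens(files) <= budget;
}
```

If `fitsBudget` returns false, fall through to the exclusion and priority-loading steps below rather than loading blindly.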
Budget Allocation
| Context Size | Available for Code | Fits LOC (approx) |
|---|---|---|
| 200K | ~150K tokens | ~20K LOC |
| 1M (standard) | ~950K tokens | ~125K LOC |
Auto-Exclusion List
Always exclude from loading:
- node_modules/, vendor/, .venv/, __pycache__/
- dist/, build/, .next/, out/
- *.min.js, *.map, *.lock (read lock files separately for deps audit)
- Binary files, images, fonts
- Test fixtures and snapshots (unless auditing tests)
- Generated files (protobuf, graphql codegen)
If Codebase Exceeds Budget
- Priority loading: Entry points first, then imported modules
- Directory scoping: Ask user to narrow to specific directories
- Fallback: Recommend `/ork:verify` for the multi-agent approach (only needed for codebases > 125K LOC)
# Fallback suggestion
AskUserQuestion(
questions=[{
"question": "Codebase exceeds context window. How to proceed?",
"header": "Too large",
"options": [
{"label": "Narrow scope", "description": "Audit specific directories only"},
{"label": "Use /ork:verify instead", "description": "Chunked multi-agent approach (works with any context size)"},
{"label": "Priority loading", "description": "Load entry points + critical paths only"}
],
"multiSelect": false
}]
)

Token Estimation
Token Estimation Guide
Planning context budget for whole-codebase loading.
Token Ratios by File Type
| File Type | Tokens/Line | Tokens/KB | Notes |
|---|---|---|---|
| TypeScript/JavaScript | ~8 | ~320 | Variable names inflate count |
| Python | ~7 | ~280 | Indentation is efficient |
| Go | ~7 | ~280 | Verbose but predictable |
| JSON | ~5 | ~200 | High repetition, low entropy |
| YAML | ~5 | ~200 | Similar to JSON |
| Markdown | ~6 | ~240 | Prose-heavy content |
| CSS/SCSS | ~6 | ~240 | Property-value pairs |
| SQL | ~6 | ~240 | Keyword-heavy |
| HTML/JSX | ~9 | ~360 | Attribute-heavy markup |
| Protobuf/GraphQL schema | ~5 | ~200 | Declarative, repetitive |
Quick Estimation Formula
Total tokens ≈ Total LOC × 7.5 (average)

For more precision:

Total tokens ≈ (TS lines × 8) + (Py lines × 7) + (Config lines × 5) + (Other × 7)

Context Budget Planning
| Context Size | Total | Reserved for Prompt | Available for Code | Max LOC |
|---|---|---|---|---|
| 1M (standard) | 1,000,000 | ~50,000 | ~950,000 | ~125,000 |
| 200K (legacy) | 200,000 | ~50,000 | ~150,000 | ~20,000 |
Reserved for prompt includes: system prompt, skill content, analysis instructions, and output space.
Exclusion Priority
When codebase exceeds budget, exclude in this order:
- Always exclude: node_modules/, vendor/, .venv/, dist/, build/, .next/
- Exclude first: Test fixtures, snapshots, migration files, generated code
- Exclude second: Test files (unless auditing test quality)
- Exclude third: Documentation, README files
- Keep last: Source files, config, entry points
Loading Priority
When partially loading, prioritize in this order:
- Entry points: index.ts, main.py, app.ts, server.ts
- Route definitions: API routes, page routes
- Middleware/interceptors: Auth, validation, error handling
- Business logic: Services, use cases, domain models
- Data access: Repositories, ORM models, migrations
- Config: Environment config, feature flags, secrets management
- Utilities: Shared helpers, common functions
Real-World Sizing Examples
| Project Type | Typical LOC | Estimated Tokens | Fits in |
|---|---|---|---|
| Microservice | 5-15K | 40-120K | 1M (single-pass) |
| Small app | 15-30K | 120-240K | 1M (single-pass) |
| Medium app | 30-60K | 240-480K | 1M (single-pass) |
| Large app | 60-125K | 480K-950K | 1M (fits with scoping) |
| Large monolith | 125K+ | 950K+ | Directory-scoped or /ork:verify |
Checklists (1)
Audit Completion
Audit Completion Checklist
Verify before finalizing the audit report.
Pre-Report Verification
Coverage
- All source files in scope were loaded (check file count vs glob count)
- Entry points identified and traced
- Configuration files reviewed (env, docker, CI)
- Lock files checked (package-lock.json, poetry.lock, go.sum)
Security (if applicable)
- All public endpoints checked for auth middleware
- Data flow traced from input → storage for at least 3 critical paths
- Secret detection scan completed (grep for API keys, passwords, tokens)
- OWASP Top 10 categories all considered (mark N/A if not applicable)
- Third-party integrations checked for SSRF risk
- File upload/download paths checked for traversal
Architecture (if applicable)
- Dependency direction verified (imports flow inward)
- Circular dependencies checked
- Pattern consistency evaluated (error handling, validation, data access)
- Module coupling analyzed (cross-directory import counts)
- Layer violations identified
- ASCII architecture diagram generated
Dependencies (if applicable)
- npm audit / pip-audit / equivalent run
- License compliance checked (no GPL in proprietary)
- Outdated packages identified with severity
- Unused dependencies flagged
- Transitive dependency risks assessed
Report Quality
- Every finding has specific file:line references
- Every finding has a remediation suggestion with code
- Severity classifications match the severity matrix
- No duplicate findings (same root cause reported once)
- False positives verified and removed
- Recommendations prioritized by impact and effort
- Executive summary accurately reflects findings
- Health score calculated correctly
Completeness
- All selected audit modes completed
- Context utilization reported (tokens used / available)
- Files list in appendix matches loaded files
- Report follows the template structure