Analytics
Query cross-project usage analytics. Use when reviewing agent, skill, hook, or team performance across OrchestKit projects. Also replay sessions, estimate costs, and view model delegation trends.
Auto-activated — this skill loads automatically when Claude detects matching context.
Cross-Project Analytics
Query local analytics data from ~/.claude/analytics/. All data is local-only, privacy-safe (hashed project IDs, no PII).
Subcommands
Parse the user's argument to determine which report to show. If no argument is provided, use AskUserQuestion to let them pick.
| Subcommand | Description | Data Source | Reference |
|---|---|---|---|
| agents | Top agents by frequency, duration, model breakdown | agent-usage.jsonl | ${CLAUDE_SKILL_DIR}/references/jq-queries.md |
| models | Model delegation breakdown (opus/sonnet/haiku) | agent-usage.jsonl | ${CLAUDE_SKILL_DIR}/references/jq-queries.md |
| skills | Top skills by invocation count | skill-usage.jsonl | ${CLAUDE_SKILL_DIR}/references/jq-queries.md |
| hooks | Slowest hooks and failure rates | hook-timing.jsonl | ${CLAUDE_SKILL_DIR}/references/jq-queries.md |
| teams | Team spawn counts, idle time, task completions | team-activity.jsonl | ${CLAUDE_SKILL_DIR}/references/jq-queries.md |
| session | Replay a session timeline with tools, tokens, timing | CC session JSONL | ${CLAUDE_SKILL_DIR}/references/session-replay.md |
| cost | Token cost estimation with cache savings | stats-cache.json | ${CLAUDE_SKILL_DIR}/references/cost-estimation.md |
| trends | Daily activity, model delegation, peak hours | stats-cache.json | ${CLAUDE_SKILL_DIR}/references/trends-analysis.md |
| summary | Unified view of all categories | All files | ${CLAUDE_SKILL_DIR}/references/jq-queries.md |
Quick Start Example
# Top agents with model breakdown
jq -s 'group_by(.agent) | map({agent: .[0].agent, count: length}) | sort_by(-.count)' ~/.claude/analytics/agent-usage.jsonl
# All-time token costs
jq '.modelUsage | to_entries | map({model: .key, input: .value.inputTokens, output: .value.outputTokens})' ~/.claude/stats-cache.json

Quick Subcommand Guide
agents, models, skills, hooks, teams, summary — Run the jq query from Read("${CLAUDE_SKILL_DIR}/references/jq-queries.md") for the matching subcommand. Present results as a markdown table.
session — Follow the 4-step process in Read("${CLAUDE_SKILL_DIR}/references/session-replay.md"): locate session file, resolve reference (latest/partial/full ID), parse JSONL, present timeline.
cost — Apply model-specific pricing from Read("${CLAUDE_SKILL_DIR}/references/cost-estimation.md") to CC's stats-cache.json. Show per-model breakdown, totals, and cache savings.
trends — Follow the 4-step process in Read("${CLAUDE_SKILL_DIR}/references/trends-analysis.md"): daily activity, model delegation, peak hours, all-time stats.
summary — Run all subcommands and present a unified view: total sessions, top 5 agents, top 5 skills, team activity, unique projects.
Data Files
Load Read("${CLAUDE_SKILL_DIR}/references/data-locations.md") for complete data source documentation.
| File | Contents |
|---|---|
| agent-usage.jsonl | Agent spawn events with model, duration, success |
| skill-usage.jsonl | Skill invocations |
| hook-timing.jsonl | Hook execution timing and failure rates |
| session-summary.jsonl | Session end summaries |
| task-usage.jsonl | Task completions |
| team-activity.jsonl | Team spawns and idle events |
Rules
Each category has individual rule files in rules/ loaded on-demand:
| Category | Rule | Impact | Key Pattern |
|---|---|---|---|
| Data Integrity | ${CLAUDE_SKILL_DIR}/rules/data-privacy.md | CRITICAL | Hash project IDs, never log PII, local-only |
| Cost & Tokens | ${CLAUDE_SKILL_DIR}/rules/cost-calculation.md | HIGH | Separate pricing per token type, cache savings |
| Performance | ${CLAUDE_SKILL_DIR}/rules/large-file-streaming.md | HIGH | Streaming jq for >50MB, rotation-aware queries |
| Visualization | ${CLAUDE_SKILL_DIR}/rules/visualization-recharts.md | HIGH | Recharts charts, ResponsiveContainer, tooltips |
| Visualization | ${CLAUDE_SKILL_DIR}/rules/visualization-dashboards.md | HIGH | Dashboard grids, stat cards, widget registry |
Total: 5 rules across 4 categories
References
| Reference | Contents |
|---|---|
| ${CLAUDE_SKILL_DIR}/references/jq-queries.md | Ready-to-run jq queries for all JSONL subcommands |
| ${CLAUDE_SKILL_DIR}/references/session-replay.md | Session JSONL parsing, timeline extraction, presentation |
| ${CLAUDE_SKILL_DIR}/references/cost-estimation.md | Pricing table, cost formula, daily cost queries |
| ${CLAUDE_SKILL_DIR}/references/trends-analysis.md | Daily activity, model delegation, peak hours queries |
| ${CLAUDE_SKILL_DIR}/references/data-locations.md | All data sources, file formats, CC session structure |
Important Notes
- All files are JSONL (newline-delimited JSON) format
- For large files (>50MB), use streaming `jq` without `-s` — load Read("${CLAUDE_SKILL_DIR}/rules/large-file-streaming.md")
- Rotated files: `<name>.<YYYY-MM>.jsonl` — include for historical queries
- `team` field only present during team/swarm sessions
- `pid` is a 12-char SHA256 hash — irreversible, for grouping only
Output Format
Present results as clean markdown tables. Include counts, percentages, and averages. If a file doesn't exist, note that no data has been collected yet for that category.
Related Skills
- `ork:explore` - Codebase exploration and analysis
- `ork:feedback` - Capture user feedback
- `ork:remember` - Store project knowledge
- `ork:doctor` - Health check diagnostics
Rules (5)
Calculate token costs accurately by separating cache reads from regular input pricing — HIGH
Token Cost Calculation
Calculate accurate token costs using model-specific pricing with cache-aware formulas.
Incorrect — treating all tokens equally:
// WRONG: ignores cache pricing difference (10x cheaper for reads)
const cost = totalTokens / 1_000_000 * 5.00;

Correct — separate pricing per token type:
const mtok = 1_000_000;
const pricing = { input: 5.00, output: 25.00, cache_read: 0.50, cache_write: 6.25 };
const cost =
(tokens.input / mtok) * pricing.input +
(tokens.output / mtok) * pricing.output +
(tokens.cache_read / mtok) * pricing.cache_read +
(tokens.cache_write / mtok) * pricing.cache_write;
// Cache savings: what it would cost if cache reads were full-price input
const withoutCache =
((tokens.input + tokens.cache_read) / mtok) * pricing.input +
(tokens.output / mtok) * pricing.output;
const savings = withoutCache - cost;

Key rules:
- Always calculate 4 token types separately: input, output, cache_read, cache_write
- Cache reads are 10x cheaper than regular input — this is the biggest cost factor
- Show cache savings prominently — users want to know caching is working
- When daily data only has total tokens (no split), estimate 70% input / 30% output
- Use `formatCost()` from `cost-estimator.ts` for consistent formatting
- Pricing is user-overridable via `~/.claude/orchestkit-pricing.json`
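The 70/30 fallback for unsplit daily totals can be sketched as a small helper. This is an illustrative sketch only: the function name is invented (not the real cost-estimator.ts API), and the pricing constants are the claude-opus-4-6 row from this skill's pricing table.

```typescript
// Illustrative sketch: estimate cost for a day where the stats cache only
// records a per-model token total, with no input/output split.
const MTOK = 1_000_000;
const OPUS = { input: 5.0, output: 25.0 }; // $/MTok, claude-opus-4-6

function estimateDailyCost(totalTokens: number): number {
  const inputTokens = totalTokens * 0.7;  // assumed 70% input share
  const outputTokens = totalTokens * 0.3; // assumed 30% output share
  return (inputTokens / MTOK) * OPUS.input + (outputTokens / MTOK) * OPUS.output;
}
```

Because the ratio is a rough average, label any cost derived this way as an estimate in the output.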
Protect analytics data privacy by hashing identifiers and stripping sensitive fields — CRITICAL
Analytics Data Privacy
All analytics data must be local-only and privacy-safe. Never log PII or reversible identifiers.
Incorrect — logging raw paths and usernames:
// WRONG: raw project path is PII
appendAnalytics('agent-usage.jsonl', {
project: process.env.CLAUDE_PROJECT_DIR, // /Users/john/secret-project
user: os.userInfo().username, // john
file: input.file_path, // /Users/john/secret-project/auth.ts
});

Correct — hashed identifiers, no PII:
// RIGHT: irreversible 12-char hash, no PII
appendAnalytics('agent-usage.jsonl', {
ts: new Date().toISOString(),
pid: hashProject(process.env.CLAUDE_PROJECT_DIR || ''), // "a3f8b2c1d4e5"
agent: agentType, // "code-quality-reviewer" (not PII)
model: modelName, // "claude-opus-4-6" (not PII)
duration_ms: durationMs,
success: true,
});

Key rules:
- Use `hashProject()` (12-char SHA256 truncation) for project identifiers — irreversible
- Never log file paths, usernames, environment variables, or file contents
- Agent names, skill names, and hook names are safe to log (not PII)
- All data stays in `~/.claude/analytics/` — never transmitted externally
- The `team` field uses team names (user-chosen), not paths
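A minimal sketch of the hashing scheme described above (SHA-256 of the project path, truncated to 12 hex characters); the real `hashProject()` implementation may differ in detail:

```typescript
import { createHash } from 'node:crypto';

// Irreversible project identifier: SHA-256 of the path, truncated to
// 12 hex chars. Deterministic, so it still works for grouping, but the
// original path cannot be recovered from it.
function hashProject(projectPath: string): string {
  return createHash('sha256').update(projectPath).digest('hex').slice(0, 12);
}
```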
Stream large analytics files with jq instead of slurping to prevent OOM crashes — HIGH
Large File Streaming
Handle large JSONL files (>50MB) with streaming queries and rotation-aware patterns.
Incorrect — slurping large files into memory:
# WRONG: -s loads entire file into memory — OOM on 500MB file
jq -s 'group_by(.agent) | map({agent: .[0].agent, count: length})' ~/.claude/analytics/agent-usage.jsonl

Correct — streaming without slurp:
# RIGHT: stream-process line by line, then aggregate
jq -r '.agent' ~/.claude/analytics/agent-usage.jsonl | sort | uniq -c | sort -rn
# RIGHT: for complex aggregations, use reduce
jq -n '[inputs | .agent] | group_by(.) | map({agent: .[0], count: length}) | sort_by(-.count)' ~/.claude/analytics/agent-usage.jsonl

Including rotated files for historical queries:
# Rotated files follow pattern: <name>.<YYYY-MM>.jsonl
# Include all months for full history
jq -r '.agent' ~/.claude/analytics/agent-usage.*.jsonl ~/.claude/analytics/agent-usage.jsonl 2>/dev/null | sort | uniq -c | sort -rn

Key rules:
- Check file size before querying: `ls -lh` the target file
- Files >50MB: use streaming `jq` without `-s` (slurp) flag
- Files <50MB: `-s` is fine for `group_by` operations
- Include rotated files (`*.YYYY-MM.jsonl`) when user asks for historical data
- For date-range queries, filter by `ts` field before aggregating
Design dashboard layouts with shared query keys and grid widgets for performance — HIGH
Dashboard Layout & Widgets
Build responsive dashboard grids with stat cards, widget composition, and real-time data patterns.
Incorrect — each widget fetches independently:
// WRONG: 5 widgets = 5 duplicate API calls
function Dashboard() {
return (
<div>
<RevenueWidget /> {/* fetches /api/metrics */}
<UsersWidget /> {/* fetches /api/metrics AGAIN */}
<OrdersWidget /> {/* fetches /api/metrics AGAIN */}
</div>
);
}

Correct — shared query with responsive grid layout:
// Dashboard grid with responsive breakpoints
function DashboardGrid() {
return (
<div className="grid gap-4 grid-cols-1 sm:grid-cols-2 lg:grid-cols-4">
<StatCard title="Revenue" value="$45,231" change="+12%" trend="up" />
<StatCard title="Users" value="2,350" change="+5.2%" trend="up" />
<StatCard title="Orders" value="1,234" change="-2.1%" trend="down" />
<StatCard title="Conversion" value="3.2%" change="+0.4%" trend="up" />
{/* Full-width chart spanning all columns */}
<div className="col-span-full">
<RevenueChart />
</div>
{/* Two-column layout for secondary charts */}
<div className="col-span-1 lg:col-span-2">
<TrafficChart />
</div>
<div className="col-span-1 lg:col-span-2">
<TopProductsTable />
</div>
</div>
);
}
// Stat card component
function StatCard({
title, value, change, trend,
}: {
title: string; value: string; change: string; trend: 'up' | 'down';
}) {
return (
<div className="rounded-lg border bg-card p-6">
<p className="text-sm text-muted-foreground">{title}</p>
<p className="text-2xl font-bold">{value}</p>
<p className={trend === 'up' ? 'text-green-600' : 'text-red-600'}>
{change}
</p>
</div>
);
}

Widget registry pattern for dynamic dashboards:
const widgetRegistry: Record<string, React.ComponentType<WidgetProps>> = {
'stat-card': StatCard,
'line-chart': LineChartWidget,
'bar-chart': BarChartWidget,
'data-table': DataTableWidget,
};
function DynamicDashboard({ config }: { config: DashboardConfig }) {
return (
<div className="grid gap-4 grid-cols-12">
{config.widgets.map((widget) => {
const Widget = widgetRegistry[widget.type];
return (
<div key={widget.id} className={`col-span-${widget.colSpan}`}>
<Suspense fallback={<WidgetSkeleton />}>
<Widget {...widget.props} />
</Suspense>
</div>
);
})}
</div>
);
}

Real-time updates with SSE + TanStack Query:
function useRealtimeMetrics() {
const queryClient = useQueryClient();
useEffect(() => {
const source = new EventSource('/api/metrics/stream');
source.onmessage = (event) => {
const metric = JSON.parse(event.data);
// Update specific query, not entire dashboard
queryClient.setQueryData(['metrics', metric.key], metric.value);
};
return () => source.close();
}, [queryClient]);
}

Key rules:
- Use CSS Grid with responsive breakpoints (`grid-cols-1 sm:grid-cols-2 lg:grid-cols-4`)
- Share data via TanStack Query with granular query keys (not per-widget fetch)
- Use `col-span-full` for full-width charts, `col-span-2` for half-width
- Skeleton loading for content areas during initial load
- SSE for server-to-client real-time, WebSocket for bidirectional
- Update specific query keys on real-time events, not entire cache
Configure Recharts with ResponsiveContainer and animation control for stable rendering — HIGH
Recharts Chart Components
Build Recharts 3.x chart components with responsive containers, custom tooltips, and accessibility.
Incorrect — chart without responsive container:
// WRONG: Fixed width, no container, animations on real-time data
function BrokenChart({ data }: { data: ChartData[] }) {
return (
<LineChart width={800} height={400} data={data}>
{/* Fixed width overflows on mobile */}
{/* Animation on every data update = jank */}
<Line type="monotone" dataKey="value" />
</LineChart>
);
}

Correct — responsive chart with proper setup:
import {
LineChart, Line, BarChart, Bar, PieChart, Pie, Cell,
CartesianGrid, XAxis, YAxis, Tooltip, Legend,
ResponsiveContainer, AreaChart, Area,
} from 'recharts';
// Line chart (trends over time)
function RevenueChart({ data }: { data: ChartData[] }) {
return (
<div className="h-[400px]"> {/* Parent MUST have height */}
<ResponsiveContainer width="100%" height="100%">
<LineChart data={data}>
<CartesianGrid strokeDasharray="3 3" />
<XAxis dataKey="date" />
<YAxis />
<Tooltip content={<CustomTooltip />} />
<Legend />
<Line
type="monotone"
dataKey="revenue"
stroke="#8884d8"
strokeWidth={2}
dot={{ r: 4 }}
/>
</LineChart>
</ResponsiveContainer>
</div>
);
}
// Custom tooltip for branded UX
function CustomTooltip({ active, payload, label }: any) {
if (!active || !payload?.length) return null;
return (
<div className="rounded-lg border bg-background p-3 shadow-md">
<p className="font-medium">{label}</p>
{payload.map((entry: any, i: number) => (
<p key={i} style={{ color: entry.color }}>
{entry.name}: {entry.value.toLocaleString()}
</p>
))}
</div>
);
}
// Real-time chart: disable animations
function LiveMetricChart({ data }: { data: MetricData[] }) {
return (
<ResponsiveContainer width="100%" height={300}>
<AreaChart data={data}>
<Area
type="monotone"
dataKey="value"
isAnimationActive={false} // No animation on real-time data
dot={false} // No dots for performance
/>
</AreaChart>
</ResponsiveContainer>
);
}
// Accessible chart with figure role
function AccessibleChart({ data, title }: { data: ChartData[]; title: string }) {
return (
<figure role="figure" aria-label={title}>
<figcaption className="sr-only">{title}</figcaption>
<ResponsiveContainer width="100%" height={400}>
<BarChart data={data}>
<Bar dataKey="value" fill="#8884d8" />
</BarChart>
</ResponsiveContainer>
</figure>
);
}

Chart type selection guide:
| Chart | Component | Best For |
|---|---|---|
| Line | LineChart | Trends over time |
| Bar | BarChart | Comparisons between categories |
| Pie/Donut | PieChart with innerRadius | Proportions/percentages |
| Area | AreaChart with gradient | Volume over time |
Key rules:
- Always wrap charts in `ResponsiveContainer` with a parent that has explicit height
- Disable animations on real-time/frequently-updating charts (`isAnimationActive={false}`)
- Use custom tooltips for branded UX instead of default
- Add `figure` role and `aria-label` for accessibility
- Limit data points to prevent rendering performance issues
- Memoize data calculations outside the render function
References (5)
Cost Estimation
Estimate token costs from CC's ~/.claude/stats-cache.json using model-specific pricing.
Pricing Table (Feb 2026)
| Model | Input/MTok | Output/MTok | Cache Read/MTok | Cache Write/MTok |
|---|---|---|---|---|
| claude-opus-4-6 | $5.00 | $25.00 | $0.50 | $6.25 |
| claude-sonnet-4-6 | $3.00 | $15.00 | $0.30 | $3.75 |
| claude-haiku-4-5 | $1.00 | $5.00 | $0.10 | $1.25 |
Cost Formula
cost = (input_tokens / 1M * input_price)
+ (output_tokens / 1M * output_price)
+ (cache_read_tokens / 1M * cache_read_price)
+ (cache_write_tokens / 1M * cache_write_price)

Cache savings = cost if all cache reads were full-price input minus actual cost.
All-Time Model Usage Query
jq '.modelUsage | to_entries | map({
model: .key,
input: .value.inputTokens,
output: .value.outputTokens,
cache_read: .value.cacheReadInputTokens,
cache_write: .value.cacheCreationInputTokens
})' ~/.claude/stats-cache.json

Daily Costs (Last 7 Days)
jq '.dailyModelTokens[-7:] | .[] | {date: .date, tokens: .tokensByModel}' ~/.claude/stats-cache.json

Note: dailyModelTokens only has total tokens per model, not split by type. Estimate with 70% input / 30% output ratio as a rough average for CC usage.
Presentation Format
## Token Cost Estimate
| Model | Input Tokens | Output Tokens | Cache Read | Cache Write | Est. Cost |
|-------|-------------|--------------|------------|-------------|-----------|
| claude-opus-4-6 | 5.2M | 1.4M | 42.0M | 2.1M | $16.20 |
| claude-sonnet-4-6 | 200K | 50K | -- | -- | $1.85 |
| **Total** | | | | | **$18.50** |
**Cache savings:** $8.20 (vs. paying full input price for all cache reads)
### Daily Costs (Last 7 Days)
| Date | Est. Cost |
|------|-----------|
| Feb 12 | $2.10 |
| Feb 13 | $1.85 |
| **Total** | **$18.50** |

User-Overridable Config
Users can override pricing by creating ~/.claude/orchestkit-pricing.json — see src/hooks/src/lib/cost-estimator.ts for the schema.
Data Locations
Data Sources & File Locations
All analytics data sources used by the analytics skill.
OrchestKit Analytics Files
Location: ~/.claude/analytics/
| File | Contents | Key Fields |
|---|---|---|
| agent-usage.jsonl | Agent spawn events | ts, pid, agent, model, duration_ms, success, output_len, team? |
| skill-usage.jsonl | Skill invocations | ts, pid, skill, team? |
| hook-timing.jsonl | Hook execution timing | ts, hook, duration_ms, ok, pid, team? |
| session-summary.jsonl | Session end summaries | ts, pid, total_tools, team? |
| task-usage.jsonl | Task completions | ts, pid, task_status, duration_ms, team? |
| team-activity.jsonl | Team spawns and idle | ts, pid, event, agent, member?, idle_ms?, model?, team |
CC Native Data Sources
| Source | Path | Contents |
|---|---|---|
| CC session logs | ~/.claude/projects/{encoded-path}/*.jsonl | Full conversation with per-turn token usage |
| CC stats cache | ~/.claude/stats-cache.json | Pre-aggregated daily model tokens, session counts |
| CC history | ~/.claude/history.jsonl | Command history across all projects |
JSONL Format Notes
- All OrchestKit files use newline-delimited JSON (JSONL)
- Each line is a self-contained JSON object
- Rotated files follow pattern `<name>.<YYYY-MM>.jsonl` — include them in queries for historical data
- The `team` field is only present for entries recorded during team/swarm sessions
- `pid` is a 12-char SHA256 hash of the project path — irreversible, used for grouping
CC Session JSONL Structure
Each line in a CC session JSONL file is a JSON object. Key entry types:
| Entry Pattern | How to Identify | Key Fields |
|---|---|---|
| Session metadata | Has sessionId, gitBranch, version | First entries in file |
| Assistant message | .message.role == "assistant" | .message.content[], .message.usage |
| User message | .message.role == "user" | .message.content |
| Tool use | .message.content[].type == "tool_use" | .name, .input |
| Hook progress | .type == "progress" + .data.type == "hook_progress" | .data.hookName |
Encoded Project Path
CC encodes project paths by replacing / with -:
- `/Users/foo/coding/bar` becomes `-Users-foo-coding-bar`
- The encoded path is the directory name under `~/.claude/projects/`
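The encoding is simple enough to express directly; a one-line sketch (the helper name is illustrative, not part of CC):

```typescript
// CC's project-path encoding as described above: every "/" becomes "-".
// The result names the session directory under ~/.claude/projects/.
function encodeProjectPath(projectPath: string): string {
  return projectPath.replace(/\//g, '-');
}
```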
Jq Queries
Analytics jq Queries
Ready-to-run jq queries for each analytics subcommand. All queries target ~/.claude/analytics/*.jsonl.
agents — Top agents by frequency and duration
jq -s 'group_by(.agent) | map({
agent: .[0].agent,
count: length,
avg_ms: (map(.duration_ms // 0) | add / length | floor),
success_rate: ((map(select(.success)) | length) / length * 100 | floor),
models: (group_by(.model) | map({model: .[0].model, count: length}) | sort_by(-.count))
}) | sort_by(-.count)' ~/.claude/analytics/agent-usage.jsonl

models — Model delegation breakdown
jq -s 'group_by(.model) | map({
model: .[0].model,
count: length,
avg_ms: (map(.duration_ms // 0) | add / length | floor),
agents: ([.[].agent] | unique)
}) | sort_by(-.count)' ~/.claude/analytics/agent-usage.jsonl

skills — Top skills by invocation count
jq -s 'group_by(.skill) | map({skill: .[0].skill, count: length}) | sort_by(-.count)' ~/.claude/analytics/skill-usage.jsonl

hooks — Slowest hooks and failure rates
jq -s 'group_by(.hook) | map({
hook: .[0].hook,
count: length,
avg_ms: (map(.duration_ms) | add / length | floor),
fail_rate: ((map(select(.ok == false)) | length) / length * 100 | floor)
}) | sort_by(-.avg_ms) | .[0:15]' ~/.claude/analytics/hook-timing.jsonl

teams — Team spawn counts, idle time, task completions
# Team activity (spawns + idle)
jq -s 'group_by(.team) | map({
team: .[0].team,
spawns: ([.[] | select(.event == "spawn")] | length),
idles: ([.[] | select(.event == "idle")] | length),
agents: ([.[].agent] | unique)
}) | sort_by(-.spawns)' ~/.claude/analytics/team-activity.jsonl
# Task completions by team
jq -s '[.[] | select(.team != null)] | group_by(.team) | map({
team: .[0].team,
tasks: length,
avg_ms: (map(.duration_ms // 0) | add / length | floor)
})' ~/.claude/analytics/task-usage.jsonl

summary — Quick counts
# Total sessions (excluding zero-tool sessions)
jq -s '[.[] | select(.total_tools > 0)] | length' ~/.claude/analytics/session-summary.jsonl
# Line counts per file
wc -l ~/.claude/analytics/*.jsonl 2>/dev/null
# Unique projects
jq -r .pid ~/.claude/analytics/agent-usage.jsonl 2>/dev/null | sort -u | wc -l

Presentation Format
Present all results as clean markdown tables with counts, percentages, and averages. If a file doesn't exist, note that no data has been collected yet for that category.
Example output:
| Agent | Count | Avg Duration | Success Rate | Top Model |
|-------|-------|-------------|-------------|-----------|
| code-quality-reviewer | 45 | 8.2s | 98% | opus |
| test-generator | 32 | 12.1s | 94% | sonnet |
Session Replay
Parse and visualize CC session JSONL files to understand what happened in a session.
Usage
- `/ork:analytics session latest` — most recent session
- `/ork:analytics session <partial-id>` — match by prefix (e.g., `08ed1436`)
- `/ork:analytics session <full-uuid>` — exact match
Step 1: Locate the Session File
CC session logs live at ~/.claude/projects/{encoded-project-path}/.
The encoded path replaces / with - in the project directory path.
Example: /Users/foo/coding/bar becomes -Users-foo-coding-bar
# Find project session dir
PROJECT_DIR=$(echo "$CLAUDE_PROJECT_DIR" | sed 's|/|-|g')
SESSION_DIR="$HOME/.claude/projects/$PROJECT_DIR"
# List recent sessions (newest first)
ls -t "$SESSION_DIR"/*.jsonl 2>/dev/null | head -5
# For "latest": use the first result
LATEST=$(ls -t "$SESSION_DIR"/*.jsonl 2>/dev/null | head -1)

Step 2: Resolve the Session Reference
- `latest` — find the most recently modified `.jsonl` file in the project directory
- Partial ID (e.g., `08ed1436`) — find file starting with that prefix
- Full UUID — exact match
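The three resolution modes reduce to one lookup over the newest-first file list from step 1. A sketch under that assumption (the function name is illustrative):

```typescript
// Resolve "latest", a partial session ID, or a full UUID against the
// session files found in step 1, assuming the list is already sorted
// newest first (e.g. output of `ls -t`). Returns undefined on no match.
function resolveSession(filesNewestFirst: string[], ref: string): string | undefined {
  if (ref === 'latest') return filesNewestFirst[0];
  // Partial IDs and full UUIDs are both prefix matches on the filename.
  return filesNewestFirst.find((name) => name.startsWith(ref));
}
```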
Step 3: Parse JSONL and Extract Timeline
Each line is a JSON object. Key extraction patterns:
# Count messages by role
jq -r '.message.role // empty' "$SESSION_FILE" | sort | uniq -c | sort -rn
# Extract tool calls with timestamps
jq -r 'select(.message.role == "assistant") | .message.content[]? | select(.type == "tool_use") | .name' "$SESSION_FILE" | sort | uniq -c | sort -rn
# Sum token usage
jq -s '[.[].message.usage // empty | {
i: .input_tokens, o: .output_tokens,
cr: .cache_read_input_tokens, cw: .cache_creation_input_tokens
}] | {
input: (map(.i) | add), output: (map(.o) | add),
cache_read: (map(.cr) | add), cache_write: (map(.cw) | add)
}' "$SESSION_FILE"
# Get session metadata
jq -r 'select(.gitBranch) | .gitBranch' "$SESSION_FILE" | head -1
jq -r 'select(.version) | .version' "$SESSION_FILE" | head -1
# Get start/end timestamps
jq -r '.timestamp' "$SESSION_FILE" | head -1 # start
jq -r '.timestamp' "$SESSION_FILE" | tail -1 # end
# Count agent spawns by type
jq -r '.message.content[]? | select(.type == "tool_use" and .name == "Task") | .input.subagent_type' "$SESSION_FILE" | sort | uniq -c | sort -rn

Step 4: Present as Timeline
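The cache hit rate shown in the session header can be derived from the step-3 token sums. A sketch assuming hit rate means cache-read tokens as a share of all input-side tokens (fresh input plus cache reads), which matches the 89% in the example below:

```typescript
// Cache hit rate as a floored percentage: cache reads over all
// input-side tokens. With 1,245,000 cache reads and 152,340 fresh
// input tokens this yields 89.
function cacheHitRate(inputTokens: number, cacheReadTokens: number): number {
  const total = inputTokens + cacheReadTokens;
  return total === 0 ? 0 : Math.floor((cacheReadTokens / total) * 100);
}
```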
## Session: 08ed1436 — 2026-02-18 10:50 -> 11:35 (45min)
**Branch:** bugfix/windows-spawn | **CC Version:** 2.1.45
**Tokens:** 152K in, 38K out | **Cache hit rate:** 89%
### Timeline
| Time | Event | Details |
|------|-------|---------|
| 10:50:00 | SESSION START | branch: bugfix/windows-spawn |
| 10:50:01 | HOOK | SessionStart:startup |
| 10:50:05 | Read | src/hooks/bin/spawn-worker.mjs |
| 10:50:08 | Grep | "spawn" in src/ |
| 10:50:15 | Task (agent) | code-quality-reviewer |
| 10:51:00 | Edit | src/hooks/bin/spawn-worker.mjs |
| 10:52:30 | Bash | npm test -> 8.3s |
| 11:35:00 | SESSION END | 23 tool calls, 3 agents |
### Tool Usage
| Tool | Count |
|------|-------|
| Read | 12 |
| Edit | 5 |
| Bash | 4 |
| Task | 2 |
### Token Breakdown
| Metric | Value |
|--------|-------|
| Input tokens | 152,340 |
| Output tokens | 38,210 |
| Cache read | 1,245,000 |
| Cache write | 18,500 |
| Cache hit rate | 89% |
Trends Analysis
Show daily activity, model delegation trends, and cost patterns over time.
Usage
- `/ork:analytics trends` — default 7 days
- `/ork:analytics trends 30` — last 30 days
Step 1: Daily Activity (sessions, messages, tool calls)
jq '.dailyActivity[-7:]' ~/.claude/stats-cache.json

Step 2: Daily Model Token Breakdown
jq '.dailyModelTokens[-7:] | .[] | {
date: .date,
models: (.tokensByModel | to_entries | map({model: .key, tokens: .value}) | sort_by(-.tokens))
}' ~/.claude/stats-cache.json

Step 3: Peak Productivity Hours
jq '.hourCounts | to_entries | sort_by(-.value) | .[0:5] | map({
hour: (.key + ":00"),
sessions: .value
})' ~/.claude/stats-cache.json

Step 4: All-Time Stats
jq '{
totalSessions: .totalSessions,
totalMessages: .totalMessages,
longestSession: {
id: .longestSession.sessionId,
duration_min: (.longestSession.duration / 60000 | floor),
messages: .longestSession.messageCount
}
}' ~/.claude/stats-cache.json

Presentation Format
## Trends -- Last 7 Days
### Daily Activity
| Date | Sessions | Messages | Tools | Est. Cost |
|------|----------|----------|-------|-----------|
| Feb 12 | 6 | 1,200 | 450 | $2.10 |
| Feb 13 | 5 | 980 | 380 | $1.85 |
| ... | ... | ... | ... | ... |
| **Total** | **42** | **8,380** | **3,390** | **$18.50** |
### Model Delegation Trend
| Date | opus | sonnet | haiku |
|------|------|--------|-------|
| Feb 12 | 452K | 31K | -- |
| Feb 13 | 380K | 25K | 12K |
| ... | ... | ... | ... |
### Peak Productivity Hours
| Hour | Sessions |
|------|----------|
| 10:00 | 78 |
| 9:00 | 71 |
| 14:00 | 65 |
### All-Time Stats
- **Total sessions:** [N]
- **Total messages:** [N]
- **Longest session:** [id] -- [N] min, [N] messages

Cost Per Day
Apply pricing from references/cost-estimation.md to daily token counts:
- Split daily tokens by model
- Apply per-model pricing (70/30 input/output estimate for daily totals)
- Show daily cost in the activity table