Design Context Extract
Extract design DNA from existing app screenshots or live URLs using Google Stitch. Produces color palettes, typography specs, spacing tokens, and component patterns as design-tokens.json or Tailwind config. Use when auditing an existing design, creating a design system from a live app, or ensuring new pages match an established visual identity.
/ork:design-context-extract
Extract the "Design DNA" from existing applications — colors, typography, spacing, and component patterns — and output as structured tokens.
/ork:design-context-extract /tmp/screenshot.png # From screenshot
/ork:design-context-extract https://example.com # From live URL
/ork:design-context-extract current project # Scan project's existing styles
Pipeline
Input (screenshot/URL/project)
│
▼
┌──────────────────────────────┐
│ Capture │ Screenshot or fetch HTML/CSS
└──────────┬───────────────────┘
│
▼
┌──────────────────────────────┐
│ Extract │ Stitch extract_design_context
│ │ OR multimodal analysis (fallback)
│ → Colors (hex + oklch) │
│ → Typography (families, scale)│
│ → Spacing (padding, gaps) │
│ → Components (structure) │
└──────────┬───────────────────┘
│
▼
┌──────────────────────────────┐
│ Output │ Choose format:
│ → design-tokens.json (W3C) │
│ → tailwind.config.ts │
│ → tokens.css (CSS variables) │
│ → Markdown spec │
└──────────────────────────────┘
Step 0: Detect Input and Context
INPUT = ""
# 1. Create main task IMMEDIATELY
TaskCreate(subject="Extract design context: {INPUT}", description="Extract design DNA", activeForm="Extracting design from {INPUT}")
# 2. Create subtasks for each phase
TaskCreate(subject="Detect input type and context", activeForm="Detecting input type") # id=2
TaskCreate(subject="Capture source material", activeForm="Capturing source") # id=3
TaskCreate(subject="Extract design tokens", activeForm="Extracting tokens") # id=4
TaskCreate(subject="Choose output format and generate", activeForm="Generating output") # id=5
TaskCreate(subject="Recommend shadcn/ui style", activeForm="Recommending style") # id=6
# 3. Set dependencies for sequential phases
TaskUpdate(taskId="3", addBlockedBy=["2"]) # Capture needs input type detected
TaskUpdate(taskId="4", addBlockedBy=["3"]) # Extraction needs captured source
TaskUpdate(taskId="5", addBlockedBy=["4"]) # Output needs extracted tokens
TaskUpdate(taskId="6", addBlockedBy=["5"]) # Style recommendation needs output
# 4. Before starting each task, verify it's unblocked
task = TaskGet(taskId="2") # Verify blockedBy is empty
# 5. Update status as you progress
TaskUpdate(taskId="2", status="in_progress") # When starting
TaskUpdate(taskId="2", status="completed") # When done — repeat for each subtask
# Determine input type
# "/path/to/file.png" → screenshot
# "http..." → URL
# "current project" → scan project styles
Step 1: Capture Source
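Capture branches on the input type detected in Step 0. A minimal sketch of that dispatch (function name and the accepted "project" aliases are illustrative assumptions):

```python
def detect_input_type(value: str) -> str:
    """Classify the command argument into one of the three input modes."""
    if value.startswith(("http://", "https://")):
        return "url"
    if value.lower() in ("current project", "project"):
        return "project"
    # Treat any remaining path-like argument as a screenshot
    return "screenshot"
```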
For screenshots: Read the image directly (Claude is multimodal). Pasted/attached images are compressed to the same token budget as Read tool images (CC 2.1.97), so both workflows are equally efficient.
Resolution budget (Opus 4.7 / CC 2.1.111+): Max input is 2,576 px on the long edge (~3.75 MP) — roughly 3× Opus 4.6. Dense dashboards, dark-mode UIs, and technical diagrams benefit the most from the higher ceiling; extraction reads tiny labels, spacing ticks, and component boundaries that were previously blurred. Below 1,024 px, don't upscale — the source bitmap is the ceiling. Resize only when input exceeds 2,576 px.
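The resize rule above reduces to pure dimension math: downscale only when the long edge exceeds the 2,576 px ceiling, and never upscale. A sketch:

```python
MAX_LONG_EDGE = 2576  # Opus 4.7 input ceiling on the long edge

def clamp_dimensions(width: int, height: int) -> tuple[int, int]:
    """Downscale so the long edge fits the budget; never upscale."""
    long_edge = max(width, height)
    if long_edge <= MAX_LONG_EDGE:
        return width, height  # the source bitmap is the ceiling
    scale = MAX_LONG_EDGE / long_edge
    return round(width * scale), round(height * scale)
```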
For URLs:
# If stitch available: call build_site(prompt=<url + extraction goal>)
# then get_screen_code / get_screen_image per generated screen
# If not: WebFetch the URL and analyze HTML/CSS
For current project:
Glob("**/tailwind.config.*")
Glob("**/tokens.css")
Glob("**/*.css") # Look for design token files
Glob("**/theme.*")
# Read and analyze existing style definitions
Step 2: Extract Design Context
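Both the project-scan and URL paths need to pull CSS custom properties out of raw stylesheet text. A minimal parsing sketch (regex-based; assumes plain declarations and ignores at-rule edge cases):

```python
import re

def extract_css_variables(css: str) -> dict[str, str]:
    """Pull --custom-property declarations out of raw CSS text."""
    pattern = re.compile(r"--([\w-]+)\s*:\s*([^;}]+)")
    return {name: value.strip() for name, value in pattern.findall(css)}
```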
If stitch MCP is available:
# Official Stitch MCP tools (stitch.withgoogle.com/docs/mcp):
# - build_site(prompt) → generates the target design
# - get_screen_code(screenId) → React/HTML output per screen
# - get_screen_image(screenId) → PNG rasterization per screen
#
# Also consider Figma Dev Mode MCP as a complementary extraction path
# when the source is a Figma file:
# - get_variable_defs → design tokens straight from Figma variables
# - get_design_context → layout + typography + spacing
# - search_design_system → locate existing tokens/components
If stitch MCP is NOT available (fallback):
# Multimodal analysis of screenshot:
# - Identify dominant colors (sample from regions)
# - Detect font families and size hierarchy
# - Measure spacing patterns
# - Catalog component types (cards, buttons, headers, etc.)
#
# For URLs: parse CSS custom properties, Tailwind config, computed styles
Extracted data structure:
{
  "colors": {
    "primary": { "hex": "#3B82F6", "oklch": "oklch(0.62 0.21 255)" },
    "secondary": { "hex": "#10B981", "oklch": "oklch(0.69 0.17 163)" },
    "background": { "hex": "#FFFFFF" },
    "text": { "hex": "#1F2937" },
    "muted": { "hex": "#9CA3AF" }
  },
  "typography": {
    "heading": { "family": "Inter", "weight": 700 },
    "body": { "family": "Inter", "weight": 400 },
    "scale": [12, 14, 16, 18, 24, 30, 36, 48]
  },
  "spacing": {
    "base": 4,
    "scale": [4, 8, 12, 16, 24, 32, 48, 64]
  },
  "components": ["navbar", "hero", "card", "button", "footer"]
}
Step 3: Choose Output Format
AskUserQuestion(questions=[{
  "question": "Output format for extracted tokens?",
  "header": "Format",
  "options": [
    {"label": "Tailwind config (Recommended)", "description": "tailwind.config.ts with extracted theme values"},
    {"label": "W3C Design Tokens", "description": "design-tokens.json following W3C DTCG spec"},
    {"label": "CSS Variables", "description": "tokens.css with CSS custom properties"},
    {"label": "Markdown spec", "description": "Human-readable design specification document"}
  ],
  "multiSelect": false
}])
Step 4: Generate Output
Write the extracted tokens in the chosen format. If the project already has tokens, show a diff of what's new vs existing.
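If the Tailwind format is chosen, generation can be sketched as a small renderer over the Step 2 structure. The output shape below is an assumption (a minimal `theme.extend`, not the exact shadcn/Tailwind template):

```python
import json

def tokens_to_tailwind(tokens: dict) -> str:
    """Render extracted tokens as a minimal tailwind.config.ts theme extension."""
    colors = {name: spec["hex"] for name, spec in tokens["colors"].items()}
    spacing = {str(i): f"{v}px" for i, v in enumerate(tokens["spacing"]["scale"])}
    theme = {"extend": {"colors": colors, "spacing": spacing}}
    return (
        "import type { Config } from 'tailwindcss'\n\n"
        "export default {\n"
        f"  theme: {json.dumps(theme, indent=2)},\n"
        "} satisfies Config\n"
    )
```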
Step 5: Recommend Best-Fit shadcn/ui Style
After extracting design DNA, map the extracted characteristics to the best-fit shadcn/ui v4 style:
# Map extracted design DNA → shadcn style recommendation
radius = extracted["radius"] # e.g., "large", "pill", "none", "small"
density = extracted["spacing"] # e.g., "generous", "balanced", "compact", "dense"
elevation = extracted["shadows"] # e.g., "layered", "subtle", "none"
STYLE_MAP = {
    # (radius, density, elevation) → style
    ("pill/large", "generous", "layered"): "Luma — polished, macOS-like",
    ("medium", "balanced", "subtle"): "Vega — general purpose",
    ("medium", "compact", "subtle"): "Nova — dense dashboards",
    ("large", "generous", "subtle"): "Maia — soft, consumer-facing",
    ("none/sharp", "balanced", "none"): "Lyra — editorial, dev tools",
    ("small", "dense", "none"): "Mira — ultra-dense data",
}
# Present recommendation with the style picker URL:
# "Based on extracted design DNA, recommended style: Luma"
# "Pick and install: https://ui.shadcn.com/create (select 'Luma' style)"
# Apply to existing project (CLI v4 apply command, Apr 2026):
# "$ npx shadcn@latest apply luma"
Skip condition: If the user only needs raw tokens (not a shadcn project), skip this step.
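The style lookup can be sketched as an exact tuple match with a fallback; defaulting to the general-purpose style when no tuple matches is an assumption, not part of the table:

```python
STYLE_MAP = {
    ("pill/large", "generous", "layered"): "Luma",
    ("medium", "balanced", "subtle"): "Vega",
    ("medium", "compact", "subtle"): "Nova",
    ("large", "generous", "subtle"): "Maia",
    ("none/sharp", "balanced", "none"): "Lyra",
    ("small", "dense", "none"): "Mira",
}

def recommend_style(radius: str, density: str, elevation: str) -> str:
    """Exact tuple lookup, falling back to the general-purpose style."""
    return STYLE_MAP.get((radius, density, elevation), "Vega")
```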
Anti-Patterns
- NEVER guess colors without analyzing the actual source — use precise extraction
- NEVER skip the oklch conversion — all colors must have oklch equivalents
- NEVER output flat token structures — use three-tier hierarchy (global/alias/component)
Related Skills
- ork:design-to-code — Full pipeline that uses this as Stage 1
- ork:design-system-tokens — Token architecture and W3C spec compliance
- ork:component-search — Find components that match extracted patterns