OrchestKit v7.61.0 — 106 skills, 37 agents, 180 hooks · Claude Code 2.1.113+

NotebookLM

NotebookLM integration patterns for external RAG, research synthesis, studio content generation (audio, cinematic video, slides, infographics, mind maps), and knowledge management. Use when creating notebooks, adding sources, generating audio/video, or querying NotebookLM via MCP.

Reference medium

Auto-activated — this skill loads automatically when Claude detects matching context.

NotebookLM

NotebookLM = external RAG engine that offloads reading from your context window. Uses the notebooklm-mcp-cli MCP server (PyPI, v0.5.25+) to create notebooks, manage sources, generate content, and query with grounded AI responses. Supports batch operations across notebooks, pipelines, and multilingual content generation.

Disclaimer: Uses internal undocumented Google APIs via browser authentication. Sessions last ~20 minutes. API may change without notice.

What's New (April 2026 — v0.5.25)

  • Video formats — 3 formats (explainer, brief, cinematic) + 9 visual styles (classic, whiteboard, kawaii, anime, watercolor, retro_print, heritage, paper_craft, auto_select)
  • Audio length — audio_length param: short, default, long (in addition to 4 audio formats)
  • PPTX export — download_artifact(slide_deck_format="pptx") alongside PDF
  • Bulk source ops — source_add(urls=[...]) for multi-URL, source_delete(source_ids=[...]) for bulk delete, source_add(wait=True) to await processing
  • Studio artifact rename — studio_status(action="rename", artifact_id="...", new_title="...")
  • Slide/report/quiz params — slide_format (detailed_deck|presenter_slides), report_format (Briefing Doc|Study Guide|Blog Post|Create Your Own), difficulty + question_count for quiz/flashcards
  • Infographic options — orientation (landscape|portrait|square), detail_level (concise|standard|detailed), 11 infographic_style options
  • Audio sources — Upload m4a, wav, mp3, aac, ogg, opus as notebook sources
  • Async large queries — notebook_query_start/notebook_query_status for 50+ source notebooks
  • Multi-browser auth — Arc, Brave, Edge, Chromium, Vivaldi, Opera + WSL2 support (nlm login --wsl)
  • Enterprise — NOTEBOOKLM_BASE_URL env var for Google Workspace deployments
  • Security — Download path traversal protection, 0o600 auth files, Chrome origins locked to localhost

Prerequisites

  1. Install: uv tool install notebooklm-mcp-cli (or pip install notebooklm-mcp-cli)
  2. Authenticate: nlm login (opens browser, session ~20 min)
  3. Configure MCP: nlm setup add claude-code (auto-configures .mcp.json) or nlm setup add all for multi-tool setup
  4. Verify: nlm login --check to confirm active session
  5. Upgrade: uv tool upgrade notebooklm-mcp-cli — restart MCP server after upgrade

CRITICAL: Task Management is MANDATORY (CC 2.1.16)

BEFORE doing ANYTHING else, create tasks to track progress:

# 1. Create main task IMMEDIATELY
TaskCreate(
  subject="NotebookLM: {operation}",
  description="Managing notebooks, sources, and content generation",
  activeForm="Managing NotebookLM resources"
)

# 2. Create subtasks for the notebook workflow
TaskCreate(subject="Notebook setup", activeForm="Creating/configuring notebook")
TaskCreate(subject="Source management", activeForm="Adding sources to notebook")
TaskCreate(subject="Content generation", activeForm="Generating studio content")

# 3. Set dependencies for sequential steps
TaskUpdate(taskId="3", addBlockedBy=["2"])
TaskUpdate(taskId="4", addBlockedBy=["3"])

# 4. Before starting each task, verify it's unblocked
task = TaskGet(taskId="2")  # Verify blockedBy is empty

# 5. Update status as you progress
TaskUpdate(taskId="2", status="in_progress")  # When starting
TaskUpdate(taskId="2", status="completed")    # When done

Decision Tree — Which Rule to Read

What are you trying to do?

├── Create / manage notebooks
│   ├── List / get / rename ──────► notebook_list, notebook_get, notebook_rename
│   ├── Create new notebook ──────► notebook_create
│   └── Delete notebook ──────────► notebook_delete (irreversible!)

├── Add sources to a notebook
│   ├── URL / YouTube ────────────► source_add(type=url)
│   ├── Plain text ───────────────► source_add(type=text)
│   ├── Local file ───────────────► source_add(type=file)
│   ├── Google Drive ─────────────► source_add(type=drive)
│   ├── Rename a source ──────────► source_rename
│   └── Manage sources ──────────► rules/setup-quickstart.md

├── Query a notebook (AI chat)
│   ├── Ask questions ────────────► notebook_query
│   └── Configure chat style ────► chat_configure

├── Generate studio content
│   ├── 10 artifact types ───────► rules/workflow-studio-content.md
│   ├── Revise slides ───────────► studio_revise (creates new deck)
│   └── Export to Docs/Sheets ──► export_artifact

├── Research & discovery
│   └── Web/Drive research ──────► rules/workflow-research-discovery.md

├── Notes (capture insights)
│   └── Create/list/update/delete ► note (unified tool)

├── Sharing & collaboration
│   └── Public links / invites / batch ► rules/workflow-sharing-collaboration.md

├── Batch & cross-notebook
│   ├── Query across notebooks ────► cross_notebook_query
│   ├── Bulk operations ───────────► batch (query, add-source, create, studio)
│   └── Multi-step pipelines ──────► rules/workflow-batch-pipelines.md

├── Organization
│   └── Tag notebooks ─────────────► tag

└── Workflow patterns
    ├── Second brain ─────────────► rules/workflow-second-brain.md
    ├── Research offload ─────────► rules/workflow-research-offload.md
    └── Knowledge base ──────────► rules/workflow-knowledge-base.md

Quick Reference

| Category | Rule | Impact | Key Pattern |
|---|---|---|---|
| Setup | setup-quickstart.md | HIGH | Auth, MCP config, source management, session refresh |
| Workflows | workflow-second-brain.md | HIGH | Decision docs, project hub, agent interop |
| Workflows | workflow-research-offload.md | HIGH | Synthesis, onboarding, token savings |
| Workflows | workflow-knowledge-base.md | HIGH | Debugging KB, security handbook, team knowledge |
| Workflows | workflow-studio-content.md | HIGH | 10 artifact types (audio, cinematic video, slides, infographics, mind maps...) |
| Research | workflow-research-discovery.md | HIGH | Web/Drive research async flow |
| Collaboration | workflow-sharing-collaboration.md | MEDIUM | Public links, collaborator invites, batch sharing |
| Batch | workflow-batch-pipelines.md | HIGH | Cross-notebook queries, batch ops, pipelines |
| Release | workflow-versioned-notebooks.md | HIGH | Per-release notebooks with changelog + diffs |

Total: 9 rules across 6 categories

MCP Tools by API Group

| Group | Tools | Count |
|---|---|---|
| Notebooks | notebook_list, notebook_create, notebook_get, notebook_describe, notebook_rename, notebook_delete | 6 |
| Sources | source_add, source_rename, source_list_drive, source_sync_drive, source_delete, source_describe, source_get_content | 7 |
| Querying | notebook_query, chat_configure | 2 |
| Studio | studio_create, studio_status (also: list_types, rename), studio_revise, studio_delete | 4 |
| Research | research_start, research_status, research_import | 3 |
| Sharing | notebook_share_status, notebook_share_public, notebook_share_invite, notebook_share_batch | 4 |
| Notes | note (unified: list/create/update/delete) | 1 (4 actions) |
| Downloads | download_artifact | 1 |
| Export | export_artifact (Google Docs/Sheets) | 1 |
| Batch | batch (multi-notebook ops), cross_notebook_query | 2 |
| Pipelines | pipeline (action: run\|list; ingest-and-podcast, research-and-report, multi-format) | 1 |
| Tags | tag (action: add\|remove\|list\|select) | 1 |
| Auth | save_auth_tokens, refresh_auth, server_info | 3 |

Total: 36 tools across 13 groups (v0.5.25+)

Key Decisions

| Decision | Recommendation |
|---|---|
| New notebook vs existing | One notebook per project/topic; add sources to existing |
| Source type | URL for web, text for inline, file for local docs, drive for Google Docs |
| Large sources | Split >50K chars into multiple sources for better retrieval |
| Auth expired? | nlm login --check; sessions last ~20 min, re-auth with nlm login |
| Studio content | Use studio_create, poll with studio_status (generation takes 2-5 min) |
| Cinematic video | studio_create(artifact_type="video", video_format="cinematic") — requires Plus/Ultra, English only, 20/day |
| Audio format | Choose brief/critique/debate/deep_dive via audio_format + short/default/long via audio_length |
| Research discovery | research_start for web/Drive discovery, then research_import (timeout=300s default) |
| Deep research | research_start(mode="deep") for multi-source synthesis (v0.5.1+, auto-retries) |
| Release notebooks | One notebook per minor version; upload CHANGELOG + key skill diffs as sources |
| Query vs search | notebook_query for AI-grounded answers; source_get_content for raw text |
| Notes vs sources | Notes for your insights/annotations; sources for external documents |
| Infographic style | 11 visual styles via infographic_style param on studio_create |
| Slide revision | Use studio_revise to edit individual slides (creates a new deck) |
| Export artifacts | export_artifact sends reports → Google Docs, data tables → Sheets |
| Language | language param on studio_create accepts BCP-47 codes (e.g., he for Hebrew, en, es, ja) |
| Bulk source add | source_add(urls=["url1","url2"], wait=True) for multi-URL ingestion with processing wait |
| Batch operations | Use batch for multi-notebook ops; cross_notebook_query for aggregated answers |
| Pipelines | pipeline(action="run", pipeline_name="ingest-and-podcast") for multi-step workflows |
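The "Large sources" recommendation (split documents over ~50K characters) can be sketched as a paragraph-aware splitter. split_source is a hypothetical helper, not part of the MCP API; the 50,000-character budget is taken from the guidance above:

```python
def split_source(text: str, max_chars: int = 50_000) -> list[str]:
    """Split a document into chunks of at most max_chars characters,
    preferring paragraph (blank-line) boundaries."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)  # flush before starting a new chunk
            current = ""
        while len(para) > max_chars:  # hard-split oversized paragraphs
            chunks.append(para[:max_chars])
            para = para[max_chars:]
        current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each resulting chunk would then be added as its own source_add(type="text") call.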

Example

# 1. Create a notebook for your project
notebook_create(title="Auth Refactor Research")

# 2. Add sources (docs, articles, existing code analysis)
source_add(notebook_id="...", type="url", url="https://oauth.net/2.1/")
source_add(notebook_id="...", type="text", content="Our current auth uses...")
source_add(notebook_id="...", type="file", path="/docs/auth-design.md")

# 3. Query with grounded AI responses
notebook_query(notebook_id="...", query="What are the key differences between OAuth 2.0 and 2.1?")

# 4. Generate a deep dive audio overview (supports language param)
studio_create(notebook_id="...", artifact_type="audio", audio_format="deep_dive", language="he", confirm=True)
studio_status(notebook_id="...")  # Poll until complete

# 5. Generate a cinematic video overview (Plus/Ultra, English)
studio_create(notebook_id="...", artifact_type="video", video_format="cinematic", visual_style="classic", confirm=True)
studio_status(notebook_id="...")  # Poll — takes 3-8 minutes

# 6. Capture insights as notes
note(notebook_id="...", action="create", content="Key takeaway: PKCE is mandatory in 2.1")
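Steps 4 and 5 rely on polling studio_status until generation finishes. A generic poll-until-done helper makes that explicit; this is a sketch with the status check injected as a callable, so it assumes nothing about the MCP call signature:

```python
import time

def poll_until(check, *, interval: float = 15.0, timeout: float = 600.0):
    """Call check() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse; raises TimeoutError on expiry."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"not completed within {timeout:.0f}s")
```

Here, check could wrap studio_status and return the status payload only once it reports completion; the 15-second interval is an arbitrary choice given the 2-5 minute generation times noted in this doc.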

Common Mistakes

  • Forgetting auth expiry — Sessions last ~20 min. Always check with nlm login --check before long workflows. Re-auth with nlm login.
  • One giant notebook — Split by project/topic. One notebook with 50 sources degrades retrieval quality.
  • Huge single sources — Split documents >50K characters into logical sections for better chunking and retrieval.
  • Not polling studio_status — Studio content generation takes 2-5 minutes. Poll studio_status instead of assuming instant results.
  • Ignoring source types — Use type=url for web pages (auto-extracts), type=file for local files. Using type=text for a URL gives you the URL string, not the page content.
  • Deleting notebooks without checkingnotebook_delete is irreversible. List contents with source_list_drive and note(action=list) first.
  • Skipping research_importresearch_start discovers content but does not add it. Use research_import to actually add findings as sources.
  • Raw queries on empty notebooksnotebook_query returns poor results with no sources. Add sources before querying.
  • Ignoring language paramstudio_create supports BCP-47 language codes (e.g., he, ar, ja). Defaults to English if omitted.
  • Batch without purposebatch and cross_notebook_query are powerful but add latency. Use for multi-project synthesis, not single-notebook tasks.
Related Skills

  • ork:mcp-patterns — MCP server building, security, and composition patterns
  • ork:web-research-workflow — Web research strategies and source evaluation
  • ork:memory — Memory fabric for cross-session knowledge persistence
  • ork:security-patterns — Input sanitization and layered security

Rules (9)

NotebookLM Quick Setup — HIGH

Quick Setup

Authenticate with Google via nlm login, then register the MCP server with Claude Code. Auth sessions expire after ~20 minutes of inactivity. Supports any Chromium-based browser (Chrome, Arc, Brave, Edge, Vivaldi, Opera).

Incorrect -- manually editing .mcp.json with wrong server path:

{
  "mcpServers": {
    "notebooklm": {
      "command": "node",
      "args": ["./some/wrong/path/server.js"]
    }
  }
}

Correct -- using CLI setup command:

# 1. Authenticate (opens browser for Google OAuth)
nlm login

# 2. Register MCP server with Claude Code
nlm setup add claude-code

# 3. Verify auth is active
nlm login --check

Manual .mcp.json fallback (if nlm setup is unavailable):

{
  "mcpServers": {
    "notebooklm": {
      "command": "nlm",
      "args": ["mcp"]
    }
  }
}

Alternative -- skill-based install:

nlm skill install

Browser preference (v0.3.17+):

# Set preferred browser (auto-detected by default)
nlm config set auth.browser arc    # arc | chrome | brave | edge | vivaldi | opera | auto

# Or via environment variable
export NLM_BROWSER=arc

Key rules:

  • Always authenticate with nlm login before first use -- browser OAuth flow required
  • Auth sessions last ~20 minutes; re-run nlm login if tools start failing
  • Use nlm login --check to verify session status before long workflows
  • Prefer nlm setup add claude-code over manual .mcp.json editing
  • If setup command fails, use the manual .mcp.json fallback with "command": "nlm"
  • Any Chromium-based browser works: Chrome, Arc, Brave, Edge, Chromium, Vivaldi, Opera
  • Set a preferred browser with nlm config set auth.browser <name> or NLM_BROWSER env var
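Given the ~20 minute session lifetime noted above, a long-running workflow can track auth age client-side and prompt a re-login before tools start failing. A minimal sketch (this AuthSession class is illustrative, not part of notebooklm-mcp-cli):

```python
import time

class AuthSession:
    """Tracks when `nlm login` last succeeded and flags likely-stale sessions."""
    TTL_SECONDS = 20 * 60  # sessions last ~20 minutes per this doc

    def __init__(self):
        self.authenticated_at: float | None = None

    def mark_login(self) -> None:
        """Record a successful `nlm login` (or `nlm login --check` pass)."""
        self.authenticated_at = time.monotonic()

    def is_stale(self) -> bool:
        """True if never authenticated or the session has likely expired."""
        if self.authenticated_at is None:
            return True
        return time.monotonic() - self.authenticated_at > self.TTL_SECONDS
```

A caller would check is_stale() before each batch of MCP calls and re-run nlm login when it returns True.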

Batch Operations & Pipelines — HIGH

Batch Operations & Pipelines

v0.5.25 batch operations across multiple notebooks, cross-notebook queries with aggregated answers, and multi-step pipelines. Use for multi-project synthesis — not single-notebook tasks.

Incorrect — querying notebooks one at a time for synthesis:

# Slow: sequential queries across 5 notebooks
for nb_id in notebook_ids:
    result = notebook_query(notebook_id=nb_id, query="What are the security risks?")
    results.append(result)
# Manual aggregation required

Correct — cross-notebook query for aggregated answers:

# Single call: queries all notebooks, returns aggregated answer with per-notebook citations
result = cross_notebook_query(
    notebook_ids=["nb1", "nb2", "nb3"],
    query="What are the security risks across all projects?"
)
# result includes: aggregated_answer, per_notebook_citations[]

Batch operations — multi-notebook bulk actions:

# Add the same source to multiple notebooks at once
batch(
    action="add-source",
    notebook_ids=["nb1", "nb2", "nb3"],
    source_type="url",
    url="https://owasp.org/Top10/"
)

# Create studio content across notebooks
batch(
    action="studio",
    notebook_ids=["nb1", "nb2"],
    artifact_type="audio",
    audio_format="brief",
    language="he",
    confirm=True
)

Pipelines — multi-step workflows in a single call:

# Ingest sources and immediately generate a podcast
pipeline(
    action="run",
    pipeline_name="ingest-and-podcast",
    notebook_id="...",
    sources=[{"type": "url", "url": "https://example.com/article"}],
    audio_format="deep_dive",
    language="en",
    confirm=True
)

# Research a topic, add findings, generate a report
pipeline(
    action="run",
    pipeline_name="research-and-report",
    notebook_id="...",
    query="Latest trends in LLM safety",
    report_format="Briefing Doc",
    confirm=True
)

# Generate multiple artifact types from the same sources
pipeline(
    action="run",
    pipeline_name="multi-format",
    notebook_id="...",
    artifact_types=["audio", "infographic", "mind_map"],
    language="he",
    confirm=True
)

Tagging notebooks for organization:

# Tag notebooks for smart filtering
tag(notebook_id="...", action="add", tags="security,q1-2026")

# Use tags to select notebooks for batch operations
batch(
    action="query",
    tag_filter="security",
    query="What vulnerabilities were identified?"
)

Key rules:

  • Use cross_notebook_query for synthesis across projects — avoids sequential query overhead
  • batch operations run in parallel server-side — faster than sequential MCP calls
  • Pipelines combine ingest + generation — use when source addition and content creation are a single intent
  • Tag notebooks early — enables tag-based batch operations later
  • All batch/pipeline operations support the language parameter for multilingual output
  • confirm=True is required on destructive or generative batch actions

Knowledge Base Pattern — HIGH

Knowledge Base Pattern

Build dedicated notebooks as curated knowledge bases for debugging, security, and onboarding. Add incident reports, advisories, and runbooks as sources for grounded, verified answers.

Incorrect -- re-investigating a known issue from scratch:

User: "Production is throwing OOM errors again"
Claude: "Let me research possible causes..."
# Wastes time if this was already diagnosed and documented

Correct -- query the debugging knowledge base:

# 1. Create dedicated KB notebooks
notebook_create(title="Debugging KB")
notebook_create(title="Security Handbook")

# 2. Add incident reports and advisories as sources
source_add(notebook_id="debug_kb", content="INC-042: OOM caused by unbounded cache. Fix: add TTL...")
source_add(notebook_id="security_kb", content="SEC-007: SQL injection in search endpoint. Fix: parameterize...")

# 3. Query for grounded answers
notebook_query(notebook_id="debug_kb", query="What causes OOM errors and how were they fixed?")
notebook_query(notebook_id="security_kb", query="Known SQL injection patterns in our codebase")

Key rules:

  • Create separate notebooks for debugging, security, and onboarding domains
  • Add incident reports, post-mortems, and security advisories as sources
  • Query KB notebooks before re-investigating known issues
  • Keep sources current -- add new incidents as they are resolved
  • Use for onboarding: new team members query the KB instead of asking around

Research Discovery Pattern — HIGH

Research Discovery Pattern

Use the research API for automated web and Google Drive discovery. The flow is async: start a research task, poll for status, then import discovered sources into your notebook.

Incorrect -- manually searching and adding URLs one by one:

# Tedious and misses relevant content
source_add(notebook_id="...", url="https://example.com/article1")
source_add(notebook_id="...", url="https://example.com/article2")
source_add(notebook_id="...", url="https://example.com/article3")
# Missed 20 other relevant articles

Correct -- automated research flow:

# 1. Start research (searches web and/or Google Drive)
task = research_start(
    notebook_id="...",
    topic="Latest developments in WebAssembly component model",
    sources=["web", "drive"]
)

# 2. Poll for completion (uses Google API quota)
status = research_status(task_id=task.id)
# status: "searching" | "analyzing" | "completed"

# 3. Import discovered sources into notebook (timeout configurable, default 300s)
research_import(task_id=task.id, notebook_id="...")
# Adds the most relevant discovered sources automatically

Deep research mode (v0.5.1+):

# Deep research for comprehensive multi-source synthesis
research_start(notebook_id="...", topic="...", mode="deep")
# Transient API errors now auto-retry with RPCError handling

Key rules:

  • Use research_start for broad topic discovery instead of manual URL hunting
  • Always poll with research_status -- research takes 1-3 minutes
  • Research uses Google API quota -- avoid running many parallel research tasks
  • Import results with research_import to add discovered sources to your notebook
  • For notebooks with many sources, increase timeout on research_import (default 300s, was 120s)
  • Use mode="deep" for comprehensive synthesis -- transient errors are now auto-retried (v0.5.1+)
  • Combine web and Drive sources for comprehensive coverage
  • Follow up with notebook_query to synthesize the newly imported sources
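The quota caution above can be enforced in code by bounding concurrency. A sketch using a thread pool (run_research_batch and the worker count of 2 are illustrative; start_fn stands in for whatever wraps research_start):

```python
from concurrent.futures import ThreadPoolExecutor

def run_research_batch(topics, start_fn, max_parallel: int = 2):
    """Run start_fn(topic) for each topic with at most max_parallel
    in flight, returning results in the original topic order."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(start_fn, topics))
```

pool.map preserves input order, so results line up with the topics list even when tasks finish out of order.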

Research Offload Pattern — HIGH

Research Offload Pattern

Add large documents, codebases, and references as notebook sources instead of pasting them into chat. Use notebook_query for targeted synthesis without consuming context window.

Incorrect -- pasting large content directly into chat:

User: "Here's our entire codebase (100K chars)... now explain the auth flow"
# Wastes context, may hit token limits, loses nuance in truncation

Correct -- add as source, query for synthesis:

# 1. Add large docs as sources (use RepoMix for codebases)
source_add(notebook_id="...", url="https://docs.example.com/api-reference")
source_add(notebook_id="...", content=repomix_output)

# 2. Query for specific synthesis
notebook_query(notebook_id="...", query="How does the authentication middleware chain work?")

# 3. Follow up with targeted questions
notebook_query(notebook_id="...", query="What error codes does the auth endpoint return?")

Key rules:

  • Add large documents as sources rather than pasting into chat context
  • Use RepoMix to bundle codebases into a single source for onboarding
  • Query the notebook for synthesis -- NotebookLM reads the full source each time
  • Multiple targeted queries are cheaper than one massive context load
  • Combine with second-brain pattern to build persistent project knowledge

Second Brain Pattern — HIGH

Second Brain Pattern

Create a dedicated notebook per project to capture decisions, design docs, and insights. Query the notebook for grounded answers instead of relying on ephemeral chat context.

Incorrect -- relying on Claude's memory for past decisions:

User: "What did we decide about the auth architecture last week?"
Claude: "I don't have context from previous sessions..."

Correct -- add decisions as sources, query later:

# 1. Create a project notebook
notebook_create(title="Project Alpha Decisions")

# 2. Add decision documents as sources
source_add(notebook_id="...", content="ADR-001: Use JWT for auth because...")
source_add(notebook_id="...", content="ADR-002: PostgreSQL over MongoDB for...")

# 3. Capture new insights with notes
note(notebook_id="...", content="Perf test showed 2x latency with Redis cache miss")

# 4. Query for grounded answers
notebook_query(notebook_id="...", query="What auth approach did we choose and why?")

Key rules:

  • One notebook per project or domain -- avoid mixing unrelated topics
  • Add decision records, design docs, and meeting notes as sources
  • Use note tool to capture in-session insights for future retrieval
  • Use notebook_query for grounded answers backed by actual sources
  • Periodically prune outdated sources to keep answers relevant

Sharing & Collaboration — MEDIUM

Sharing & Collaboration

Control notebook access with sharing tools. Always check current sharing status before modifying access -- public links expose all notebook content.

Incorrect -- sharing publicly without reviewing content:

# Dangerous: makes everything in the notebook publicly accessible
notebook_share_public(notebook_id="...", enabled=True)
# Notebook contains internal security advisories -- now exposed

Correct -- check status, review, then share deliberately:

# 1. Check current sharing settings
status = notebook_share_status(notebook_id="...")
# Shows: public link (on/off), list of collaborators, permission levels

# 2. Review notebook content for sensitive material
notebook_query(notebook_id="...", query="Does this notebook contain credentials or internal secrets?")

# 3a. Share with specific collaborators (preferred)
notebook_share_invite(
    notebook_id="...",
    email="colleague@company.com",
    role="reader"  # or "editor"
)

# 3b. Or enable public link (use with caution)
notebook_share_public(notebook_id="...", enabled=True)

Batch sharing — invite multiple collaborators at once:

# Single call for multiple invites with mixed roles
notebook_share_batch(
    notebook_id="...",
    invitations=[
        {"email": "dev@company.com", "role": "editor"},
        {"email": "pm@company.com", "role": "reader"},
        {"email": "lead@company.com", "role": "editor"}
    ],
    confirm=True
)

Key rules:

  • Always call notebook_share_status before modifying sharing settings
  • Prefer notebook_share_invite (or notebook_share_batch for multiple) over public links
  • Review notebook content for sensitive material before enabling public access
  • Use role="reader" by default -- only grant "editor" when collaboration is needed
  • Use notebook_share_batch for 3+ collaborators -- single API call vs. multiple invites
  • Disable public links when no longer needed: notebook_share_public(enabled=False)
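The invitations payload for notebook_share_batch can be built from a plain email-to-role mapping, with validation up front. A sketch (build_invitations is a hypothetical helper; the reader/editor role set mirrors the roles used above):

```python
def build_invitations(roles_by_email: dict[str, str]) -> list[dict[str, str]]:
    """Turn {email: role} into the invitations list shape used by
    notebook_share_batch, rejecting unknown roles before any API call."""
    allowed = {"reader", "editor"}
    bad = {r for r in roles_by_email.values() if r not in allowed}
    if bad:
        raise ValueError(f"unknown roles: {sorted(bad)}")
    return [{"email": e, "role": r} for e, r in sorted(roles_by_email.items())]
```

Failing fast on a bad role avoids a partially applied batch invite.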

Studio Content Generation — MEDIUM

Studio Content Generation

NotebookLM Studio generates 10 artifact types from notebook sources. All generation is async -- create, poll status, then download.

Incorrect -- calling studio_create and waiting synchronously:

# Blocks for 2-5 minutes with no feedback
result = studio_create(notebook_id="...", artifact_type="audio")
# User sees nothing until completion or timeout

Correct -- create, poll, download:

# 1. Create the artifact (returns immediately with artifact ID)
artifact = studio_create(notebook_id="...", artifact_type="audio")

# 2. Poll for completion
status = studio_status(artifact_id=artifact.id)
# status: "pending" | "processing" | "completed" | "failed"

# 3. Download when completed
download_artifact(artifact_id=artifact.id, path="./output/podcast.mp3")

All 10 studio artifact types:

| Type | Output | Use case |
|---|---|---|
| audio | MP3 podcast | 4 formats (brief, critique, debate, deep_dive) + 3 lengths (short, default, long) |
| video (explainer) | MP4 video | Visual summary with narration (slides + voiceover). Use video_format="explainer" |
| video | MP4 video | 3 formats: explainer, brief, cinematic. 9 visual styles. Cinematic requires Plus/Ultra, English only, max 20/day |
| mind_map | SVG/PNG | Visual topic relationships |
| quiz | JSON | Test comprehension of sources |
| flashcards | JSON | Study aid from source material |
| slide_deck | PDF/PPTX | Presentation from sources |
| infographic | PNG | Visual data summary (11 visual styles available) |
| data_table | CSV/JSON | Structured data extraction |
| report | PDF/Markdown | Comprehensive written summary |

Infographic visual styles (via infographic_style param on studio_create):

| Style | Description |
|---|---|
| auto_select | Let NotebookLM choose (default) |
| sketch_note | Hand-drawn sketch style |
| professional | Clean corporate layout |
| bento_grid | Modular grid layout |
| editorial | Magazine-style design |
| instructional | Step-by-step educational |
| bricks | Block/brick composition |
| clay | 3D clay/plasticine look |
| anime | Anime-inspired visuals |
| kawaii | Cute Japanese style |
| scientific | Technical/scientific format |

Revising slide decks with studio_revise:

# 1. Get the artifact_id from studio_status
status = studio_status(notebook_id="...")

# 2. Revise specific slides (creates a NEW deck, original is preserved)
studio_revise(
    notebook_id="...",
    artifact_id="existing-deck-id",
    slide_instructions=[
        {"slide": 1, "instruction": "Make the title more concise"},
        {"slide": 3, "instruction": "Add a comparison table"}
    ],
    confirm=True
)

# 3. Poll studio_status for the new revised deck
studio_status(notebook_id="...")

Exporting artifacts to Google Workspace with export_artifact:

# Export a report to Google Docs
export_artifact(notebook_id="...", artifact_id="...", export_type="docs")

# Export a data table to Google Sheets
export_artifact(notebook_id="...", artifact_id="...", export_type="sheets")

Multilingual content generation:

# Generate a Hebrew infographic
studio_create(
    notebook_id="...", artifact_type="infographic",
    infographic_style="bento_grid", language="he", confirm=True
)

# Generate a Japanese audio overview
studio_create(
    notebook_id="...", artifact_type="audio",
    audio_format="deep_dive", language="ja", confirm=True
)

Supported language values (BCP-47): en, he, ar, es, fr, de, ja, ko, pt, zh, ru, hi, and more. Defaults to en or NOTEBOOKLM_HL env var.

Audio format options:

| Format | Description |
|---|---|
| deep_dive | Extended conversational podcast (default) |
| brief | Short summary podcast |
| critique | Critical analysis format |
| debate | Two-sided debate format |

Video format and visual style options:

| Video format | Style options |
|---|---|
| explainer | auto_select, classic, whiteboard, kawaii, anime, watercolor, retro_print, heritage, paper_craft |
| brief | Same style options |

Key rules:

  • Always use the poll pattern: studio_create -> studio_status -> download_artifact
  • Generation takes 2-5 minutes -- inform the user and poll periodically
  • Check studio_status before attempting download -- downloading a pending artifact fails
  • Use audio (e.g. audio_format="brief") for quick summaries, report for comprehensive analysis
  • All artifact types require at least one source in the notebook
  • studio_revise only works on slide decks -- it creates a new artifact, preserving the original
  • Use export_artifact to push reports to Google Docs or data tables to Google Sheets
  • Set language for non-English content -- affects generated text, narration, and labels
  • Use confirm=True on all studio_create calls (required parameter)

Versioned Notebooks Per Release — HIGH

Versioned Notebooks Per Release

Create a dedicated NotebookLM notebook for each OrchestKit release to preserve release context, changelog details, and key skill diffs as a queryable knowledge base.

When to Create

  • On every minor or major release (e.g., v7.0.0, v7.1.0)
  • Patch releases can be appended to the existing minor notebook

Notebook Naming

OrchestKit v{MAJOR}.{MINOR} Release Notes

Examples: OrchestKit v7.0 Release Notes, OrchestKit v7.1 Release Notes

Sources to Upload

For each release notebook, add these sources:

| Source | Type | Purpose |
|---|---|---|
| CHANGELOG.md (release section) | text | Full changelog for the release |
| Key skill diffs | text | Before/after for skills with significant changes |
| Migration guide (if breaking) | text | Breaking changes and migration steps |
| PR descriptions | text | Merged PR summaries for context |
| Updated CLAUDE.md | file | Current project instructions snapshot |

Workflow

# 1. Create release notebook
notebook_create(title="OrchestKit v7.0 Release Notes")

# 2. Add changelog section
source_add(notebook_id="...", type="text",
  title="CHANGELOG v7.0.0",
  content="<paste relevant CHANGELOG.md section>")

# 3. Add key skill diffs (significant changes only)
source_add(notebook_id="...", type="text",
  title="Skill Changes: implement",
  content="<diff summary of implement skill changes>")

# 4. Add migration guide for breaking changes
source_add(notebook_id="...", type="text",
  title="Migration Guide v6 to v7",
  content="<breaking changes and migration steps>")

# 5. Share with team
notebook_share_invite(notebook_id="...",
  email="your-email@example.com", role="editor")

Incorrect:

# Dump everything into one shared notebook
source_add(notebook_id="shared", type="text", title="v7 + v6 + v5 notes",
  content="<all changelogs mixed together>")

Correct:

# One notebook per minor version with focused sources
notebook_create(title="OrchestKit v7.0 Release Notes")
source_add(notebook_id="v7", type="text", title="CHANGELOG v7.0.0",
  content="<v7.0.0 changelog section only>")

Key Rules

  • One notebook per minor version — do not mix v7.0 and v7.1 content
  • Upload CHANGELOG section as type=text (not file) for better chunking
  • Include skill diffs only for skills with significant functional changes (not cosmetic edits)
  • Add a note summarizing the release theme after uploading sources
  • Generate an audio overview via studio_create(artifact_type="audio", audio_format="deep_dive", confirm=True) for each release
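The "upload the CHANGELOG section" rule can be automated by slicing the release's section out of CHANGELOG.md before calling source_add. A sketch that assumes "## " release headings (an assumption about your changelog format):

```python
def changelog_section(changelog: str, version: str) -> str:
    """Return the `## ...<version>...` section of a changelog, up to the
    next `## ` heading or end of file."""
    lines = changelog.splitlines()
    start = next((i for i, line in enumerate(lines)
                  if line.startswith("## ") and version in line), None)
    if start is None:
        raise ValueError(f"version {version} not found")
    end = next((i for i in range(start + 1, len(lines))
                if lines[i].startswith("## ")), len(lines))
    return "\n".join(lines[start:end]).strip()
```

The extracted text would be passed as the content of the type="text" source in step 2 of the workflow above.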

Querying Release History

# Query specific release context
notebook_query(notebook_id="...",
  query="What changed in the implement skill for v7.0?")

# Compare across releases (query the relevant notebook)
notebook_query(notebook_id="...",
  query="What breaking changes were introduced?")