OrchestKit v7.0.1 — 69 skills, 38 agents, 95 hooks with Opus 4.6 support

NotebookLM integration patterns for external RAG, research synthesis, studio content generation, and knowledge management. Use when creating notebooks, adding sources, generating audio/video, or querying NotebookLM via MCP.


NotebookLM

NotebookLM = external RAG engine that offloads reading from your context window. Uses the notebooklm-mcp-cli MCP server (PyPI) to create notebooks, manage sources, generate content, and query with grounded AI responses.

Disclaimer: Uses internal undocumented Google APIs via browser authentication. Sessions last ~20 minutes. API may change without notice.

Prerequisites

  1. Install: uv tool install notebooklm-mcp-cli (or pip install notebooklm-mcp-cli)
  2. Authenticate: nlm login (opens browser, session ~20 min)
  3. Configure MCP: nlm setup add claude-code (auto-configures .mcp.json)
  4. Alternative: nlm skill install for guided setup with verification
  5. Verify: nlm login --check to confirm active session

Decision Tree — Which Rule to Read

What are you trying to do?

├── Create / manage notebooks
│   ├── List / get / rename ──────► notebook_list, notebook_get, notebook_rename
│   ├── Create new notebook ──────► notebook_create
│   └── Delete notebook ──────────► notebook_delete (irreversible!)

├── Add sources to a notebook
│   ├── URL / YouTube ────────────► source_add(type=url)
│   ├── Plain text ───────────────► source_add(type=text)
│   ├── Local file ───────────────► source_add(type=file)
│   ├── Google Drive ─────────────► source_add(type=drive)
│   └── Manage sources ──────────► rules/setup-quickstart.md

├── Query a notebook (AI chat)
│   ├── Ask questions ────────────► notebook_query
│   └── Configure chat style ────► chat_configure

├── Generate studio content
│   └── 9 artifact types ────────► rules/workflow-studio-content.md

├── Research & discovery
│   └── Web/Drive research ──────► rules/workflow-research-discovery.md

├── Notes (capture insights)
│   └── Create/list/update/delete ► note (unified tool)

├── Sharing & collaboration
│   └── Public links / invites ──► rules/workflow-sharing-collaboration.md

└── Workflow patterns
    ├── Second brain ─────────────► rules/workflow-second-brain.md
    ├── Research offload ─────────► rules/workflow-research-offload.md
    └── Knowledge base ──────────► rules/workflow-knowledge-base.md

Quick Reference

| Category | Rule | Impact | Key Pattern |
|---|---|---|---|
| Setup | setup-quickstart.md | HIGH | Auth, MCP config, source management, session refresh |
| Workflows | workflow-second-brain.md | HIGH | Decision docs, project hub, agent interop |
| Workflows | workflow-research-offload.md | HIGH | Synthesis, onboarding, token savings |
| Workflows | workflow-knowledge-base.md | HIGH | Debugging KB, security handbook, team knowledge |
| Workflows | workflow-studio-content.md | MEDIUM | 9 artifact types (audio overview, deep dive, slides...) |
| Research | workflow-research-discovery.md | HIGH | Web/Drive research async flow |
| Collaboration | workflow-sharing-collaboration.md | MEDIUM | Public links, collaborator invites |
| Release | workflow-versioned-notebooks.md | HIGH | Per-release notebooks with changelog + diffs |

Total: 8 rules across 5 categories

MCP Tools by API Group

| Group | Tools | Count |
|---|---|---|
| Notebooks | notebook_list, notebook_create, notebook_get, notebook_describe, notebook_rename, notebook_delete | 6 |
| Sources | source_list, source_add, source_list_drive, source_sync_drive, source_delete, source_describe, source_get_content | 7 |
| Querying | notebook_query, chat_configure | 2 |
| Studio | studio_create, studio_status, studio_delete | 3 |
| Research | research_start, research_status, research_import | 3 |
| Sharing | notebook_share_status, notebook_share_public, notebook_share_invite | 3 |
| Notes | note (unified: list/create/update/delete) | 1 (4 actions) |
| Downloads | download_artifact | 1 |
| Auth | save_auth_tokens, refresh_auth | 2 |

Total: 28 tools across 9 groups

Key Decisions

| Decision | Recommendation |
|---|---|
| New notebook vs existing | One notebook per project/topic; add sources to existing |
| Source type | URL for web, text for inline, file for local docs, drive for Google Docs |
| Large sources | Split >50K chars into multiple sources for better retrieval |
| Auth expired? | nlm login --check; sessions last ~20 min, re-auth with nlm login |
| Studio content | Use studio_create, poll with studio_status (generation takes 2-5 min) |
| Research discovery | research_start for web/Drive discovery, then research_import to add findings |
| Release notebooks | One notebook per minor version; upload CHANGELOG + key skill diffs as sources |
| Query vs search | notebook_query for AI-grounded answers; source_get_content for raw text |
| Notes vs sources | Notes for your insights/annotations; sources for external documents |

Example

# 1. Create a notebook for your project
notebook_create(title="Auth Refactor Research")

# 2. Add sources (docs, articles, existing code analysis)
source_add(notebook_id="...", type="url", url="https://oauth.net/2.1/")
source_add(notebook_id="...", type="text", content="Our current auth uses...")
source_add(notebook_id="...", type="file", path="/docs/auth-design.md")

# 3. Query with grounded AI responses
notebook_query(notebook_id="...", query="What are the key differences between OAuth 2.0 and 2.1?")

# 4. Generate a deep dive audio overview
artifact = studio_create(notebook_id="...", type="deep_dive")
studio_status(artifact_id=artifact.id)  # Poll until complete

# 5. Capture insights as notes
note(notebook_id="...", action="create", content="Key takeaway: PKCE is mandatory in 2.1")

Common Mistakes

  • Forgetting auth expiry — Sessions last ~20 min. Always check with nlm login --check before long workflows. Re-auth with nlm login.
  • One giant notebook — Split by project/topic. One notebook with 50 sources degrades retrieval quality.
  • Huge single sources — Split documents >50K characters into logical sections for better chunking and retrieval.
  • Not polling studio_status — Studio content generation takes 2-5 minutes. Poll studio_status instead of assuming instant results.
  • Ignoring source types — Use type=url for web pages (auto-extracts), type=file for local files. Using type=text for a URL gives you the URL string, not the page content.
  • Deleting notebooks without checking — notebook_delete is irreversible. List contents with source_list and note(action=list) first.
  • Skipping research_import — research_start discovers content but does not add it. Use research_import to actually add findings as sources.
  • Raw queries on empty notebooks — notebook_query returns poor results with no sources. Add sources before querying.
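The "huge single sources" guidance above can be sketched as a pre-chunking helper that splits an oversized document before one source_add(type="text") call per chunk. This is an illustrative sketch, not part of notebooklm-mcp-cli: the helper name is hypothetical, the 50,000-character limit comes from the guidance above, and treating blank-line-separated paragraphs as "logical sections" is an assumption.

```python
# Hypothetical helper -- not part of the notebooklm-mcp-cli API.
# Greedily packs paragraphs into chunks no longer than max_chars so each
# chunk can become its own source_add(type="text") call.

MAX_CHARS = 50_000  # limit suggested by the guidance above

def split_for_sources(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split text at paragraph boundaries into <= max_chars chunks."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            current = candidate
            continue
        if current:
            chunks.append(current)
        # A single paragraph longer than max_chars is hard-split.
        while len(para) > max_chars:
            chunks.append(para[:max_chars])
            para = para[max_chars:]
        current = para
    if current:
        chunks.append(current)
    return chunks
```

Each resulting chunk then becomes one source in the same notebook, which keeps retrieval quality high without losing content.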

Related Skills

  • ork:mcp-patterns — MCP server building, security, and composition patterns
  • ork:web-research-workflow — Web research strategies and source evaluation
  • ork:memory — Memory fabric for cross-session knowledge persistence
  • ork:security-patterns — Input sanitization and layered security

Rules (8)

NotebookLM Quick Setup — HIGH

Quick Setup

Authenticate with Google via nlm login, then register the MCP server with Claude Code. Auth sessions expire after ~20 minutes of inactivity.

Incorrect -- manually editing .mcp.json with wrong server path:

{
  "mcpServers": {
    "notebooklm": {
      "command": "node",
      "args": ["./some/wrong/path/server.js"]
    }
  }
}

Correct -- using CLI setup command:

# 1. Authenticate (opens browser for Google OAuth)
nlm login

# 2. Register MCP server with Claude Code
nlm setup add claude-code

# 3. Verify auth is active
nlm login --check

Manual .mcp.json fallback (if nlm setup is unavailable):

{
  "mcpServers": {
    "notebooklm": {
      "command": "nlm",
      "args": ["mcp"]
    }
  }
}

Alternative -- skill-based install:

nlm skill install

Key rules:

  • Always authenticate with nlm login before first use -- browser OAuth flow required
  • Auth sessions last ~20 minutes; re-run nlm login if tools start failing
  • Use nlm login --check to verify session status before long workflows
  • Prefer nlm setup add claude-code over manual .mcp.json editing
  • If setup command fails, use the manual .mcp.json fallback with "command": "nlm"
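Since sessions expire after ~20 minutes, long workflows benefit from a retry wrapper: attempt the tool call, and on failure re-authenticate once and retry. This is a hypothetical sketch under stated assumptions — `call` and `reauth` are injected stand-ins (e.g. `reauth` re-runs `nlm login` out of band); neither the helper nor its error handling is part of notebooklm-mcp-cli.

```python
# Sketch: retry a NotebookLM tool call once after re-authenticating.
# Both callables are hypothetical stand-ins, injected so the pattern
# stays independent of any particular MCP client.
from typing import Callable, TypeVar

T = TypeVar("T")

def with_reauth(call: Callable[[], T], reauth: Callable[[], None]) -> T:
    """Run call(); on any failure, re-authenticate once and retry."""
    try:
        return call()
    except Exception:
        reauth()       # e.g. re-run `nlm login` before retrying
        return call()  # a second failure propagates to the caller
```

Usage would look like `with_reauth(lambda: notebook_query(...), reauth=run_nlm_login)`, keeping the re-auth step out of every call site.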

Knowledge Base Pattern — HIGH

Knowledge Base Pattern

Build dedicated notebooks as curated knowledge bases for debugging, security, and onboarding. Add incident reports, advisories, and runbooks as sources for grounded, verified answers.

Incorrect -- re-investigating a known issue from scratch:

User: "Production is throwing OOM errors again"
Claude: "Let me research possible causes..."
# Wastes time if this was already diagnosed and documented

Correct -- query the debugging knowledge base:

# 1. Create dedicated KB notebooks
notebook_create(title="Debugging KB")
notebook_create(title="Security Handbook")

# 2. Add incident reports and advisories as sources
source_add(notebook_id="debug_kb", type="text", content="INC-042: OOM caused by unbounded cache. Fix: add TTL...")
source_add(notebook_id="security_kb", type="text", content="SEC-007: SQL injection in search endpoint. Fix: parameterize...")

# 3. Query for grounded answers
notebook_query(notebook_id="debug_kb", query="What causes OOM errors and how were they fixed?")
notebook_query(notebook_id="security_kb", query="Known SQL injection patterns in our codebase")

Key rules:

  • Create separate notebooks for debugging, security, and onboarding domains
  • Add incident reports, post-mortems, and security advisories as sources
  • Query KB notebooks before re-investigating known issues
  • Keep sources current -- add new incidents as they are resolved
  • Use for onboarding: new team members query the KB instead of asking around

Research Discovery Pattern — HIGH

Research Discovery Pattern

Use the research API for automated web and Google Drive discovery. The flow is async: start a research task, poll for status, then import discovered sources into your notebook.

Incorrect -- manually searching and adding URLs one by one:

# Tedious and misses relevant content
source_add(notebook_id="...", type="url", url="https://example.com/article1")
source_add(notebook_id="...", type="url", url="https://example.com/article2")
source_add(notebook_id="...", type="url", url="https://example.com/article3")
# Missed 20 other relevant articles

Correct -- automated research flow:

# 1. Start research (searches web and/or Google Drive)
task = research_start(
    notebook_id="...",
    topic="Latest developments in WebAssembly component model",
    sources=["web", "drive"]
)

# 2. Poll for completion (uses Google API quota)
status = research_status(task_id=task.id)
# status: "searching" | "analyzing" | "completed"

# 3. Import discovered sources into notebook
research_import(task_id=task.id, notebook_id="...")
# Adds the most relevant discovered sources automatically

Key rules:

  • Use research_start for broad topic discovery instead of manual URL hunting
  • Always poll with research_status -- research takes 1-3 minutes
  • Research uses Google API quota -- avoid running many parallel research tasks
  • Import results with research_import to add discovered sources to your notebook
  • Combine web and Drive sources for comprehensive coverage
  • Follow up with notebook_query to synthesize the newly imported sources

Research Offload Pattern — HIGH

Research Offload Pattern

Add large documents, codebases, and references as notebook sources instead of pasting them into chat. Use notebook_query for targeted synthesis without consuming context window.

Incorrect -- pasting large content directly into chat:

User: "Here's our entire codebase (100K chars)... now explain the auth flow"
# Wastes context, may hit token limits, loses nuance in truncation

Correct -- add as source, query for synthesis:

# 1. Add large docs as sources (use RepoMix for codebases)
source_add(notebook_id="...", type="url", url="https://docs.example.com/api-reference")
source_add(notebook_id="...", type="text", content=repomix_output)

# 2. Query for specific synthesis
notebook_query(notebook_id="...", query="How does the authentication middleware chain work?")

# 3. Follow up with targeted questions
notebook_query(notebook_id="...", query="What error codes does the auth endpoint return?")

Key rules:

  • Add large documents as sources rather than pasting into chat context
  • Use RepoMix to bundle codebases into a single source for onboarding
  • Query the notebook for synthesis -- NotebookLM reads the full source each time
  • Multiple targeted queries are cheaper than one massive context load
  • Combine with second-brain pattern to build persistent project knowledge
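The "token savings" claim above can be made concrete with a back-of-the-envelope estimate: offloading keeps the full document out of chat, and only the synthesized answer comes back. This is a hypothetical sketch; the ~4 characters-per-token ratio is a common rule of thumb, not an exact tokenizer, and the helper name is illustrative.

```python
# Sketch: rough estimate of context-window tokens saved by adding a large
# document as a NotebookLM source instead of pasting it into chat.
CHARS_PER_TOKEN = 4  # heuristic assumption, not an exact tokenizer

def tokens_saved(doc_chars: int, answer_chars: int) -> int:
    """Tokens kept out of chat: the full document minus the answer."""
    return max(0, (doc_chars - answer_chars) // CHARS_PER_TOKEN)
```

By this heuristic, a 100K-character source queried for a 2K-character answer keeps roughly 24,500 tokens out of the chat context per query.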

Second Brain Pattern — HIGH

Second Brain Pattern

Create a dedicated notebook per project to capture decisions, design docs, and insights. Query the notebook for grounded answers instead of relying on ephemeral chat context.

Incorrect -- relying on Claude's memory for past decisions:

User: "What did we decide about the auth architecture last week?"
Claude: "I don't have context from previous sessions..."

Correct -- add decisions as sources, query later:

# 1. Create a project notebook
notebook_create(title="Project Alpha Decisions")

# 2. Add decision documents as sources
source_add(notebook_id="...", type="text", content="ADR-001: Use JWT for auth because...")
source_add(notebook_id="...", type="text", content="ADR-002: PostgreSQL over MongoDB for...")

# 3. Capture new insights with notes
note(notebook_id="...", action="create", content="Perf test showed 2x latency with Redis cache miss")

# 4. Query for grounded answers
notebook_query(notebook_id="...", query="What auth approach did we choose and why?")

Key rules:

  • One notebook per project or domain -- avoid mixing unrelated topics
  • Add decision records, design docs, and meeting notes as sources
  • Use note tool to capture in-session insights for future retrieval
  • Use notebook_query for grounded answers backed by actual sources
  • Periodically prune outdated sources to keep answers relevant
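Decision records work best as sources when they follow a consistent shape, so later queries can ground answers in the same fields every time. A minimal formatter sketch, assuming a hypothetical ADR layout — the field names and numbering convention are illustrative, not something NotebookLM requires:

```python
# Sketch: format an Architecture Decision Record as a text source for
# source_add(type="text"). The layout is a hypothetical convention.
def format_adr(number: int, title: str, decision: str, rationale: str) -> str:
    """Render a consistently-shaped ADR source body."""
    return (
        f"ADR-{number:03d}: {title}\n"
        f"Decision: {decision}\n"
        f"Rationale: {rationale}"
    )
```

A consistent prefix like `ADR-001:` also makes it easy to ask the notebook "which ADR covers auth?" and get a citable answer.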

Sharing & Collaboration — MEDIUM

Sharing & Collaboration

Control notebook access with sharing tools. Always check current sharing status before modifying access -- public links expose all notebook content.

Incorrect -- sharing publicly without reviewing content:

# Dangerous: makes everything in the notebook publicly accessible
notebook_share_public(notebook_id="...", enabled=True)
# Notebook contains internal security advisories -- now exposed

Correct -- check status, review, then share deliberately:

# 1. Check current sharing settings
status = notebook_share_status(notebook_id="...")
# Shows: public link (on/off), list of collaborators, permission levels

# 2. Review notebook content for sensitive material
notebook_query(notebook_id="...", query="Does this notebook contain credentials or internal secrets?")

# 3a. Share with specific collaborators (preferred)
notebook_share_invite(
    notebook_id="...",
    email="colleague@company.com",
    role="reader"  # or "editor"
)

# 3b. Or enable public link (use with caution)
notebook_share_public(notebook_id="...", enabled=True)

Key rules:

  • Always call notebook_share_status before modifying sharing settings
  • Prefer notebook_share_invite with specific collaborators over public links
  • Review notebook content for sensitive material before enabling public access
  • Use role="reader" by default -- only grant "editor" when collaboration is needed
  • Disable public links when no longer needed: notebook_share_public(enabled=False)

Studio Content Generation — MEDIUM

Studio Content Generation

NotebookLM Studio generates 9 artifact types from notebook sources. All generation is async -- create, poll status, then download.

Incorrect -- calling studio_create and waiting synchronously:

# Blocks for 2-5 minutes with no feedback
result = studio_create(notebook_id="...", type="audio_overview")
# User sees nothing until completion or timeout

Correct -- create, poll, download:

# 1. Create the artifact (returns immediately with artifact ID)
artifact = studio_create(notebook_id="...", type="audio_overview")

# 2. Poll for completion
status = studio_status(artifact_id=artifact.id)
# status: "pending" | "processing" | "completed" | "failed"

# 3. Download when completed
download_artifact(artifact_id=artifact.id, path="./output/podcast.mp3")

All 9 studio artifact types:

| Type | Output | Use case |
|---|---|---|
| audio_overview | MP3 podcast | Summarize sources as conversational audio |
| video_overview | MP4 video | Visual summary with narration |
| mind_map | SVG/PNG | Visual topic relationships |
| quiz | JSON | Test comprehension of sources |
| flashcards | JSON | Study aid from source material |
| slide_deck | PDF/PPTX | Presentation from sources |
| infographic | PNG | Visual data summary |
| data_table | CSV/JSON | Structured data extraction |
| report | PDF/Markdown | Comprehensive written summary |

Key rules:

  • Always use the poll pattern: studio_create -> studio_status -> download_artifact
  • Generation takes 2-5 minutes -- inform the user and poll periodically
  • Check studio_status before attempting download -- downloading a pending artifact fails
  • Use audio_overview for quick summaries, report for comprehensive analysis
  • All artifact types require at least one source in the notebook
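The create -> poll -> download pattern above can be sketched as a bounded polling loop. This is an illustrative sketch: `fetch_status` is an injected stand-in for studio_status (or research_status), the status strings come from the example above, and the interval and timeout defaults are assumptions rather than documented values.

```python
# Sketch: poll an async NotebookLM task until it reaches a terminal state.
# fetch_status stands in for studio_status / research_status; interval and
# timeout are illustrative defaults, not documented values.
import time
from typing import Callable

TERMINAL = {"completed", "failed"}

def poll_until_done(fetch_status: Callable[[], str],
                    interval: float = 10.0,
                    timeout: float = 600.0) -> str:
    """Return the terminal status, or raise TimeoutError after timeout s."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)  # generation typically takes 2-5 minutes
    raise TimeoutError("studio/research task did not finish in time")
```

Usage would be `poll_until_done(lambda: studio_status(artifact_id=artifact.id))`, calling download_artifact only when the returned status is "completed".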

Versioned Notebooks Per Release — HIGH

Versioned Notebooks Per Release

Create a dedicated NotebookLM notebook for each OrchestKit release to preserve release context, changelog details, and key skill diffs as a queryable knowledge base.

When to Create

  • On every minor or major release (e.g., v7.0.0, v7.1.0)
  • Patch releases can be appended to the existing minor notebook

Notebook Naming

OrchestKit v{MAJOR}.{MINOR} Release Notes

Examples: OrchestKit v7.0 Release Notes, OrchestKit v7.1 Release Notes
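The naming convention can be pinned down with a tiny formatter. Only the `OrchestKit v{MAJOR}.{MINOR} Release Notes` pattern comes from this rule; the helper name and the semver parsing are illustrative assumptions.

```python
# Sketch: derive the release-notebook title from a version string, per the
# naming convention above. Patch digits are dropped because patch releases
# reuse the minor notebook.
def release_notebook_title(version: str) -> str:
    """'7.0.1' or 'v7.0.1' -> 'OrchestKit v7.0 Release Notes'."""
    major, minor = version.lstrip("v").split(".")[:2]
    return f"OrchestKit v{major}.{minor} Release Notes"
```

This keeps titles uniform across releases, so the right notebook is easy to find when querying release history later.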

Sources to Upload

For each release notebook, add these sources:

| Source | Type | Purpose |
|---|---|---|
| CHANGELOG.md (release section) | text | Full changelog for the release |
| Key skill diffs | text | Before/after for skills with significant changes |
| Migration guide (if breaking) | text | Breaking changes and migration steps |
| PR descriptions | text | Merged PR summaries for context |
| Updated CLAUDE.md | file | Current project instructions snapshot |

Workflow

# 1. Create release notebook
notebook_create(title="OrchestKit v7.0 Release Notes")

# 2. Add changelog section
source_add(notebook_id="...", type="text",
  title="CHANGELOG v7.0.0",
  content="<paste relevant CHANGELOG.md section>")

# 3. Add key skill diffs (significant changes only)
source_add(notebook_id="...", type="text",
  title="Skill Changes: implement",
  content="<diff summary of implement skill changes>")

# 4. Add migration guide for breaking changes
source_add(notebook_id="...", type="text",
  title="Migration Guide v6 to v7",
  content="<breaking changes and migration steps>")

# 5. Share with team
notebook_share_invite(notebook_id="...",
  email="yonatan2gross@gmail.com", role="editor")

Incorrect:

# Dump everything into one shared notebook
source_add(notebook_id="shared", type="text", title="v7 + v6 + v5 notes",
  content="<all changelogs mixed together>")

Correct:

# One notebook per minor version with focused sources
notebook_create(title="OrchestKit v7.0 Release Notes")
source_add(notebook_id="v7", type="text", title="CHANGELOG v7.0.0",
  content="<v7.0.0 changelog section only>")

Key Rules

  • One notebook per minor version — do not mix v7.0 and v7.1 content
  • Upload CHANGELOG section as type=text (not file) for better chunking
  • Include skill diffs only for skills with significant functional changes (not cosmetic edits)
  • Add a note summarizing the release theme after uploading sources
  • Generate an audio overview via studio_create(type="deep_dive") for each release

Querying Release History

# Query specific release context
notebook_query(notebook_id="...",
  query="What changed in the implement skill for v7.0?")

# Compare across releases (query the relevant notebook)
notebook_query(notebook_id="...",
  query="What breaking changes were introduced?")