Memory Fabric
Knowledge graph memory orchestration - entity extraction, query parsing, deduplication, and cross-reference boosting. Use when designing memory orchestration.
Auto-activated — this skill loads automatically when Claude detects matching context.
Memory Fabric - Graph Orchestration
Knowledge graph orchestration via mcp__memory__* for entity extraction, query parsing, deduplication, and cross-reference boosting.
Overview
- Comprehensive memory retrieval from the knowledge graph
- Cross-referencing entities within graph storage
- Ensuring no relevant memories are missed
- Building unified context from graph queries
Architecture Overview
┌─────────────────────────────────────┐
│         Memory Fabric Layer         │
├─────────────────────────────────────┤
│  Query Parser      Query Executor   │
│        │                  │         │
│        ▼                  ▼         │
│        Graph Query Dispatch         │
│                  │                  │
│                  ▼                  │
│           mcp__memory__*            │
│          (Knowledge Graph)          │
│                  │                  │
│                  ▼                  │
│          Result Normalizer          │
│                  │                  │
│                  ▼                  │
│  Deduplication Engine (>85% sim)    │
│                  │                  │
│                  ▼                  │
│       Cross-Reference Booster       │
│                  │                  │
│                  ▼                  │
│ Final Ranking: recency × relevance  │
│         × source_authority          │
└─────────────────────────────────────┘
Unified Search Workflow
Step 1: Parse Query
Extract search intent and entity hints from natural language:
Input: "What pagination approach did database-engineer recommend?"
Parsed:
- query: "pagination approach recommend"
- entity_hints: ["database-engineer", "pagination"]
- intent: "decision" or "pattern"
Step 2: Execute Graph Query
Query Graph (entity search):
mcp__memory__search_nodes({
query: "pagination database-engineer"
})
Step 3: Normalize Results
Transform results to common format:
{
"id": "graph:original_id",
"text": "content text",
"source": "graph",
"timestamp": "ISO8601",
"relevance": 0.0-1.0,
"entities": ["entity1", "entity2"],
"metadata": {}
}
Step 4: Deduplicate (>85% Similarity)
When two results have >85% text similarity:
- Keep the one with higher relevance score
- Merge metadata
- Mark as "cross-validated" for authority boost
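The merge rules above can be sketched as follows. This is an illustrative sketch, not part of the MCP API: `jaccard`, `mergeResults`, and `deduplicate` are hypothetical helper names, and token-level Jaccard similarity stands in for the similarity measure.

```javascript
// Token-level Jaccard similarity between two result texts.
function jaccard(a, b) {
  const tokens = (s) =>
    new Set(s.toLowerCase().replace(/[^\w\s]/g, "").split(/\s+/).filter(Boolean));
  const ta = tokens(a);
  const tb = tokens(b);
  const intersection = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : intersection / union;
}

// Merge two near-duplicates: keep the higher-relevance result,
// combine metadata, and mark the survivor as cross-validated.
function mergeResults(a, b) {
  const [keep, drop] = a.relevance >= b.relevance ? [a, b] : [b, a];
  return { ...keep, metadata: { ...drop.metadata, ...keep.metadata, cross_validated: true } };
}

// Collapse results whose pairwise similarity exceeds the threshold.
function deduplicate(results, threshold = 0.85) {
  const out = [];
  for (const r of results) {
    const i = out.findIndex((o) => jaccard(o.text, r.text) > threshold);
    if (i >= 0) out[i] = mergeResults(out[i], r);
    else out.push(r);
  }
  return out;
}
```

The `cross_validated` flag set here feeds the authority term in Step 6.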
Step 5: Cross-Reference Boost
If a result mentions an entity that exists elsewhere in the graph:
- Boost relevance score by 1.2x
- Add graph relationships to result metadata
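A minimal sketch of the boost pass, assuming each graph entity carries a `name` and a `relations` list (the function name is illustrative):

```javascript
const BOOST_FACTOR = 1.2;

// Boost any result that mentions a known graph entity,
// and attach that entity's relations to the result metadata.
function applyCrossReferenceBoost(results, graphEntities) {
  for (const result of results) {
    for (const entity of graphEntities) {
      if (result.text.toLowerCase().includes(entity.name.toLowerCase())) {
        result.relevance *= BOOST_FACTOR;
        result.graph_relations = entity.relations ?? [];
        result.cross_referenced = true;
      }
    }
  }
  return results;
}
```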
Step 6: Final Ranking
Score = (0.3 × recency_factor) + (0.5 × relevance) + (0.2 × source_authority)
| Factor | Weight | Description |
|---|---|---|
| recency | 0.3 | Newer memories rank higher |
| relevance | 0.5 | Semantic match quality |
| source_authority | 0.2 | Graph entities boost, cross-validated boost |
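Combining the weights from the table gives a scoring function along these lines (a sketch: the 30-day linear recency decay and the 1.3/1.1 authority values match the reference merging algorithm; the function name is illustrative):

```javascript
// Weighted ranking score: 0.3 recency + 0.5 relevance + 0.2 authority.
// recencyDays is the memory's age in days; recency decays linearly
// over 30 days and is floored at 0.1 so old memories never hit zero.
function finalScore({ recencyDays, relevance, crossValidated }) {
  const recency = Math.max(0.1, 1.0 - recencyDays / 30);
  const authority = crossValidated ? 1.3 : 1.1; // cross-validated results rank higher
  return recency * 0.3 + relevance * 0.5 + authority * 0.2;
}
```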
Result Format
{
"query": "original query",
"total_results": 4,
"sources": {
"graph": 4
},
"results": [
{
"id": "graph:cursor-pagination",
"text": "Use cursor-based pagination for scalability",
"score": 0.92,
"source": "graph",
"timestamp": "2026-01-15T10:00:00Z",
"entities": ["cursor-pagination", "database-engineer"],
"graph_relations": [
{ "from": "database-engineer", "relation": "recommends", "to": "cursor-pagination" }
]
}
]
}
Entity Extraction
Memory Fabric extracts entities from natural language for graph storage:
Input: "database-engineer uses pgvector for RAG applications"
Extracted:
- Entities:
- { name: "database-engineer", type: "agent" }
- { name: "pgvector", type: "technology" }
- { name: "RAG", type: "pattern" }
- Relations:
- { from: "database-engineer", relation: "uses", to: "pgvector" }
- { from: "pgvector", relation: "used_for", to: "RAG" }
Load Read("${CLAUDE_SKILL_DIR}/references/entity-extraction.md") for detailed extraction patterns.
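The example above can be sketched in code. The keyword lists and the single "uses" pattern here are illustrative stand-ins for the fuller pattern sets described in references/entity-extraction.md:

```javascript
// Illustrative keyword lists; real extraction uses the reference patterns.
const KNOWN_AGENTS = ["database-engineer", "backend-system-architect"];
const KNOWN_TECHNOLOGIES = ["pgvector", "PostgreSQL", "FastAPI"];

function extractEntities(text) {
  const lower = text.toLowerCase();
  const entities = [];
  for (const name of KNOWN_AGENTS) {
    if (lower.includes(name)) entities.push({ name, type: "agent" });
  }
  for (const name of KNOWN_TECHNOLOGIES) {
    if (lower.includes(name.toLowerCase())) entities.push({ name, type: "technology" });
  }
  // Single "X uses Y" relation pattern, per the relation table.
  const relations = [];
  const m = text.match(/([\w-]+) uses ([\w-]+)/);
  if (m) relations.push({ from: m[1], relation: "uses", to: m[2] });
  return { entities, relations };
}
```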
Graph Relationship Traversal
Memory Fabric supports multi-hop graph traversal for complex relationship queries.
Example: Multi-Hop Query
Query: "What did database-engineer recommend about pagination?"
1. Search for "database-engineer pagination"
→ Find entity: "database-engineer recommends cursor-pagination"
2. Traverse related entities (depth 2)
→ Traverse: database-engineer → recommends → cursor-pagination
→ Find: "cursor-pagination uses offset-based approach"
3. Return results with relationship context
Integration with Graph Memory
Memory Fabric uses the knowledge graph for entity relationships:
- Graph search via mcp__memory__search_nodes finds matching entities
- Graph traversal expands context via entity relationships
- Cross-reference boosts relevance when entities match
Integration Points
With memory Skill
When memory search runs, it can optionally use Memory Fabric for unified results.
With Hooks
prompt/memory-fabric-context.sh - Inject unified context at session start
stop/memory-fabric-sync.sh - Sync entities to graph at session end
Configuration
# Environment variables
MEMORY_FABRIC_DEDUP_THRESHOLD=0.85 # Similarity threshold for merging
MEMORY_FABRIC_BOOST_FACTOR=1.2 # Cross-reference boost multiplier
MEMORY_FABRIC_MAX_RESULTS=20 # Max results per source
MCP Requirements
Required: Knowledge graph MCP server:
{
"mcpServers": {
"memory": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-memory"]
}
}
}
Error Handling
| Scenario | Behavior |
|---|---|
| graph unavailable | Error - graph is required |
| Query empty | Return recent memories from graph |
Related Skills
ork:memory - User-facing memory operations (search, load, sync, viz)
ork:remember - User-facing memory storage
caching - Caching layer that can use fabric
Key Decisions
| Decision | Choice | Rationale |
|---|---|---|
| Dedup threshold | 85% | Balances catching duplicates vs. preserving nuance |
| Parallel queries | Always | Independent graph queries run concurrently, reducing latency |
| Cross-ref boost | 1.2x | Validated info more trustworthy but not dominant |
| Ranking weights | 0.3/0.5/0.2 | Relevance most important, recency secondary |
Rules (2)
Validate graph integrity after mutations to prevent orphaned nodes and broken relations — HIGH
Validate Graph Integrity After Mutations
Why
Knowledge graph mutations (create, update, delete) can leave the graph in an inconsistent state: orphaned nodes with no relations, dangling edges pointing to deleted entities, or duplicate nodes with divergent metadata. Queries against inconsistent graphs return wrong results.
Rule
After any graph mutation:
- Verify the mutated node exists and has expected properties
- Check that all relations reference valid nodes on both ends
- Detect and flag orphaned nodes (no incoming or outgoing relations)
- Run deduplication check if a new node was created
Incorrect — mutate without validation
// Create entity and relation, assume success
await mcp__memory__create_entities({
entities: [{ name: "cursor-pagination", entityType: "pattern" }]
});
await mcp__memory__create_relations({
relations: [{
from: "database-engineer",
to: "cursor-pagination",
relationType: "recommends"
}]
});
// No verification — what if "database-engineer" doesn't exist?
// Result: dangling relation with invalid "from" reference
Correct — mutate then validate
// 1. Create entity
await mcp__memory__create_entities({
entities: [{ name: "cursor-pagination", entityType: "pattern" }]
});
// 2. Verify the target node exists before creating relation
const sourceCheck = await mcp__memory__search_nodes({
query: "database-engineer"
});
if (!sourceCheck.nodes?.some(n => n.name === "database-engineer")) {
// Source node missing — create it first
await mcp__memory__create_entities({
entities: [{ name: "database-engineer", entityType: "agent" }]
});
}
// 3. Create relation with both ends validated
await mcp__memory__create_relations({
relations: [{
from: "database-engineer",
to: "cursor-pagination",
relationType: "recommends"
}]
});
// 4. Verify relation was created
const verification = await mcp__memory__search_nodes({
query: "cursor-pagination"
});
const node = verification.nodes?.find(n => n.name === "cursor-pagination");
if (!node) {
console.error("Graph mutation failed: node not found after create");
}
Integrity Checks
| Check | When | Action on Failure |
|---|---|---|
| Node exists after create | After every create | Retry once, then error |
| Both relation ends exist | Before create_relations | Create missing node first |
| No duplicate names | Before create_entities | Merge with existing node |
| Orphan detection | After delete operations | Log warning, queue cleanup |
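Orphan detection from the table can be expressed as a pure check over nodes and relations. A sketch, assuming nodes carry a `name` and relations carry `from`/`to` (the function name is hypothetical):

```javascript
// Return nodes that participate in no relation, as either source or target.
function findOrphans(nodes, relations) {
  const connected = new Set();
  for (const rel of relations) {
    connected.add(rel.from);
    connected.add(rel.to);
  }
  return nodes.filter((n) => !connected.has(n.name));
}
```

Running this after delete operations produces the list to log and queue for cleanup.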
Deduplication on Create
// Before creating, check for existing node
const existing = await mcp__memory__search_nodes({
query: "cursor-pagination"
});
const match = existing.nodes?.find(
n => n.name === "cursor-pagination" || n.name === "cursor_pagination"
);
if (match) {
// Merge observations into existing node instead of creating duplicate
await mcp__memory__add_observations({
observations: [{ entityName: match.name, contents: ["New observation"] }]
});
} else {
await mcp__memory__create_entities({
entities: [{ name: "cursor-pagination", entityType: "pattern" }]
});
}
Detect and prune stale graph nodes older than threshold to keep memory relevant — MEDIUM
Detect and Prune Stale Nodes
Why
Knowledge graphs accumulate nodes over time. Without staleness detection, queries return outdated decisions ("use MongoDB" from 6 months ago) alongside current ones ("migrated to Postgres last week"). The recency factor in ranking helps, but nodes beyond a threshold should be flagged or pruned.
Rule
- Define a staleness threshold (default: 90 days for decisions, 180 days for patterns)
- When query results include old nodes, flag them as potentially stale
- Before acting on stale data, verify it is still current
- Prune nodes only after user confirmation, never automatically
Incorrect — treat all graph results as current
// Query returns a 6-month-old decision
const results = await mcp__memory__search_nodes({ query: "database choice" });
// Use first result without checking recency
const decision = results.nodes[0];
// decision: "Use MongoDB for user profiles" (from 6 months ago)
// Reality: team migrated to Postgres 3 months ago
console.log(`Current approach: ${decision.observations[0]}`);
// Gives outdated guidance with no warning
Correct — flag stale results and verify
const STALE_THRESHOLDS = {
decision: 90, // days
pattern: 180,
technology: 120,
default: 90
};
function isStale(node, thresholds) {
const nodeDate = new Date(node.timestamp || node.lastUpdated);
const daysSince = (Date.now() - nodeDate.getTime()) / (1000 * 60 * 60 * 24);
const threshold = thresholds[node.entityType] || thresholds.default;
return { stale: daysSince > threshold, daysSince: Math.round(daysSince) };
}
const results = await mcp__memory__search_nodes({ query: "database choice" });
for (const node of results.nodes) {
const { stale, daysSince } = isStale(node, STALE_THRESHOLDS);
if (stale) {
console.warn(
`STALE NODE: "${node.name}" is ${daysSince} days old ` +
`(threshold: ${STALE_THRESHOLDS[node.entityType] || STALE_THRESHOLDS.default}d). ` +
`Verify before using.`
);
}
}
Pruning Protocol
Never auto-delete. Follow this sequence:
// 1. Identify stale candidates
const staleNodes = results.nodes.filter(n => isStale(n, STALE_THRESHOLDS).stale);
// 2. Present to user for review
const report = staleNodes.map(n => ({
name: n.name,
type: n.entityType,
age: `${isStale(n, STALE_THRESHOLDS).daysSince} days`,
lastObservation: n.observations?.slice(-1)[0] || "none"
}));
console.log("Stale nodes for review:", JSON.stringify(report, null, 2));
// 3. Only prune after explicit user confirmation
// await mcp__memory__delete_entities({ entityNames: confirmedDeletions });
Staleness Thresholds
| Entity Type | Threshold | Rationale |
|---|---|---|
| decision | 90 days | Decisions get revisited quarterly |
| pattern | 180 days | Patterns are more stable |
| technology | 120 days | Tech stack changes seasonally |
| preference | 365 days | User preferences rarely change |
| agent | Never | Agent definitions are structural |
References (2)
Entity Extraction
Entity Extraction
Extract entities from natural language for graph memory storage.
Entity Types
| Type | Pattern | Examples |
|---|---|---|
| agent | OrchestKit agent names | database-engineer, backend-system-architect |
| technology | Known tech keywords | pgvector, FastAPI, PostgreSQL, React |
| pattern | Design/architecture patterns | cursor-pagination, CQRS, event-sourcing |
| decision | "decided", "chose", "will use" | Architecture choices |
| blocker | "blocked", "issue", "problem" | Identified obstacles |
Extraction Patterns
Agent Detection
(database-engineer|backend-system-architect|frontend-ui-developer|
security-auditor|test-generator|workflow-architect|llm-integrator|
data-pipeline-engineer|[a-z]+-[a-z]+-?[a-z]*)
Technology Detection
Known technologies: pgvector, PostgreSQL, FastAPI, SQLAlchemy, React, TypeScript, LangGraph, Redis, Celery, Docker, Kubernetes
Relation Extraction
| Pattern | Relation Type |
|---|---|
| "X uses Y" | uses |
| "X recommends Y" | recommends |
| "X requires Y" | requires |
| "X blocked by Y" | blocked_by |
| "X depends on Y" | depends_on |
| "X for Y" / "X used for Y" | used_for |
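The table maps to regexes along these lines (a sketch: entity names are matched loosely as runs of word characters and hyphens, and only the first match per pattern is taken):

```javascript
// Pattern → relation type, mirroring the relation table.
const RELATION_PATTERNS = [
  [/([\w-]+) uses ([\w-]+)/, "uses"],
  [/([\w-]+) recommends ([\w-]+)/, "recommends"],
  [/([\w-]+) requires ([\w-]+)/, "requires"],
  [/([\w-]+) blocked by ([\w-]+)/, "blocked_by"],
  [/([\w-]+) depends on ([\w-]+)/, "depends_on"],
  [/([\w-]+) (?:used )?for ([\w-]+)/, "used_for"],
];

function extractRelations(text) {
  const relations = [];
  for (const [pattern, relationType] of RELATION_PATTERNS) {
    const m = text.match(pattern);
    if (m) relations.push({ from: m[1], relation: relationType, to: m[2] });
  }
  return relations;
}
```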
Extraction Algorithm
import re

# KNOWN_AGENTS, KNOWN_TECHNOLOGIES, and RELATION_PATTERNS are module-level lists
def extract_entities(text):
entities = []
relations = []
# 1. Find agents
for agent in KNOWN_AGENTS:
if agent in text.lower():
entities.append({"name": agent, "type": "agent"})
# 2. Find technologies
for tech in KNOWN_TECHNOLOGIES:
if tech.lower() in text.lower():
entities.append({"name": tech, "type": "technology"})
# 3. Extract relations via patterns
for pattern, relation_type in RELATION_PATTERNS:
matches = re.findall(pattern, text)
for from_entity, to_entity in matches:
relations.append({
"from": from_entity,
"relation": relation_type,
"to": to_entity
})
return {"entities": entities, "relations": relations}
Graph Storage Format
Create entities:
mcp__memory__create_entities({
entities: [
{ name: "database-engineer", entityType: "agent", observations: ["recommends pgvector"] },
{ name: "pgvector", entityType: "technology", observations: ["vector extension for PostgreSQL"] }
]
})
Create relations:
mcp__memory__create_relations({
relations: [
{ from: "database-engineer", to: "pgvector", relationType: "recommends" }
]
})
Observation Patterns
When adding observations to existing entities:
mcp__memory__add_observations({
observations: [
{ entityName: "pgvector", contents: ["supports HNSW indexing", "requires PostgreSQL 15+"] }
]
})
Query Merging
Query Merging Algorithm
Algorithm for querying and ranking results from the knowledge graph via mcp__memory.
Query Execution
Execute graph query via MCP:
# Query knowledge graph (MCP)
mcp__memory__search_nodes({ query })
Result Normalization
Transform graph results to unified format:
{
"id": "graph:{entity_name}",
"text": "{observations joined}",
"source": "graph",
"timestamp": "null",
"relevance": "1.0 for exact match, 0.8 for partial",
"entities": "[name, related entities]",
"metadata": { "entityType": "{type}", "relations": [] }
}
Deduplication Logic
Calculate similarity using normalized text comparison:
def similarity(text_a, text_b):
# Normalize: lowercase, remove punctuation, tokenize
tokens_a = normalize(text_a)
tokens_b = normalize(text_b)
# Jaccard similarity
intersection = len(tokens_a & tokens_b)
union = len(tokens_a | tokens_b)
return intersection / union if union > 0 else 0
# Merge if similarity > 0.85
if similarity(result_a.text, result_b.text) > DEDUP_THRESHOLD:
merged = merge_results(result_a, result_b)
Merge Strategy:
- Keep text from higher-relevance result
- Combine entities from both
- Preserve metadata with source_* prefix
- Set cross_validated: true
Cross-Reference Boosting
When a result mentions a graph entity found elsewhere in the graph:
for result in graph_results:
for entity in all_graph_entities:
if entity.name.lower() in result.text.lower():
result.relevance *= BOOST_FACTOR # 1.2x
result.graph_relations = entity.relations
result.cross_referenced = True
Final Ranking Formula
def compute_score(result):
# Recency: decay over 30 days
age_days = (now - result.timestamp).days
recency = max(0.1, 1.0 - (age_days / 30))
# Source authority
authority = 1.0
if result.cross_validated:
authority = 1.3
else:
authority = 1.1
# Final score
return (recency * 0.3) + (result.relevance * 0.5) + (authority * 0.2)
Output Assembly
{
"query": "original query",
"total_results": 4,
"sources": { "graph": 4 },
"results": "[sorted by score descending]"
}