Memory Fabric
Knowledge graph memory orchestration via mcp__memory__* - entity extraction, query parsing, deduplication, and cross-reference boosting. Use when designing memory orchestration.
Overview
- Comprehensive memory retrieval from the knowledge graph
- Cross-referencing entities within graph storage
- Ensuring no relevant memories are missed
- Building unified context from graph queries
Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│ Memory Fabric Layer │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Query │ │ Query │ │
│ │ Parser │ │ Executor │ │
│ └──────┬──────┘ └──────┬──────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌──────────────────────────────────────────────┐ │
│ │ Graph Query Dispatch │ │
│ └──────────────────────┬───────────────────────┘ │
│ │ │
│ ┌─────────▼──────────┐ │
│ │ mcp__memory__* │ │
│ │ (Knowledge Graph) │ │
│ └─────────┬──────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────┐ │
│ │ Result Normalizer │ │
│ └─────────────────────┬───────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────┐ │
│ │ Deduplication Engine (>85% sim) │ │
│ └─────────────────────┬───────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────┐ │
│ │ Cross-Reference Booster │ │
│ └─────────────────────┬───────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────┐ │
│ │ Final Ranking: recency × relevance │ │
│ │ × source_authority │ │
│ └─────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘

Unified Search Workflow
Step 1: Parse Query
Extract search intent and entity hints from natural language:
Input: "What pagination approach did database-engineer recommend?"
Parsed:
- query: "pagination approach recommend"
- entity_hints: ["database-engineer", "pagination"]
- intent: "decision" or "pattern"

Step 2: Execute Graph Query
Query Graph (entity search):

```javascript
mcp__memory__search_nodes({
  query: "pagination database-engineer"
})
```

Step 3: Normalize Results
Transform results to common format:

```json
{
  "id": "graph:original_id",
  "text": "content text",
  "source": "graph",
  "timestamp": "ISO8601",
  "relevance": "0.0-1.0",
  "entities": ["entity1", "entity2"],
  "metadata": {}
}
```

Step 4: Deduplicate (>85% Similarity)
When two results have >85% text similarity:
- Keep the one with higher relevance score
- Merge metadata
- Mark as "cross-validated" for authority boost
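The merge rules above can be sketched as follows. This is a minimal illustration using word-level Jaccard similarity; the `dedupe` and `jaccard` names are hypothetical, not part of the skill's API:

```python
def jaccard(a, b):
    # Token-set Jaccard similarity on lowercased words
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def dedupe(results, threshold=0.85):
    # Walk results from highest relevance down; near-duplicates merge
    # into the already-kept (higher-relevance) result.
    kept = []
    for r in sorted(results, key=lambda r: r["relevance"], reverse=True):
        dup = next((k for k in kept if jaccard(k["text"], r["text"]) > threshold), None)
        if dup is not None:
            dup["metadata"].update(r["metadata"])  # merge metadata
            dup["cross_validated"] = True          # flag for authority boost
        else:
            kept.append(r)
    return kept
```

Because results are visited in descending relevance order, the survivor of any merge is always the higher-scoring one.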
Step 5: Cross-Reference Boost
If a result mentions an entity that exists elsewhere in the graph:
- Boost relevance score by 1.2x
- Add graph relationships to result metadata
Step 6: Final Ranking
Score = (0.3 × recency) + (0.5 × relevance) + (0.2 × source_authority)
| Factor | Weight | Description |
|---|---|---|
| recency | 0.3 | Newer memories rank higher |
| relevance | 0.5 | Semantic match quality |
| source_authority | 0.2 | Graph entities boost, cross-validated boost |
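As a worked example of the weighted sum (the factor values below are illustrative; 1.3 corresponds to a cross-validated result's authority):

```python
# Ranking-table weights: recency 0.3, relevance 0.5, source_authority 0.2
WEIGHTS = (0.3, 0.5, 0.2)

def score(recency, relevance, source_authority):
    # Weighted sum of the three ranking factors
    return (recency * WEIGHTS[0]
            + relevance * WEIGHTS[1]
            + source_authority * WEIGHTS[2])

# A recent, highly relevant, cross-validated result (hypothetical values)
print(round(score(0.9, 0.92, 1.3), 2))
```

Note the weights sum to 1.0, so a result with all factors near 1.0 scores near 1.0; the authority factor can push a cross-validated result slightly above a fresher but unvalidated one.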
Result Format
```json
{
  "query": "original query",
  "total_results": 4,
  "sources": {
    "graph": 4
  },
  "results": [
    {
      "id": "graph:cursor-pagination",
      "text": "Use cursor-based pagination for scalability",
      "score": 0.92,
      "source": "graph",
      "timestamp": "2026-01-15T10:00:00Z",
      "entities": ["cursor-pagination", "database-engineer"],
      "graph_relations": [
        { "from": "database-engineer", "relation": "recommends", "to": "cursor-pagination" }
      ]
    }
  ]
}
```

Entity Extraction
Memory Fabric extracts entities from natural language for graph storage:
Input: "database-engineer uses pgvector for RAG applications"
Extracted:
- Entities:
- { name: "database-engineer", type: "agent" }
- { name: "pgvector", type: "technology" }
- { name: "RAG", type: "pattern" }
- Relations:
- { from: "database-engineer", relation: "uses", to: "pgvector" }
- { from: "pgvector", relation: "used_for", to: "RAG" }

See references/entity-extraction.md for detailed extraction patterns.
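A sketch of how relation patterns might be applied to the example above. The regexes here are simplified assumptions for illustration, not the skill's actual pattern set:

```python
import re

# Simplified relation patterns (regex, relation_type) - illustrative only
RELATION_PATTERNS = [
    (r"(\S+) uses (\S+)", "uses"),
    (r"(\S+) for (\S+)", "used_for"),
]

text = "database-engineer uses pgvector for RAG applications"
relations = []
for pattern, relation_type in RELATION_PATTERNS:
    for frm, to in re.findall(pattern, text):
        relations.append({"from": frm, "relation": relation_type, "to": to})

print(relations)
```

On this input, the two patterns recover exactly the `uses` and `used_for` relations listed above.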
Graph Relationship Traversal
Memory Fabric supports multi-hop graph traversal for complex relationship queries.
Example: Multi-Hop Query
Query: "What did database-engineer recommend about pagination?"
1. Search for "database-engineer pagination"
→ Find entity: "database-engineer recommends cursor-pagination"
2. Traverse related entities (depth 2)
→ Traverse: database-engineer → recommends → cursor-pagination
→ Find: "cursor-pagination uses offset-based approach"
3. Return results with relationship context

Integration with Graph Memory
Memory Fabric uses the knowledge graph for entity relationships:
- Graph search via `mcp__memory__search_nodes` finds matching entities
- Graph traversal expands context via entity relationships
- Cross-reference boosts relevance when entities match
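The traversal step can be sketched as a depth-limited walk over (from, relation, to) triples. The triples and the `traverse` helper below are hypothetical illustrations, not the MCP API:

```python
from collections import deque

# Hypothetical relation triples as fetched from the graph
RELATIONS = [
    ("database-engineer", "recommends", "cursor-pagination"),
    ("cursor-pagination", "uses", "offset-based approach"),
]

def traverse(start, max_depth=2):
    # Breadth-first walk, at most max_depth hops from the start entity
    seen, found = {start}, []
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for frm, rel, to in RELATIONS:
            if frm == node and to not in seen:
                seen.add(to)
                found.append((frm, rel, to))
                frontier.append((to, depth + 1))
    return found

print(traverse("database-engineer"))
```

With `max_depth=2`, the walk reaches `cursor-pagination` and then `offset-based approach`, matching the multi-hop example above.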
Integration Points
With memory Skill
When memory search runs, it can optionally use Memory Fabric for unified results.
With Hooks
- `prompt/memory-fabric-context.sh` - Inject unified context at session start
- `stop/memory-fabric-sync.sh` - Sync entities to graph at session end
Configuration
```shell
# Environment variables
MEMORY_FABRIC_DEDUP_THRESHOLD=0.85   # Similarity threshold for merging
MEMORY_FABRIC_BOOST_FACTOR=1.2       # Cross-reference boost multiplier
MEMORY_FABRIC_MAX_RESULTS=20         # Max results per source
```

MCP Requirements
Required: Knowledge graph MCP server:
```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@anthropic/memory-mcp-server"]
    }
  }
}
```

Error Handling
| Scenario | Behavior |
|---|---|
| Graph unavailable | Error - graph is required |
| Query empty | Return recent memories from graph |
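The table's behavior can be sketched as follows; the callables passed in are hypothetical stand-ins for the MCP-backed search functions:

```python
def unified_search(query, graph_search, recent_memories):
    # Empty query: fall back to recent memories from the graph
    if not query or not query.strip():
        return recent_memories()
    # Graph unavailable: let the underlying error propagate - graph is required
    return graph_search(query)
```

There is no silent fallback for graph failures: the error surfaces to the caller by design.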
Related Skills
- `ork:memory` - User-facing memory operations (search, load, sync, viz)
- `ork:remember` - User-facing memory storage
- `caching` - Caching layer that can use fabric
Key Decisions
| Decision | Choice | Rationale |
|---|---|---|
| Dedup threshold | 85% | Balances catching duplicates vs. preserving nuance |
| Parallel queries | Always | Reduces latency; independent queries can run concurrently |
| Cross-ref boost | 1.2x | Validated info more trustworthy but not dominant |
| Ranking weights | 0.3/0.5/0.2 | Relevance most important, recency secondary |
References

Entity Extraction
Extract entities from natural language for graph memory storage.
Entity Types
| Type | Pattern | Examples |
|---|---|---|
| agent | OrchestKit agent names | database-engineer, backend-system-architect |
| technology | Known tech keywords | pgvector, FastAPI, PostgreSQL, React |
| pattern | Design/architecture patterns | cursor-pagination, CQRS, event-sourcing |
| decision | "decided", "chose", "will use" | Architecture choices |
| blocker | "blocked", "issue", "problem" | Identified obstacles |
Extraction Patterns
Agent Detection
```
(database-engineer|backend-system-architect|frontend-ui-developer|
security-auditor|test-generator|workflow-architect|llm-integrator|
data-pipeline-engineer|[a-z]+-[a-z]+-?[a-z]*)
```

Technology Detection
Known technologies: pgvector, PostgreSQL, FastAPI, SQLAlchemy, React, TypeScript, LangGraph, Redis, Celery, Docker, Kubernetes
Relation Extraction
| Pattern | Relation Type |
|---|---|
| "X uses Y" | uses |
| "X recommends Y" | recommends |
| "X requires Y" | requires |
| "X blocked by Y" | blocked_by |
| "X depends on Y" | depends_on |
| "X for Y" / "X used for Y" | used_for |
Extraction Algorithm
```python
import re

def extract_entities(text):
    entities = []
    relations = []
    lowered = text.lower()
    # 1. Find agents
    for agent in KNOWN_AGENTS:
        if agent in lowered:
            entities.append({"name": agent, "type": "agent"})
    # 2. Find technologies
    for tech in KNOWN_TECHNOLOGIES:
        if tech.lower() in lowered:
            entities.append({"name": tech, "type": "technology"})
    # 3. Extract relations via patterns
    for pattern, relation_type in RELATION_PATTERNS:
        for from_entity, to_entity in re.findall(pattern, text):
            relations.append({
                "from": from_entity,
                "relation": relation_type,
                "to": to_entity,
            })
    return {"entities": entities, "relations": relations}
```

Graph Storage Format
Create entities:

```javascript
mcp__memory__create_entities({
  entities: [
    { name: "database-engineer", entityType: "agent", observations: ["recommends pgvector"] },
    { name: "pgvector", entityType: "technology", observations: ["vector extension for PostgreSQL"] }
  ]
})
```

Create relations:
```javascript
mcp__memory__create_relations({
  relations: [
    { from: "database-engineer", to: "pgvector", relationType: "recommends" }
  ]
})
```

Observation Patterns
When adding observations to existing entities:
```javascript
mcp__memory__add_observations({
  observations: [
    { entityName: "pgvector", contents: ["supports HNSW indexing", "requires PostgreSQL 15+"] }
  ]
})
```

Query Merging
Algorithm for querying and ranking results from the knowledge graph via mcp__memory.
Query Execution
Execute graph query via MCP:
```javascript
// Query knowledge graph (MCP)
mcp__memory__search_nodes({ query })
```

Result Normalization
Transform graph results to unified format:
```json
{
  "id": "graph:{entity_name}",
  "text": "{observations joined}",
  "source": "graph",
  "timestamp": null,
  "relevance": "1.0 for exact match, 0.8 for partial",
  "entities": "[name, related entities]",
  "metadata": { "entityType": "{type}", "relations": [] }
}
```

Deduplication Logic
Calculate similarity using normalized text comparison:
```python
import re

def normalize(text):
    # Normalize: lowercase, strip punctuation, tokenize into a set of words
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(text_a, text_b):
    tokens_a = normalize(text_a)
    tokens_b = normalize(text_b)
    # Jaccard similarity
    intersection = len(tokens_a & tokens_b)
    union = len(tokens_a | tokens_b)
    return intersection / union if union > 0 else 0

# Merge if similarity > 0.85
if similarity(result_a.text, result_b.text) > DEDUP_THRESHOLD:
    merged = merge_results(result_a, result_b)
```

Merge Strategy:
- Keep text from higher-relevance result
- Combine entities from both
- Preserve metadata with `source_*` prefix
- Set `cross_validated: true`
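A minimal sketch of the merge strategy. The `source_a_`/`source_b_` metadata keys are an assumed rendering of the `source_*` prefix convention:

```python
def merge_results(a, b):
    # Keep text (and other fields) from the higher-relevance result
    primary = a if a["relevance"] >= b["relevance"] else b
    merged = dict(primary)
    # Combine entities from both results
    merged["entities"] = sorted(set(a["entities"]) | set(b["entities"]))
    # Preserve both metadata dicts under source_*-prefixed keys
    merged["metadata"] = {
        **{f"source_a_{k}": v for k, v in a["metadata"].items()},
        **{f"source_b_{k}": v for k, v in b["metadata"].items()},
    }
    merged["cross_validated"] = True
    return merged
```

Prefixing keeps both sides' metadata inspectable after the merge instead of letting one side's keys silently overwrite the other's.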
Cross-Reference Boosting
When a result mentions a graph entity found elsewhere in the graph:
```python
for result in graph_results:
    for entity in all_graph_entities:
        if entity.name.lower() in result.text.lower():
            result.relevance *= BOOST_FACTOR  # 1.2x
            result.graph_relations = entity.relations
            result.cross_referenced = True
```

Final Ranking Formula
```python
def compute_score(result):
    # Recency: linear decay over 30 days, floored at 0.1
    age_days = (now - result.timestamp).days
    recency = max(0.1, 1.0 - (age_days / 30))
    # Source authority: cross-validated results rank higher
    authority = 1.3 if result.cross_validated else 1.1
    # Final score: weighted sum per the ranking table
    return (recency * 0.3) + (result.relevance * 0.5) + (authority * 0.2)
```

Output Assembly
```json
{
  "query": "original query",
  "total_results": 4,
  "sources": { "graph": 4 },
  "results": "[sorted by score descending]"
}
```