OrchestKit v6.7.1 — 67 skills, 38 agents, 77 hooks with Opus 4.6 support
Memory Fabric

Knowledge graph memory orchestration - entity extraction, query parsing, deduplication, and cross-reference boosting. Use when designing memory orchestration.

Memory Fabric - Graph Orchestration

Knowledge graph orchestration via mcp__memory__* for entity extraction, query parsing, deduplication, and cross-reference boosting.

Overview

Memory Fabric handles:

  • Comprehensive memory retrieval from the knowledge graph
  • Cross-referencing entities within graph storage
  • Ensuring no relevant memories are missed
  • Building unified context from graph queries

Architecture Overview

┌─────────────────────────────────────────────────────────────┐
│                    Memory Fabric Layer                      │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   ┌─────────────┐              ┌─────────────┐              │
│   │   Query     │              │   Query     │              │
│   │   Parser    │              │   Executor  │              │
│   └──────┬──────┘              └──────┬──────┘              │
│          │                            │                     │
│          ▼                            ▼                     │
│   ┌──────────────────────────────────────────────┐          │
│   │            Graph Query Dispatch              │          │
│   └──────────────────────┬───────────────────────┘          │
│                          │                                  │
│                ┌─────────▼──────────┐                       │
│                │  mcp__memory__*    │                       │
│                │  (Knowledge Graph) │                       │
│                └─────────┬──────────┘                       │
│                          │                                  │
│                          ▼                                  │
│        ┌─────────────────────────────────────────┐          │
│        │        Result Normalizer                │          │
│        └─────────────────────┬───────────────────┘          │
│                              │                              │
│                              ▼                              │
│        ┌─────────────────────────────────────────┐          │
│        │     Deduplication Engine (>85% sim)     │          │
│        └─────────────────────┬───────────────────┘          │
│                              │                              │
│                              ▼                              │
│        ┌─────────────────────────────────────────┐          │
│        │  Cross-Reference Booster                │          │
│        └─────────────────────┬───────────────────┘          │
│                              │                              │
│                              ▼                              │
│        ┌─────────────────────────────────────────┐          │
│        │  Final Ranking: recency × relevance     │          │
│        │                 × source_authority      │          │
│        └─────────────────────────────────────────┘          │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Unified Search Workflow

Step 1: Parse Query

Extract search intent and entity hints from natural language:

Input: "What pagination approach did database-engineer recommend?"

Parsed:
- query: "pagination approach recommend"
- entity_hints: ["database-engineer", "pagination"]
- intent: "decision" or "pattern"
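
The parsing step above can be sketched as a small tokenizer. The keyword tables (`KNOWN_AGENTS`, `INTENT_KEYWORDS`, `STOPWORDS`) are illustrative placeholders, not the skill's actual vocabularies:

```python
import re

# Illustrative vocabularies; the skill's real tables are larger.
KNOWN_AGENTS = {"database-engineer", "backend-system-architect"}
INTENT_KEYWORDS = {
    "decision": {"recommend", "decided", "chose"},
    "pattern": {"approach", "pattern", "design"},
}
STOPWORDS = {"what", "did", "the", "a", "an", "about"}

def parse_query(text: str) -> dict:
    tokens = re.findall(r"[a-z0-9-]+", text.lower())
    # Hyphenated tokens and known agent names become entity hints.
    entity_hints = [t for t in tokens if t in KNOWN_AGENTS or "-" in t]
    intents = [
        intent for intent, kws in INTENT_KEYWORDS.items()
        if any(t in kws for t in tokens)
    ]
    query = " ".join(t for t in tokens if t not in STOPWORDS)
    return {"query": query, "entity_hints": entity_hints, "intent": intents}
```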

Step 2: Execute Graph Query

Query Graph (entity search):

mcp__memory__search_nodes({
  query: "pagination database-engineer"
})

Step 3: Normalize Results

Transform results to common format:

{
  "id": "graph:original_id",
  "text": "content text",
  "source": "graph",
  "timestamp": "ISO8601",
  "relevance": 0.0-1.0,
  "entities": ["entity1", "entity2"],
  "metadata": {}
}

Step 4: Deduplicate (>85% Similarity)

When two results have >85% text similarity:

  1. Keep the one with higher relevance score
  2. Merge metadata
  3. Mark as "cross-validated" for authority boost

Step 5: Cross-Reference Boost

If a result mentions an entity that exists elsewhere in the graph:

  • Boost relevance score by 1.2x
  • Add graph relationships to result metadata

Step 6: Final Ranking

Score = recency_factor × relevance × source_authority

Factor             Weight   Description
recency            0.3      Newer memories rank higher
relevance          0.5      Semantic match quality
source_authority   0.2      Graph entities boost, cross-validated boost

Result Format

{
  "query": "original query",
  "total_results": 4,
  "sources": {
    "graph": 4
  },
  "results": [
    {
      "id": "graph:cursor-pagination",
      "text": "Use cursor-based pagination for scalability",
      "score": 0.92,
      "source": "graph",
      "timestamp": "2026-01-15T10:00:00Z",
      "entities": ["cursor-pagination", "database-engineer"],
      "graph_relations": [
        { "from": "database-engineer", "relation": "recommends", "to": "cursor-pagination" }
      ]
    }
  ]
}

Entity Extraction

Memory Fabric extracts entities from natural language for graph storage:

Input: "database-engineer uses pgvector for RAG applications"

Extracted:
- Entities:
  - { name: "database-engineer", type: "agent" }
  - { name: "pgvector", type: "technology" }
  - { name: "RAG", type: "pattern" }
- Relations:
  - { from: "database-engineer", relation: "uses", to: "pgvector" }
  - { from: "pgvector", relation: "used_for", to: "RAG" }

See references/entity-extraction.md for detailed extraction patterns.

Graph Relationship Traversal

Memory Fabric supports multi-hop graph traversal for complex relationship queries.

Example: Multi-Hop Query

Query: "What did database-engineer recommend about pagination?"

1. Search for "database-engineer pagination"
   → Find entity: "database-engineer recommends cursor-pagination"

2. Traverse related entities (depth 2)
   → Traverse: database-engineer → recommends → cursor-pagination
   → Find: "cursor-pagination uses offset-based approach"

3. Return results with relationship context
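
The three steps above can be sketched as a breadth-first expansion. The `mcp` client object and its `search_nodes`/`open_nodes` methods are a hypothetical wrapper around the memory MCP tools, shown only to illustrate the control flow:

```python
def traverse(mcp, query: str, depth: int = 2) -> list[dict]:
    # Step 1: initial entity search.
    results = mcp.search_nodes(query=query)["entities"]
    seen = {e["name"] for e in results}
    frontier = list(seen)
    # Step 2: follow relations out of the frontier, one hop per iteration.
    for _ in range(depth):
        related = [
            r["to"] for e in results for r in e.get("relations", [])
            if r["from"] in frontier and r["to"] not in seen
        ]
        if not related:
            break
        expanded = mcp.open_nodes(names=related)["entities"]
        results.extend(expanded)
        seen.update(e["name"] for e in expanded)
        frontier = related
    # Step 3: results now carry relationship context from every hop.
    return results
```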

Integration with Graph Memory

Memory Fabric uses the knowledge graph for entity relationships:

  1. Graph search via mcp__memory__search_nodes finds matching entities
  2. Graph traversal expands context via entity relationships
  3. Cross-reference boosts relevance when entities match

Integration Points

With the memory Skill

When memory search runs, it can optionally use Memory Fabric for unified results.

With Hooks

  • prompt/memory-fabric-context.sh - Inject unified context at session start
  • stop/memory-fabric-sync.sh - Sync entities to graph at session end

Configuration

# Environment variables
MEMORY_FABRIC_DEDUP_THRESHOLD=0.85    # Similarity threshold for merging
MEMORY_FABRIC_BOOST_FACTOR=1.2        # Cross-reference boost multiplier
MEMORY_FABRIC_MAX_RESULTS=20          # Max results per source
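
A minimal sketch of reading these tunables, falling back to the documented defaults when the variables are unset:

```python
import os

# Defaults mirror the documented values above.
DEDUP_THRESHOLD = float(os.environ.get("MEMORY_FABRIC_DEDUP_THRESHOLD", "0.85"))
BOOST_FACTOR = float(os.environ.get("MEMORY_FABRIC_BOOST_FACTOR", "1.2"))
MAX_RESULTS = int(os.environ.get("MEMORY_FABRIC_MAX_RESULTS", "20"))
```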

MCP Requirements

Required: a knowledge graph MCP server:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@anthropic/memory-mcp-server"]
    }
  }
}

Error Handling

Scenario            Behavior
Graph unavailable   Error: the graph is required
Empty query         Return recent memories from the graph

Related Skills

  • ork:memory - User-facing memory operations (search, load, sync, viz)
  • ork:remember - User-facing memory storage
  • caching - Caching layer that can use fabric
Key Decisions

Decision          Choice        Rationale
Dedup threshold   85%           Balances catching duplicates vs. preserving nuance
Parallel queries  Always        Reduces latency; queries are independent
Cross-ref boost   1.2x          Validated info is more trustworthy but should not dominate
Ranking weights   0.3/0.5/0.2   Relevance most important, recency secondary

References (2)

Entity Extraction

Extract entities from natural language for graph memory storage.

Entity Types

Type         Pattern                          Examples
agent        OrchestKit agent names           database-engineer, backend-system-architect
technology   Known tech keywords              pgvector, FastAPI, PostgreSQL, React
pattern      Design/architecture patterns     cursor-pagination, CQRS, event-sourcing
decision     "decided", "chose", "will use"   Architecture choices
blocker      "blocked", "issue", "problem"    Identified obstacles

Extraction Patterns

Agent Detection

(database-engineer|backend-system-architect|frontend-ui-developer|
 security-auditor|test-generator|workflow-architect|llm-integrator|
 data-pipeline-engineer|[a-z]+-[a-z]+-?[a-z]*)

Technology Detection

Known technologies: pgvector, PostgreSQL, FastAPI, SQLAlchemy, React, TypeScript, LangGraph, Redis, Celery, Docker, Kubernetes

Relation Extraction

Pattern                      Relation Type
"X uses Y"                   uses
"X recommends Y"             recommends
"X requires Y"               requires
"X blocked by Y"             blocked_by
"X depends on Y"             depends_on
"X for Y" / "X used for Y"   used_for

Extraction Algorithm

import re

# KNOWN_AGENTS, KNOWN_TECHNOLOGIES, and RELATION_PATTERNS are the skill's
# lookup tables: agent names, tech keywords, and (regex, relation) pairs.
def extract_entities(text):
    entities = []
    relations = []
    lowered = text.lower()

    # 1. Find agents
    for agent in KNOWN_AGENTS:
        if agent in lowered:
            entities.append({"name": agent, "type": "agent"})

    # 2. Find technologies
    for tech in KNOWN_TECHNOLOGIES:
        if tech.lower() in lowered:
            entities.append({"name": tech, "type": "technology"})

    # 3. Extract relations via patterns
    for pattern, relation_type in RELATION_PATTERNS:
        for from_entity, to_entity in re.findall(pattern, text):
            relations.append({
                "from": from_entity,
                "relation": relation_type,
                "to": to_entity,
            })

    return {"entities": entities, "relations": relations}

Graph Storage Format

Create entities:

mcp__memory__create_entities({
  entities: [
    { name: "database-engineer", entityType: "agent", observations: ["recommends pgvector"] },
    { name: "pgvector", entityType: "technology", observations: ["vector extension for PostgreSQL"] }
  ]
})

Create relations:

mcp__memory__create_relations({
  relations: [
    { from: "database-engineer", to: "pgvector", relationType: "recommends" }
  ]
})

Observation Patterns

When adding observations to existing entities:

mcp__memory__add_observations({
  observations: [
    { entityName: "pgvector", contents: ["supports HNSW indexing", "requires PostgreSQL 15+"] }
  ]
})

Query Merging

Algorithm for querying and ranking results from the knowledge graph via mcp__memory.

Query Execution

Execute graph query via MCP:

# Query knowledge graph (MCP)
mcp__memory__search_nodes({ query })

Result Normalization

Transform graph results to unified format:

{
  "id": "graph:{entity_name}",
  "text": "{observations joined}",
  "source": "graph",
  "timestamp": "null",
  "relevance": "1.0 for exact match, 0.8 for partial",
  "entities": "[name, related entities]",
  "metadata": { "entityType": "{type}", "relations": [] }
}
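
The mapping above can be sketched as a single function. The raw entity shape (`name`, `entityType`, `observations`, `relations`) mirrors the memory MCP server's results, but the exact field names are an assumption here:

```python
def normalize_graph_result(entity: dict, query_terms: set) -> dict:
    """Map a raw graph entity into the unified result format."""
    name = entity["name"]
    exact = name.lower() in query_terms
    return {
        "id": f"graph:{name}",
        # Observations joined into a single searchable text field.
        "text": " ".join(entity.get("observations", [])),
        "source": "graph",
        "timestamp": None,  # graph entities carry no timestamp
        "relevance": 1.0 if exact else 0.8,
        "entities": [name] + [r["to"] for r in entity.get("relations", [])],
        "metadata": {
            "entityType": entity.get("entityType"),
            "relations": entity.get("relations", []),
        },
    }
```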

Deduplication Logic

Calculate similarity using normalized text comparison:

import re

DEDUP_THRESHOLD = 0.85

def normalize(text):
    # Lowercase, strip punctuation, tokenize into a set of words
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(text_a, text_b):
    tokens_a = normalize(text_a)
    tokens_b = normalize(text_b)

    # Jaccard similarity: shared tokens over all tokens
    intersection = len(tokens_a & tokens_b)
    union = len(tokens_a | tokens_b)
    return intersection / union if union > 0 else 0

# Merge if similarity exceeds the threshold
if similarity(result_a.text, result_b.text) > DEDUP_THRESHOLD:
    merged = merge_results(result_a, result_b)

Merge Strategy:

  1. Keep text from higher-relevance result
  2. Combine entities from both
  3. Preserve metadata with source_* prefix
  4. Set cross_validated: true
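
The four-step merge strategy above can be sketched as follows; the result dicts use the unified format's `relevance`, `entities`, and `metadata` fields:

```python
def merge_results(a: dict, b: dict) -> dict:
    """Merge two near-duplicate results per the strategy above."""
    # 1. Keep text (and other fields) from the higher-relevance result.
    primary, secondary = (a, b) if a["relevance"] >= b["relevance"] else (b, a)
    merged = dict(primary)
    # 2. Combine entities from both, preserving order without duplicates.
    merged["entities"] = list(dict.fromkeys(primary["entities"] + secondary["entities"]))
    # 3. Preserve the losing result's metadata under a source_* prefix.
    merged["metadata"] = {
        **primary.get("metadata", {}),
        **{f"source_{k}": v for k, v in secondary.get("metadata", {}).items()},
    }
    # 4. Mark for the authority boost in final ranking.
    merged["cross_validated"] = True
    return merged
```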

Cross-Reference Boosting

When a result mentions a graph entity found elsewhere in the graph:

for result in graph_results:
    for entity in all_graph_entities:
        if entity.name.lower() in result.text.lower():
            result.relevance *= BOOST_FACTOR  # 1.2x
            result.graph_relations = entity.relations
            result.cross_referenced = True

Final Ranking Formula

# `now` is the current datetime at query time.
def compute_score(result):
    # Recency: linear decay over 30 days, floored at 0.1
    age_days = (now - result.timestamp).days
    recency = max(0.1, 1.0 - (age_days / 30))

    # Source authority: cross-validated results outrank plain graph hits
    authority = 1.3 if result.cross_validated else 1.1

    # Final score: the 0.3 / 0.5 / 0.2 weights from the ranking table
    return (recency * 0.3) + (result.relevance * 0.5) + (authority * 0.2)

Output Assembly

{
  "query": "original query",
  "total_results": 4,
  "sources": { "graph": 4 },
  "results": "[sorted by score descending]"
}