η Proof Store — Algorithm Extraction

⚠ HERITAGE EXTRACTION — donor AMS η Proof Store (Wave 8 quarantine)

This file extracts the donor AMS Merkle/retention surface from src/controllers/merkle.js and src/domains/retention/ (both deleted R53). The merkle_* and memory_* tool families are donor accretion. Phase 0 Colibri ships exactly 6 ζ/η tools (3 audit + 3 merkle, per ADR-004 R74.5) targeting src/domains/merkle/ (P0.8). The Phase 0 truth lives in ../../concepts/η-proof-store.md. The load-bearing ordering rule still applies: the final thought_record reflection MUST precede merkle_finalize.

Read this file as donor genealogy only.

Algorithmic content extracted from AMS src/controllers/merkle.js and src/domains/retention/ for Colibri implementation reference.

Merkle Tree Construction

The tree is built incrementally via merkle_attest calls. Leaves are added one at a time; internal nodes and root are recomputed on each addition.

Leaf and Internal Node Hashing

Leaf node:
  leaf_hash = SHA256(operation_content)

Internal node:
  node_hash = SHA256(left_child_hash + right_child_hash)
  (string concatenation of two hex-encoded SHA-256 digests)

Root:
  root_hash = node_hash at the apex of the fully balanced tree
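These hashing conventions can be sketched in Python (a non-authoritative sketch; the donor code is JavaScript, but the hex-string concatenation rule is the one stated above):

```python
import hashlib

def leaf_hash(operation_content: str) -> str:
    # Leaf: SHA-256 of the raw operation content, hex-encoded.
    return hashlib.sha256(operation_content.encode()).hexdigest()

def node_hash(left_hex: str, right_hex: str) -> str:
    # Internal node: SHA-256 over the string concatenation of the
    # two hex-encoded child digests (not their raw bytes).
    return hashlib.sha256((left_hex + right_hex).encode()).hexdigest()
```

Note that hashing the concatenated hex strings, rather than the decoded bytes, is what makes the digests reproducible from the stored hex values alone.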

Tree Structure

         root_hash
        /         \
    hash_AB      hash_CD
    /    \       /    \
hash_A hash_B hash_C hash_D    ← leaf hashes (individual operations)

If the leaf count is odd, the last leaf is duplicated to complete the level (standard Merkle padding).

Incremental Append Algorithm

function appendLeaf(operation_content):
  leaf_hash = SHA256(operation_content)
  
  # Store leaf
  db.insert('mcp_merkle', { hash: leaf_hash, type: 'leaf', position: next_position })
  
  # Recompute path to root
  current = leaf_hash
  level = 0
  while level < tree_depth:
    sibling = getSibling(current, level)
    if is_left_child(current, level):
      parent_hash = SHA256(current + sibling)
    else:
      parent_hash = SHA256(sibling + current)
    
    db.upsert('mcp_merkle', { hash: parent_hash, level: level+1 })
    current = parent_hash
    level++
  
  root = current
  db.update('mcp_merkle_root', { hash: root })
  return leaf_hash

Inclusion Proof Generation

merkle_proof(leaf_hash) generates the sibling path from leaf to root:

function generateProof(leaf_hash):
  path = []
  current = leaf_hash
  level = 0

  while level < tree_depth:
    sibling = getSibling(current, level)
    position = "right" if is_left_child(current, level) else "left"
    path.append({ hash: sibling, position: position })
    current = getParent(current, level)
    level++

  return {
    leaf_hash: leaf_hash,
    proof: path,
    root: current  # final root hash
  }

Proof Format

{
  "leaf_hash": "abc123...",
  "proof": [
    { "hash": "def456...", "position": "right" },
    { "hash": "789abc...", "position": "left"  }
  ],
  "root": "final_root_hash..."
}

position indicates where the sibling sits relative to the current node during verification recomputation.

Proof Verification Algorithm

merkle_verify(leaf_hash, proof, root):

function verifyProof(leaf_hash, proof, expected_root):
  current = leaf_hash

  for step in proof:
    if step.position == "right":
      # current is left child, sibling is right
      combined = current + step.hash
    else:
      # current is right child, sibling is left
      combined = step.hash + current
    
    current = SHA256(combined)

  return current == expected_root

Returns true if the leaf is proven to belong to the tree with that root; returns false if the root recomputed from the sibling path does not match expected_root.

Memory Packing Algorithm

memory_pack(session_id) compresses working memory into the proof store:

1. SERIALIZE
   entries = db.query("SELECT * FROM memory_short_term WHERE session_id = ?")
   serialized = JSON.stringify(entries)

2. HASH
   For each entry:
     leaf_hash = SHA256(JSON.stringify(entry))
   
   Create subtree from entry leaf hashes

3. ATTACH
   appendLeaf(SHA256(serialized))  — full session as single leaf
   OR
   For each entry: appendLeaf(entry_hash)  — individual leaves

4. DELETE RAW
   db.delete("DELETE FROM memory_short_term WHERE session_id = ?")
   (Packed data is now in the Merkle tree; raw entries removed to save space)

5. RECORD
   thought_record({ type: "reflection", content: "Memory packed for session X. N entries." })

After packing: session data is compressed from N rows of memory_short_term to leaf hashes in mcp_merkle. The original content is still provable via inclusion proofs but the raw text is gone.
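Steps 1–3 can be sketched as a pure function (a sketch only: the entry shape is hypothetical, and sort_keys=True is an assumption for determinism — the donor's JSON.stringify preserves insertion order instead):

```python
import hashlib
import json

def pack_session(entries: list[dict]) -> tuple[str, list[str]]:
    # Step 1: serialize the whole session. sort_keys gives a
    # deterministic encoding (assumption, not donor behavior).
    serialized = json.dumps(entries, sort_keys=True)
    # Step 2: hash each entry into a candidate leaf.
    entry_leaves = [
        hashlib.sha256(json.dumps(e, sort_keys=True).encode()).hexdigest()
        for e in entries
    ]
    # Step 3: the full session as a single leaf (the "OR" branch
    # would attach entry_leaves individually instead).
    session_leaf = hashlib.sha256(serialized.encode()).hexdigest()
    return session_leaf, entry_leaves
```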

Retention Zones

Three active zones, managed by src/domains/retention/:

Zone    TTL         Content                                                          Status
Hot     7 days      Active session data, recent thought records, current task state  Running
Warm    30 days     Completed tasks, finalized proof trees, packed memory            Running
Cold    365 days    Archived sessions, historical Merkle roots                       Running
Frozen  Indefinite  Finalized roots (hash-only, content pruned)                      Spec-only — NOT operational

TTL-Based Zone Transition Logic

function runRetentionPass():
  now = current_timestamp

  # Hot → Warm
  hot_items = db.query(
    "SELECT * FROM memory_short_term WHERE last_accessed < ?",
    now - 7_days
  )
  for item in hot_items:
    move_to_warm(item)

  # Warm → Cold
  warm_items = db.query(
    "SELECT * FROM memory_long_term WHERE last_accessed < ? AND zone = 'warm'",
    now - 30_days
  )
  for item in warm_items:
    move_to_cold(item)

  # Cold retention (365 days, then purge or archive)
  cold_items = db.query(
    "SELECT * FROM memory_long_term WHERE last_accessed < ? AND zone = 'cold'",
    now - 365_days
  )
  for item in cold_items:
    # Frozen tier would go here — NOT IMPLEMENTED
    # Current: log warning, await manual decision
    log_expiry_warning(item)

Demotions run in the periodic retention pass; promotions are triggered automatically on read access, so reading a Cold item can promote it back to Warm. The demotion direction is always toward cooler zones: older, less-accessed items move from Hot to Warm to Cold.

Frozen tier note: The design calls for pruning Cold content to hash-only archival (Frozen). This has no runtime implementation — no table, no migration, no domain code. Do not reference Frozen as an active storage zone.
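The demotion rule from the retention pass reduces to a pure function over the zone table (names are illustrative; per the note above, Frozen is deliberately absent):

```python
from datetime import datetime, timedelta

# TTLs per the zone table; "frozen" is spec-only and intentionally missing.
ZONE_TTL = {"hot": timedelta(days=7),
            "warm": timedelta(days=30),
            "cold": timedelta(days=365)}
DEMOTION = {"hot": "warm", "warm": "cold"}

def next_zone(zone: str, last_accessed: datetime, now: datetime) -> str:
    # Cold items past TTL only log an expiry warning today (no Frozen
    # tier exists), so they stay cold.
    if zone in DEMOTION and now - last_accessed > ZONE_TTL[zone]:
        return DEMOTION[zone]
    return zone
```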

Finalization Workflow

Canonical sequence for sealing a work session:

1. Write final thought_record
   thought_record({ type: "reflection", content: "Session handoff summary..." })
   RULE: This MUST be the last leaf before finalization.

2. Finalize tree
   merkle_finalize()
   → Locks tree (no more merkle_attest calls accepted)
   → Computes and persists final root hash
   → Status: immutable

3. Retrieve root
   root = merkle_root()
   → Returns the immutable root hash string

4. Anchor
   The root hash is stored in audit_log as the session's proof anchor.
   Optional: attach to external audit chain.
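The sealing rules above — reflection last, then an immutable tree — can be sketched as a tiny state machine. ProofSession and its method names are hypothetical illustrations, not the real merkle_* tool surface:

```python
class ProofSession:
    """Sketch of the sealing rules: the final thought_record
    reflection MUST be the last leaf before finalization, and a
    finalized tree rejects further attestations."""

    def __init__(self):
        self.leaves = []
        self.finalized = False
        self._last_kind = None

    def attest(self, content: str, kind: str = "operation"):
        # Finalized trees accept no more merkle_attest calls.
        if self.finalized:
            raise RuntimeError("tree is finalized; attestation rejected")
        self.leaves.append(content)
        self._last_kind = kind

    def finalize(self):
        # The load-bearing ordering rule from the heritage note.
        if self._last_kind != "reflection":
            raise RuntimeError("final reflection must precede finalize")
        self.finalized = True
```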

Tool Surface (src/controllers/merkle.js)

Tool             What it does
merkle_attest    Add operation(s) to the tree as new leaf nodes
merkle_finalize  Lock the tree; root becomes immutable
merkle_root      Return the current root hash
merkle_proof     Generate inclusion proof for a leaf
merkle_verify    Given leaf + proof, verify membership in tree
merkle_audit     Summarize tree: leaf count, root, finalization status

Non-existent tools: merkle_build and merkle_verify_static do NOT exist in the codebase. The tree grows via merkle_attest incrementally.

See Also

  • [[concepts/η-proof-store η Proof Store]] — concept overview
  • [[extractions/zeta-decision-trail-extraction ζ Decision Trail Extraction]] — thought chains that feed the tree
  • [[architecture/database Database Architecture]] — mcp_merkle table schema
  • [[guides/tutorials/merkle-audit Merkle Audit Tutorial]] — end-to-end finalization workflow


Colibri — documentation-first MCP runtime. Apache 2.0 + Commons Clause.
