P0.7.1 — Step 3 Execution Packet

Approved execution plan for P0.7.1 ζ Hash-Chained Record Schema. This packet gates Step 4 implementation; deviations require an amended packet.


§1. Files to author

1a. src/domains/trail/schema.ts — new module

Est. ~150-180 LOC (including TSDoc). Structure:

1-20    Module header comment: identity, references, consumers, purity statement.
22-24   Imports: z from 'zod', createHash from 'node:crypto'.
26-40   THOUGHT_TYPES tuple + TSDoc.
42-52   ThoughtType type + TSDoc.
54-60   ZERO_HASH constant + TSDoc.
62-85   ThoughtRecordSchema + TSDoc (invariant notes: shape-only, not hash-validating).
87-92   ThoughtRecord type (inferred).
94-140  canonicalize(value) function + TSDoc.
142-180 computeHash(record) function + TSDoc.

Imports: only zod and node:crypto. No project-internal imports. No side-effect imports.

1b. src/__tests__/trail-schema.test.ts — new test file

Est. ~280-320 LOC. Structure:

1-20    File header + imports.
22-40   describe('THOUGHT_TYPES + ThoughtType'): 2-3 tests.
42-80   describe('ZERO_HASH'): 3 tests.
82-160  describe('ThoughtRecordSchema'): ~12 tests.
162-240 describe('canonicalize'): ~10 tests.
242-320 describe('computeHash'): ~14 tests.

Total tests: 30+ (contract mandates 25+; packet targets 30 for safety margin).

1c. No changes to other files

  • package.json unchanged.
  • jest.config.ts unchanged.
  • tsconfig.json unchanged.
  • src/server.ts unchanged.
  • src/config.ts unchanged.
  • src/modes.ts unchanged.
  • src/db/* unchanged.
  • No new dependencies.

§2. File skeleton — src/domains/trail/schema.ts

/**
 * Colibri — Phase 0 ζ Decision Trail: hash-chained record schema.
 *
 * Pure primitives for the Phase 0 ζ Decision Trail surface:
 *   - THOUGHT_TYPES tuple + ThoughtType union (4 types: plan, analysis,
 *     decision, reflection — canonical order)
 *   - ZERO_HASH constant (64 zeros, for genesis records)
 *   - ThoughtRecordSchema (Zod object validating record shape only)
 *   - ThoughtRecord type (inferred)
 *   - canonicalize(value) — deterministic sorted-key JSON serialization
 *   - computeHash(record) — SHA-256 over canonical-JSON of 6 subset fields
 *
 * Hash input is a SUBSET of the record fields:
 *   {id, type, task_id, content, timestamp, prev_hash}
 * The full record also has `agent_id` (author metadata, not chain-integrity)
 * and `hash` (the output itself). Both are EXCLUDED from the hash input.
 *
 * Canonical references:
 *   - docs/guides/implementation/task-breakdown.md § P0.7.1
 *   - docs/audits/p0-7-1-trail-schema-audit.md
 *   - docs/contracts/p0-7-1-trail-schema-contract.md
 *   - docs/reference/extractions/zeta-decision-trail-extraction.md (donor ref)
 *
 * Consumed by (future):
 *   - P0.7.2 — src/domains/trail/repository.ts + `thought_record` MCP tool
 *   - P0.7.3 — src/domains/trail/verifier.ts + `audit_verify_chain` MCP tool
 *
 * This module is pure. It has no eager side-effects at import time and
 * no runtime state. Every call to computeHash redoes the hash from scratch.
 */

import { createHash } from 'node:crypto';
import { z } from 'zod';

/**
 * The four valid Phase-0 ζ thought types, in canonical order.
 *   - `plan`       — authored before work; intent to do X then Y then Z.
 *   - `analysis`   — authored during work; examining facts, code, options.
 *   - `decision`   — authored at a choice point; "chose A over B because C".
 *   - `reflection` — authored after work; lessons, outcomes, blockers.
 *
 * `observation` and `hypothesis` (donor AMS candidates) are NOT valid — use
 * `analysis` for recorded observations. Order is iterated by callers; adding
 * a new type is a breaking change.
 */
export const THOUGHT_TYPES = ['plan', 'analysis', 'decision', 'reflection'] as const;

/** A valid thought-record type. Derived from THOUGHT_TYPES. */
export type ThoughtType = (typeof THOUGHT_TYPES)[number];

/**
 * Genesis `prev_hash` — exactly 64 ASCII `'0'` characters.
 *
 * The first record in any chain uses this as its `prev_hash`. Every
 * subsequent record's `prev_hash` is the previous record's `hash` value.
 *
 * Lowercase hex, to match the SHA-256 digest output of `computeHash`.
 */
export const ZERO_HASH: string = '0'.repeat(64);

/**
 * Zod schema for a thought record. Validates SHAPE only — does NOT verify
 * that `hash` was correctly computed from the other fields. That is the
 * job of the P0.7.3 `audit_verify_chain` tool.
 *
 * Field invariants:
 *   - id, task_id, agent_id, timestamp — non-empty strings.
 *   - content — string (empty allowed; a zero-content thought is valid
 *     if unusual).
 *   - type — one of THOUGHT_TYPES.
 *   - prev_hash, hash — exactly 64 characters. Format (hex/base64/...) is
 *     NOT enforced at this layer.
 */
export const ThoughtRecordSchema = z.object({
  id: z.string().min(1),
  type: z.enum(THOUGHT_TYPES),
  task_id: z.string().min(1),
  agent_id: z.string().min(1),
  content: z.string(),
  timestamp: z.string().min(1),
  prev_hash: z.string().length(64),
  hash: z.string().length(64),
});

/** The validated shape of a thought record. Inferred from ThoughtRecordSchema. */
export type ThoughtRecord = z.infer<typeof ThoughtRecordSchema>;

/**
 * Deterministic canonical-JSON serialization.
 *
 * Recursively sorts object keys (ASCII-ascending) at every nesting depth,
 * then serializes with no whitespace. Arrays preserve insertion order.
 * Primitives match `JSON.stringify` output.
 *
 * Deterministic across platforms and Node versions: two calls on JSON-equal
 * inputs (possibly with different object-literal author order) produce
 * byte-identical strings.
 *
 * Throws `TypeError` on circular references, BigInt values, or values that
 * `JSON.stringify` cannot serialize.
 *
 * This function is pure. It does not mutate the input.
 */
export function canonicalize(value: unknown): string {
  return JSON.stringify(sortValue(value));
}

/**
 * Recursively rebuild `value` with plain-object keys sorted ascending.
 * Returns a structurally-equivalent tree. Arrays preserve order; primitives
 * pass through. Plain objects are rebuilt in sorted-key order.
 *
 * Tracks the current ancestor path in a WeakSet so a genuine cycle throws
 * `TypeError` (matching `JSON.stringify`) instead of overflowing the stack
 * with a `RangeError` before `JSON.stringify` ever runs. Repeated
 * non-circular references to the same object remain legal.
 */
function sortValue(value: unknown, seen = new WeakSet<object>()): unknown {
  if (value === null || typeof value !== 'object') {
    return value;
  }
  if (seen.has(value)) {
    throw new TypeError('Converting circular structure to JSON');
  }
  seen.add(value);
  let result: unknown;
  if (Array.isArray(value)) {
    result = value.map((item) => sortValue(item, seen));
  } else {
    const obj = value as Record<string, unknown>;
    const sorted: Record<string, unknown> = {};
    for (const key of Object.keys(obj).sort()) {
      sorted[key] = sortValue(obj[key], seen);
    }
    result = sorted;
  }
  seen.delete(value);
  return result;
}

/**
 * Compute a thought record's `hash` field.
 *
 * Input is the SUBSET of record fields that participate in the chain:
 * `{id, type, task_id, content, timestamp, prev_hash}`. `agent_id` and
 * `hash` are EXCLUDED by design (see the audit §4 for the rationale).
 *
 * Output is a 64-character lowercase-hex SHA-256 digest of the canonical-
 * JSON serialization of the subset.
 *
 * Deterministic: two calls on records with the same 6 subset values
 * produce identical output, regardless of object-literal author order.
 *
 * Callers that want shape validation should run `ThoughtRecordSchema.parse`
 * first; `computeHash` does not validate its input.
 */
export function computeHash(record: {
  id: string;
  type: ThoughtType;
  task_id: string;
  content: string;
  timestamp: string;
  prev_hash: string;
}): string {
  const subset = {
    id: record.id,
    type: record.type,
    task_id: record.task_id,
    content: record.content,
    timestamp: record.timestamp,
    prev_hash: record.prev_hash,
  };
  const canonical = canonicalize(subset);
  return createHash('sha256').update(canonical, 'utf8').digest('hex');
}

The private sortValue helper is a deliberate design choice over a stringifying replacer: a recursive rebuild yields clear types, is easier to test (via the behavioral output of canonicalize), and avoids the replacer function’s reentrancy semantics around toJSON.
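To show the §2 primitives working together — ZERO_HASH seeding a genesis record, each successor carrying its predecessor's hash as `prev_hash` — here is a standalone sketch. The primitives are reproduced inline (non-circular inputs only) so it runs without the module existing yet; the record values are hypothetical fixtures, not project data.

```typescript
// Sketch: chaining two records with the §2 primitives (inlined copies).
import { createHash } from 'node:crypto';

const ZERO_HASH = '0'.repeat(64);

// Sorted-key rebuild, as in §2 (cycle handling omitted for brevity).
function sortValue(value: unknown): unknown {
  if (value === null || typeof value !== 'object') return value;
  if (Array.isArray(value)) return value.map((v) => sortValue(v));
  const obj = value as Record<string, unknown>;
  const sorted: Record<string, unknown> = {};
  for (const key of Object.keys(obj).sort()) sorted[key] = sortValue(obj[key]);
  return sorted;
}

interface HashInput {
  id: string; type: string; task_id: string;
  content: string; timestamp: string; prev_hash: string;
}

const computeHash = (r: HashInput): string =>
  createHash('sha256').update(JSON.stringify(sortValue(r)), 'utf8').digest('hex');

// Genesis record: prev_hash is ZERO_HASH.
const genesisInput = {
  id: 'r1', type: 'plan', task_id: 't1',
  content: 'hello', timestamp: '2026-04-17T00:00:00Z', prev_hash: ZERO_HASH,
};
const genesisHash = computeHash(genesisInput);

// Second record: prev_hash is the previous record's hash — the chain link.
const secondInput = {
  id: 'r2', type: 'analysis', task_id: 't1',
  content: 'world', timestamp: '2026-04-17T00:01:00Z', prev_hash: genesisHash,
};
const secondHash = computeHash(secondInput);
```

This is the linkage the P0.7.2 repository is expected to maintain and the P0.7.3 verifier to walk.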


§3. Test matrix — src/__tests__/trail-schema.test.ts

Test count target: 30+. All contract tests from §8 of the contract are included; packet adds edge cases.

3a. describe('THOUGHT_TYPES + ThoughtType') — 2 tests

  1. has exactly 4 entries in canonical order — expect(THOUGHT_TYPES).toEqual(['plan', 'analysis', 'decision', 'reflection']).
  2. is a readonly tuple — expect(Object.isFrozen(THOUGHT_TYPES)).toBe(false) is NOT asserted (a tuple literal `as const` is type-level readonly, not runtime-frozen); instead, assert the tuple passes through to Zod cleanly: z.enum(THOUGHT_TYPES).safeParse('plan').success === true.

3b. describe('ZERO_HASH') — 3 tests

  1. has length 64 — expect(ZERO_HASH.length).toBe(64).
  2. is exactly 64 zero characters — expect(ZERO_HASH).toMatch(/^0{64}$/).
  3. has the expected literal value — expect(ZERO_HASH).toBe('0000000000000000000000000000000000000000000000000000000000000000').

3c. describe('ThoughtRecordSchema') — 12 tests

Fixture: const VALID = { id: 'r1', type: 'plan', task_id: 't1', agent_id: 'a1', content: 'hello', timestamp: '2026-04-17T00:00:00Z', prev_hash: ZERO_HASH, hash: 'a'.repeat(64) };

  1. accepts a valid record and returns it unchanged — expect(schema.parse(VALID)).toEqual(VALID).
  2. rejects missing id — safeParse({...VALID, id: undefined}).success === false.
  3. rejects missing type — same pattern.
  4. rejects missing task_id — same.
  5. rejects missing agent_id — same.
  6. rejects missing content — same.
  7. rejects missing timestamp — same.
  8. rejects missing prev_hash — same.
  9. rejects missing hash — same.
  10. rejects invalid type value 'observation' — safeParse({...VALID, type: 'observation'}).success === false.
  11. rejects 63-char prev_hash — '0'.repeat(63) fails.
  12. rejects 65-char hash — 'a'.repeat(65) fails.

Plus a describe('empty-string behavior') with 1 test: content may be ''. This is an inclusion test, not a rejection test — VALID with content: '' parses. That is test 13.

3d. describe('canonicalize') — 10 tests

  1. serializes a number primitive — canonicalize(42) === '42'.
  2. serializes a string primitive — canonicalize('hi') === '"hi"'.
  3. serializes a boolean primitive (true and false) — two assertions.
  4. serializes null — canonicalize(null) === 'null'.
  5. sorts object keys ascending — canonicalize({b: 1, a: 2}) === '{"a":2,"b":1}'.
  6. recurses into nested objects — canonicalize({b: {d: 1, c: 2}, a: 3}) === '{"a":3,"b":{"c":2,"d":1}}'.
  7. two objects with same keys in different insertion orders produce identical output — canonicalize({b:1,a:2}) === canonicalize({a:2,b:1}).
  8. preserves array insertion order — canonicalize([3, 1, 2]) === '[3,1,2]'.
  9. skips undefined values in objects (native JSON.stringify behavior) — canonicalize({a: 1, b: undefined, c: 3}) === '{"a":1,"c":3}'.
  10. throws TypeError on circular reference — build const o: any = {}; o.self = o; then expect(() => canonicalize(o)).toThrow(TypeError).
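The determinism assertions in this matrix (3d.5 through 3d.9) can be spot-checked against a standalone copy of the §2 canonicalizer. The copy below handles non-circular inputs only, to keep the sketch short:

```typescript
// Inline copy of the §2 sorted-key canonicalizer (no cycle handling).
function sortValue(value: unknown): unknown {
  if (value === null || typeof value !== 'object') return value;
  if (Array.isArray(value)) return value.map((v) => sortValue(v));
  const obj = value as Record<string, unknown>;
  const sorted: Record<string, unknown> = {};
  for (const key of Object.keys(obj).sort()) sorted[key] = sortValue(obj[key]);
  return sorted;
}
const canonicalize = (value: unknown): string => JSON.stringify(sortValue(value));

// 3d.5 / 3d.7 — author order never matters:
canonicalize({ b: 1, a: 2 }); // '{"a":2,"b":1}'
canonicalize({ a: 2, b: 1 }); // '{"a":2,"b":1}'
// 3d.6 — keys sorted at every nesting depth:
canonicalize({ b: { d: 1, c: 2 }, a: 3 }); // '{"a":3,"b":{"c":2,"d":1}}'
// 3d.8 — arrays keep insertion order:
canonicalize([3, 1, 2]); // '[3,1,2]'
// 3d.9 — undefined object values are dropped, as in plain JSON.stringify:
canonicalize({ a: 1, b: undefined, c: 3 }); // '{"a":1,"c":3}'
```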

3e. describe('computeHash') — 14 tests

Fixture: const INPUT = { id: 'r1', type: 'plan' as ThoughtType, task_id: 't1', content: 'hello', timestamp: '2026-04-17T00:00:00Z', prev_hash: ZERO_HASH };

  1. returns a 64-char lowercase-hex string — const h = computeHash(INPUT); expect(h).toMatch(/^[0-9a-f]{64}$/);.
  2. deterministic: two calls on the same input produce identical output — compute twice, compare.
  3. deterministic: snapshot value for a fixed genesis input — compute INPUT, compare against a fixed expected digest. (Computed once during test development; see §4 for how the value is derived.)
  4. insertion-order-agnostic: swapping field author order gives same hash — build input with keys in reverse order (prev_hash first, timestamp next, …), assert identical hash.
  5. ignores agent_id: two full records differing only in agent_id produce identical hash — pass two full ThoughtRecord-shaped objects that differ only in agent_id and assert identical hashes (the structural signature tolerates the extra fields on non-literal arguments; the runtime subset extraction ignores them).
  6. ignores hash: passing a hash field gives same hash as without — same pattern with hash extra field (demonstrates TS-level signature exclusion + runtime subset extraction).
  7. sensitive to id change — vary id, assert different hash.
  8. sensitive to type change — vary type (plan→analysis), assert different.
  9. sensitive to task_id change — vary, assert different.
  10. sensitive to content change — vary, assert different.
  11. sensitive to timestamp change — vary, assert different.
  12. sensitive to prev_hash change — vary (ZERO_HASH → different 64-char hex), assert different.
  13. all 4 types produce distinct hashes with otherwise identical input — run computeHash for each of plan, analysis, decision, reflection, collect into Set, assert size === 4.
  14. handles long content (1 KB) — 1024-char content string, confirm it hashes cleanly to a 64-char hex.

Tests 5 and 6 prove the critical exclusion invariant from the contract §3.
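The exclusion invariant behind tests 5 and 6 — and the sensitivity claim behind tests 7-12 — can be sanity-checked standalone. The sketch inlines a subset-extracting computeHash per the §2 skeleton, with hypothetical fixtures:

```typescript
// Sketch: agent_id and hash never reach the digest, because computeHash
// rebuilds the 6-field subset before canonicalizing. Inline copies of the
// §2 primitives; fixtures are hypothetical.
import { createHash } from 'node:crypto';

function sortValue(value: unknown): unknown {
  if (value === null || typeof value !== 'object') return value;
  if (Array.isArray(value)) return value.map((v) => sortValue(v));
  const obj = value as Record<string, unknown>;
  const sorted: Record<string, unknown> = {};
  for (const key of Object.keys(obj).sort()) sorted[key] = sortValue(obj[key]);
  return sorted;
}

interface HashInput {
  id: string; type: string; task_id: string;
  content: string; timestamp: string; prev_hash: string;
}

function computeHash(record: HashInput): string {
  // Runtime subset extraction: only the 6 chain fields participate.
  const subset = {
    id: record.id, type: record.type, task_id: record.task_id,
    content: record.content, timestamp: record.timestamp, prev_hash: record.prev_hash,
  };
  return createHash('sha256')
    .update(JSON.stringify(sortValue(subset)), 'utf8')
    .digest('hex');
}

const base = {
  id: 'r1', type: 'plan', task_id: 't1', content: 'hello',
  timestamp: '2026-04-17T00:00:00Z', prev_hash: '0'.repeat(64),
};

// Full records differing only in the excluded fields hash identically.
// (Assigned to variables first: excess-property checks apply only to
// fresh object literals at the call site.)
const recA = { ...base, agent_id: 'alpha', hash: 'a'.repeat(64) };
const recB = { ...base, agent_id: 'beta', hash: 'b'.repeat(64) };
const a = computeHash(recA);
const b = computeHash(recB);

// A change to any subset field changes the digest.
const c = computeHash({ ...base, content: 'hello!' });
```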

3f. Optional edge cases — 1-2 tests

  1. computeHash handles empty-string content — INPUT.content = '' hashes cleanly.
  2. computeHash handles Unicode content — INPUT.content = 'héllo 世界 🎉' hashes cleanly (UTF-8 encoding confirmed stable).

Total: 3a(2) + 3b(3) + 3c(13) + 3d(10) + 3e(14) + 3f(2) = 44 tests.


§4. Snapshot hash value — pre-computed

Test 3e.3 needs a fixed expected hash. Compute it during implementation from the fixed input:

canonical = {"content":"hello","id":"r1","prev_hash":"0000000000000000000000000000000000000000000000000000000000000000","task_id":"t1","timestamp":"2026-04-17T00:00:00Z","type":"plan"}

SHA-256 of that UTF-8 string. The implementer computes this value via node -e "const {createHash}=require('node:crypto'); console.log(createHash('sha256').update('{\"content\":\"hello\",\"id\":\"r1\",\"prev_hash\":\"0000000000000000000000000000000000000000000000000000000000000000\",\"task_id\":\"t1\",\"timestamp\":\"2026-04-17T00:00:00Z\",\"type\":\"plan\"}').digest('hex'))" and pins the result in the test. (Packet does not pre-compute to avoid baking a stale value; Step 4 computes and asserts.)
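An alternative to hand-escaping the JSON in a shell one-liner: derive the canonical string mechanically from the subset object. This is a sketch of what Step 4 might run; the digest is still computed at implementation time, deliberately not pinned here.

```typescript
// Derive the §4 canonical string and its digest from the subset object,
// rather than hand-typing the escaped JSON.
import { createHash } from 'node:crypto';

const subset = {
  id: 'r1',
  type: 'plan',
  task_id: 't1',
  content: 'hello',
  timestamp: '2026-04-17T00:00:00Z',
  prev_hash: '0'.repeat(64),
};

// Sorted-key serialization of the flat subset (no nesting, so a one-level
// sort suffices here).
const canonical = JSON.stringify(
  Object.fromEntries(Object.entries(subset).sort(([x], [y]) => (x < y ? -1 : 1))),
);
const digest = createHash('sha256').update(canonical, 'utf8').digest('hex');

console.log(canonical); // matches the canonical string quoted above
console.log(digest);    // the value to pin in test 3e.3
```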


§5. Coverage expectations

Per the contract §9:

  • src/domains/trail/schema.ts — 100% statements, 100% functions, 100% lines, ≥95% branches.

Every branch in the file is test-reachable:

  • sortValue:
    • value === null → test 3d.4 (canonicalize(null)).
    • typeof value !== 'object' → test 3d.1 / 3d.2 / 3d.3 (number, string, boolean).
    • Array.isArray(value) → test 3d.8 (array).
    • Object-default branch → tests 3d.5 / 3d.6 (object + nested).
  • canonicalize: single expression, covered by any of 3d.1-3d.9.
  • computeHash: single path, covered by 3e.1.
  • ThoughtRecordSchema: covered by 3c parse/safeParse tests.

Expected coverage: 100/100/100/100.


§6. Verification sequence (Step 5)

Run in order, all must pass:

  1. npm ci — clean install, zero vulnerabilities, zero deprecation errors.
  2. npm run lint — zero errors, zero warnings. The new file must pass ESLint cleanly (no // eslint-disable comments).
  3. npm test — all existing + new tests pass. Coverage report shows src/domains/trail/schema.ts at expected coverage.
  4. npm run build — tsc emits dist/domains/trail/schema.js + .d.ts with no errors.

If any step fails, Step 5 writes the failure into the verification doc and the task does NOT proceed to push/PR.


§7. Commit plan

Five commits total, matching the 5-step chain:

  1. audit(p0-7-1-trail-schema): inventory surface — DONE (f7856b82).
  2. contract(p0-7-1-trail-schema): behavioral contract — DONE (d5587da1).
  3. packet(p0-7-1-trail-schema): execution plan — this commit.
  4. feat(p0-7-1-trail-schema): Zod schema + SHA-256 canonical-JSON hasher — source + tests.
  5. verify(p0-7-1-trail-schema): test evidence — verification doc.

All commits add files only. No edits to existing files.


§8. Push + PR plan

After Step 5 verifies clean:

unset GITHUB_TOKEN
git push -u origin feature/p0-7-1-trail-schema
gh pr create \
  --title "feat(p0-7-1): ζ hash-chained record schema — SHA-256 canonical JSON + ZERO_HASH genesis" \
  --body "<as dispatched>"

STOP after PR opens. Do NOT merge. Sigma merges.


§9. Risks + mitigations

| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| jest.config.ts roots: ['<rootDir>/src'] excludes my test | HIGH if I forget | Block | File placed at src/__tests__/trail-schema.test.ts per Wave A lock. Confirmed by packet §1b. |
| Circular-reference test causes Jest to hang | MEDIUM | Block | Use expect(() => canonicalize(o)).toThrow(TypeError) — the canonicalizer fails synchronously on circular refs, so no hang. |
| Non-determinism across platforms (Linux CI vs Windows local) | LOW | Silent break | Canonicalizer uses Object.keys().sort() (default ASCII order, platform-stable) and UTF-8 digest input. createHash is Node-portable. Snapshot test pins a fixed digest. |
| better-sqlite3 native-build issue blocks npm ci | LOW | Blocks verification | Already verified to install cleanly on P0.2.2; no change to native deps here. |
| Lint rule @typescript-eslint/no-explicit-any trips on const o: any = {} in circular-ref test | MEDIUM | Block | Use const o: Record<string, unknown> = {}; (o as any).self = o; with one localised // eslint-disable-next-line for the cycle setup, OR use a { self: null as any } pattern. Final: packet prefers casting the cycle attach point inline — tight, understandable. Implementation selects the style that passes lint cleanly. |
| Zod v3 locale-cache bug under jest.isolateModulesAsync | LOW | N/A | Packet §3 does NOT use jest.isolateModulesAsync. Pure-factory pattern, no module-load isolation. |
| Coverage < 100% because of unreachable defensive branches in sortValue | LOW | Advisory | Contract allows ≥95% branch. Actual target per §5 is 100% on this small surface; if a branch comes in below target, the verification doc reports it and explains why. |
| Parallel Wave C collision on src/domains/ directory | LOW | None | Git tracks files, not directories. All four Wave C tasks create their own subdirectory under src/domains/ (tasks/, skills/, trail/). No merge conflict. |
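On the no-explicit-any risk: a cast-free fixture may sidestep the problem entirely, since Record<string, unknown> already type-checks the self-assignment without `any`. A minimal sketch (whether it satisfies the project's exact lint config is an assumption):

```typescript
// Circular fixture without `any` or eslint-disable: the index signature of
// Record<string, unknown> accepts `o` itself as a property value.
const o: Record<string, unknown> = {};
o.self = o; // type-checks: `o` is assignable to `unknown`

// JSON.stringify throws a synchronous TypeError on the cycle.
let threw: unknown;
try {
  JSON.stringify(o);
} catch (err) {
  threw = err;
}
console.log(threw instanceof TypeError); // true
```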

§10. Non-deviation check

Dispatch prompt lock confirmations:

| Lock | Where satisfied |
| --- | --- |
| Test path = src/__tests__/<module>.test.ts | §1b — src/__tests__/trail-schema.test.ts |
| snake_case tool names | N/A this task (no tool registration) |
| No new runtime deps | §1c — no package.json edits |
| Hash inputs exclude agent_id and hash | §2 — computeHash signature takes 6 subset fields |
| Canonical JSON deterministic cross-platform | §2 — Object.keys().sort() + UTF-8 digest; §3d.7, §3e.4 tested |
| Record schema shape matches spec | §2 — 8 fields matching spec line 325 |
| 4 valid types | §2 — THOUGHT_TYPES matches spec line 326 |
| Hash algorithm matches spec | §2 — computeHash uses createHash('sha256') over canonical-JSON of 6-field subset |
| ZERO_HASH = "0"*64 | §2 — '0'.repeat(64) |
| Determinism test | §3e.2 / §3e.3 / §3e.4 |
| Step 1 audit produced | DONE (f7856b82) |
| Step 2 contract produced | DONE (d5587da1) |

No deviations from the dispatch prompt. Ready to implement.


§11. Packet acceptance

  • File inventory (§1).
  • Source skeleton with exact imports + exports (§2).
  • 30+-test matrix with exact assertions (§3).
  • Snapshot-hash plan (§4).
  • Coverage target + per-branch accounting (§5).
  • Verification sequence (§6).
  • Commit plan (§7).
  • Push + PR plan (§8).
  • Risk register (§9).
  • Dispatch-prompt non-deviation check (§10).

Sigma pre-approved this packet via the dispatch prompt. Proceeding to Step 4.


Colibri — documentation-first MCP runtime. Apache 2.0 + Commons Clause.