What Constraints Make Behavior Reliable?


⚠ Phase 0 reality stamp (R74.5). This document describes the full constraint surface including layers that are not yet in Phase 0. Phase 0 enforces: (1) append-only thought_records with SHA-256 hash chaining via audit_verify_chain, (2) serialized single-writer SQLite via the tool-lock middleware stage, (3) 8-state β FSM transitions enforced at the middleware contract layer (INIT → GATHER → ANALYZE → PLAN → APPLY → VERIFY → DONE + CANCELLED), (4) Merkle sealing via merkle_finalize. The κ Rule Engine (determinism, no-clock, no-randomness), θ Consensus (Byzantine fault tolerance), λ Reputation, and ξ Identity constraints are specified but deferred to Phases 1–8. Canonical values live in colibri-system.md §2.

Preamble

A constraint is something you cannot do. A feature is something you can do. This document describes the constraints that make Colibri’s behavior predictable, trustworthy, and verifiable. Constraints are what prevent chaos in a peer-to-peer system with no central authority.

The difference: a feature says “the system supports X.” A constraint says “the system forbids Y.” Reliability comes from forbidding dangerous things, not from building more things.

Colibri has seven categories of constraints. Each category exists to answer a question:

  1. Determinism — How do we ensure every node computes the same result?
  2. Immutability — How do we prove nothing was altered after the fact?
  3. State Machines — How do we prevent tasks from getting stuck or looping?
  4. Serialization — How do we prevent concurrent writes from corrupting the database?
  5. Finality Gates — How do we prevent irreversible side-effects before we’re sure?
  6. Byzantine Tolerance — How do we agree when some nodes lie?
  7. Governance Rate Limits — How do we prevent rule changes from destabilizing the system?

Constraint Category 1: Determinism (κ Rule Engine)

What it is: The rule engine is a pure function. Given an event and current state, it computes consequences in a way that every honest node will reproduce identically, bit for bit.

What is forbidden:

| Forbidden Operation | Why |
| --- | --- |
| Reading wall-clock time | Different nodes have different local clocks; skew causes disagreement |
| Reading filesystem state | Different nodes have different filesystems |
| Making network calls inside rule evaluation | Network calls are non-deterministic; timeouts vary per node |
| Using floating-point arithmetic | 0.1 + 0.2 !== 0.3 under IEEE 754, and rounding can vary across compilers and platforms; bit-for-bit agreement is impossible |
| Using unseeded randomness | Each node generates different random numbers; quorum voting fails |
| Depending on execution order of independent rules | CPU caches and instruction reordering can cause different ordering |
| Mutating input event objects | Events are append-only; side effects on inputs break auditability |
| Reading external sources (APIs, files, clock) | Non-determinism spreads; different nodes get different values |

What is enforced instead:

All arithmetic uses 64-bit signed integers only. Percentages are expressed in basis points (1 bp = 0.01%):

100% = 10,000 bps
5%   = 500 bps
0.5% = 50 bps
0.01% = 1 bp

bps_mul(value, bps) = (value * bps) // 10000
bps_div(value, bps) = (value * 10000) // bps
decay(value, rate)  = (value * (10000 - rate)) // 10000
cap(value, max_bps) = min(value, max_bps)
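These formulas can be sketched in TypeScript. BigInt keeps every operation in integer arithmetic, and its division truncates, which matches `//` for the non-negative values used here; `cap` is written as the clamp its formula reduces to. The function names are illustrative, not the canonical API:

```typescript
// Integer-only basis-point arithmetic. 100% = 10,000 bps.
const SCALE = 10_000n;

function bpsMul(value: bigint, bps: bigint): bigint {
  return (value * bps) / SCALE;               // scale value down by bps/10000
}

function bpsDiv(value: bigint, bps: bigint): bigint {
  return (value * SCALE) / bps;               // scale value up by 10000/bps
}

function decay(value: bigint, rateBps: bigint): bigint {
  return (value * (SCALE - rateBps)) / SCALE; // one decay step at rateBps
}

function cap(value: bigint, maxBps: bigint): bigint {
  return value < maxBps ? value : maxBps;     // clamp a bps-denominated value
}
```

Every honest node evaluating these on the same inputs produces the same 64-bit results, which is exactly the bit-for-bit agreement determinism requires.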

Rule Evaluation Order is fixed and deterministic:

1. Filter by event type (Commitment, Dispute, Settlement, etc.)
2. Apply Admission rules (can this event be created?)
3. Apply State transition rules (what changes?)
4. Apply Consequence rules (reputation, stake)
5. Apply Promotion rules (token level changes)

Within each class, rules are ordered statically (defined order, not alphabetical).

DSL Built-in Functions (8 total, all integer-pure):

  • min(a, b) — minimum of two values
  • max(a, b) — maximum of two values
  • sqrt(n) — integer square root via Newton’s method
  • log2(n) — integer log base-2 (bit_length - 1)
  • abs(x) — absolute value
  • cap(value, max_bps) — cap at maximum basis points
  • decay(value, rate_bps) — apply exponential decay
  • diminishing(count, k=1000) — diminishing returns function
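Two of these built-ins are the only nontrivial ones, so they are worth spelling out: a sketch of sqrt via Newton's method and log2 via bit length, under the same integer-only rules (the names `isqrt` and `ilog2` are illustrative):

```typescript
// Integer square root via Newton's method: floor(sqrt(n)) for n >= 0.
function isqrt(n: bigint): bigint {
  if (n < 0n) throw new RangeError("isqrt of negative");
  if (n < 2n) return n;
  let x = n;
  let y = (x + 1n) / 2n;
  while (y < x) {              // iterate until the estimate stops shrinking
    x = y;
    y = (x + n / x) / 2n;
  }
  return x;
}

// Integer log base 2: bit_length(n) - 1, i.e. floor(log2(n)) for n >= 1.
function ilog2(n: bigint): bigint {
  if (n < 1n) throw new RangeError("ilog2 of non-positive");
  let bits = 0n;
  for (let v = n; v > 1n; v >>= 1n) bits++;
  return bits;
}
```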

VRF (Verifiable Random Function) is used when “randomness” is needed:

  • VRF produces output that looks random but comes with a cryptographic proof
  • The output is passed as input to the rule engine
  • Every node uses the same VRF output, so determinism is preserved
  • VRF is evaluated outside the rule engine (not inside it)

Why this constraint exists:

In a centralized system, one server decides consequences. In Colibri’s P2P network, every node must independently arrive at the same answer. If two nodes disagree on the result of a rule evaluation, the network splits. Determinism prevents this.

What breaks without it:

  • Nodes diverge on who has reputation (contradictory opinions of same event)
  • Consensus voting fails (nodes vote differently on identical events)
  • Stake penalties are computed differently per node (unfair punishment)
  • History becomes subjective (no ground truth)

Constraint Category 2: Immutability (ζ Decision Trail + Constitutional Axiom 1)

What it is: Every decision, plan, analysis, and reasoning step is recorded as a hash-chained thought record. Modifying any record after creation breaks the chain, making tampering detectable.

What is forbidden:

| Forbidden Operation | Why |
| --- | --- |
| Deleting thought records | History must be complete; deletion breaks the trail |
| Modifying record content after creation | Hash changes, breaking the chain |
| Reordering records | Chain_hash depends on parent; reordering breaks links |
| Deleting events from the decision trail | Decisions must be permanent; deletion hides past reasoning |
| Backdating records (setting created_at in the past) | Chain validation fails if timestamps are inconsistent |
| Creating records without a parent in a session | Each record must link to the previous one (or be first) |

Data structure for each thought record:

interface ThoughtRecord {
  id: string;                    // Unique ID
  session_id: string;            // Groups records by session
  parent_id: string | null;      // Previous record (null = root)
  type: "plan" | "analysis" | "decision" | "reflection";
  content: string;               // Actual thought text
  content_hash: string;          // SHA-256(content)
  chain_hash: string;            // SHA-256(content_hash + parent.chain_hash)
  metadata: {
    task_id?: string;
    agent_id?: string;
    tool_calls?: string[];
    timestamps?: string[];
  };
  created_at: ISO8601;           // Immutable timestamp
}

Chain hash formula:

If parent_id is null:
  parent_chain_hash = 0x0000... (zero hash)

chain_hash = SHA256(content_hash + parent_chain_hash)

Verification workflow (thought_verify operation):

  1. Walk chain from first record to last
  2. For each record, compute SHA256(content) → compare to stored content_hash
  3. Compute SHA256(content_hash + parent.chain_hash) → compare to stored chain_hash
  4. If any mismatch: return {valid: false, broken_at: record_id, expected: hash_X, actual: hash_Y}
  5. If all match: return {valid: true, records: N}
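The build-and-verify cycle can be sketched end to end with Node's built-in crypto. Field names follow the ThoughtRecord interface, the zero hash stands in for a null parent, and the record and result shapes are simplified for illustration:

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");
const ZERO_HASH = "0".repeat(64); // parent_chain_hash for a root record

interface Rec {
  content: string;
  content_hash: string;
  chain_hash: string;
}

// Append a record, binding it to its parent's chain_hash.
function appendRecord(content: string, parent: Rec | null): Rec {
  const content_hash = sha256(content);
  const chain_hash = sha256(content_hash + (parent ? parent.chain_hash : ZERO_HASH));
  return { content, content_hash, chain_hash };
}

// Walk the chain first-to-last; any mismatch pinpoints the broken record.
function verifyChain(records: Rec[]): { valid: boolean; broken_at?: number } {
  let parentHash = ZERO_HASH;
  for (let i = 0; i < records.length; i++) {
    const r = records[i];
    if (sha256(r.content) !== r.content_hash) return { valid: false, broken_at: i };
    if (sha256(r.content_hash + parentHash) !== r.chain_hash) return { valid: false, broken_at: i };
    parentHash = r.chain_hash;
  }
  return { valid: true };
}
```

Tampering with any record's content changes its content_hash, so verification fails at exactly that record: a broken chain is evidence, not just an error.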

Constitutional Axiom 1: Append-Only Events

Events are never deleted. Corrections are new events. This is immutable at the system level.

Why this constraint exists:

In a system where decisions can be hidden or altered, auditing is impossible. By making the trail tamper-evident, any modification is detectable. A broken hash chain is proof of tampering.

What breaks without it:

  • Agents hide unfavorable decisions (reputation manipulated retroactively)
  • Thought records are rewritten (decision history becomes fiction)
  • Audit trails are fabricated (proofs become worthless)
  • Accountability disappears (no way to prove what was decided and when)

Constraint Category 3: State Machines (β Task Pipeline)

What it is: Tasks flow through a fixed 7-state pipeline (plus a bounded RETRY path; Phase 0 also defines the terminal CANCELLED state). No task skips a state, loops back without bound, or gets stuck in an undefined state.

State machine (β):

INIT → GATHER → ANALYZE → PLAN → APPLY → VERIFY → DONE
  ↑                                                   |
  └───────────── RETRY (on failure) ─────────────────┘

What is forbidden:

| Forbidden Operation | Why |
| --- | --- |
| Skipping a state (e.g., INIT → PLAN) | Each state performs essential work; skipping loses context |
| Looping back from VERIFY to APPLY multiple times | Unbounded retry loops block progress |
| Entering an undefined state | The state machine is the source of truth; off-map states = bugs |
| Transitioning without recording | Lost state transitions = lost progress on restart |
| Changing state without an atomic write | Concurrent requests might see inconsistent state |
| Starting work before the APPLY state | Setup (INIT, GATHER, ANALYZE, PLAN) must complete first |
| Allowing tasks to exit mid-state | Tasks must complete state transitions atomically |

State transition rules:

| From State | To State | Conditions | Actions |
| --- | --- | --- | --- |
| INIT | GATHER | Always | Allocate workspace, validate task definition, set up git branch |
| GATHER | ANALYZE | Dependencies resolved | Snapshot context, load related memory, identify risks |
| ANALYZE | PLAN | Preconditions met | Design execution plan, set acceptance criteria, resolve dependency order |
| PLAN | APPLY | Plan approved | Allocate to agent pool, start work, track progress |
| APPLY | VERIFY | Phase work complete | Run acceptance tests, lint, generate proofs |
| VERIFY | DONE | Acceptance criteria pass | Write results back, update task status, seal audit trail |
| VERIFY | RETRY | Acceptance criteria fail | Classify error type, apply backoff, re-enter appropriate state |
| PLAN, APPLY, VERIFY | RETRY | Transient error | Exponential backoff, max 3 attempts, then escalate |
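The transition table reduces to an allowed-edge map: any transition not in the map is rejected. A minimal sketch (CANCELLED and persistence are omitted; the error message is illustrative):

```typescript
type State = "INIT" | "GATHER" | "ANALYZE" | "PLAN" | "APPLY" | "VERIFY" | "DONE" | "RETRY";

// Only the edges from the transition table are legal; everything else throws.
const ALLOWED: Record<State, State[]> = {
  INIT: ["GATHER"],
  GATHER: ["ANALYZE"],
  ANALYZE: ["PLAN"],
  PLAN: ["APPLY", "RETRY"],
  APPLY: ["VERIFY", "RETRY"],
  VERIFY: ["DONE", "RETRY"],
  RETRY: ["PLAN", "APPLY", "VERIFY"], // re-enter the appropriate state
  DONE: [],                           // terminal
};

function transition(from: State, to: State): State {
  if (!ALLOWED[from].includes(to)) {
    throw new Error(`illegal transition ${from} -> ${to}`); // e.g. INIT -> PLAN
  }
  return to;
}
```

Because the map is total over the State type, "entering an undefined state" is unrepresentable at the type level, and skipped states fail at runtime.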

Checkpointing:

Long-running tasks write checkpoints at phase boundaries:

  • Current state, phase, intermediate results
  • Agent assignments, elapsed time, memory snapshots
  • On server restart, tasks resume from last checkpoint (no full restart)

Writeback Contract:

Every completed task must produce:

task_update(task_id, status="done", progress=100)
thought_record(task_id, content={...}, type="reflection")

Agents that terminate without writeback are flagged as orphaned (convention-level enforcement, not hard error).

Why this constraint exists:

Without state machines, tasks can be lost, stuck, or executed out of order. The 7-state pipeline ensures every task is tracked and progresses predictably. Checkpoints prevent loss of work on server restart.

What breaks without it:

  • Tasks get stuck (no clear next action)
  • Duplicated work (task restarted when it was already done)
  • Lost results (intermediate state never persisted)
  • Unpredictable behavior (task execution order unclear)
  • No recovery on failure (no checkpoint to resume from)

Constraint Category 4: Serialization (α Middleware)

What it is: All database writes are serialized. Only one tool call writes to the database at a time, preventing concurrent writes from corrupting the database.

What is forbidden:

| Forbidden Operation | Why |
| --- | --- |
| Concurrent SQLite writes from multiple threads | SQLite is not designed for concurrent writers; locks are per-connection |
| Non-atomic multi-step writes | A tool call might partially complete; on crash, state is inconsistent |
| Tool calls without an ACL check | Unauthorized callers could modify data they shouldn't access |
| Tool calls not logged to the audit trail | No record of who did what, when; auditing impossible |
| Retrying transient database errors without backoff | Hammering a locked database worsens congestion |
| Ignoring the tool-lock on write paths | Multiple tools writing simultaneously = corrupted database |

Middleware layer enforcement:

Layer 1: tool-lock (Serialization)

  • Per-process Promise queue serializes execution
  • Cross-process file lock at data/locks/mcp-tool.lock
  • Stale locks auto-cleared by PID liveness check
  • Configuration: Phase 0 reads the COLIBRI_* namespace; the AMS_* names in parentheses below are heritage donor variables kept only for genealogy and are not read by Phase 0 code:
    • COLIBRI_TOOL_LOCK_TIMEOUT_MS (e.g., 30,000) (donor: AMS_MCP_TOOL_LOCK_TIMEOUT_MS)
    • COLIBRI_TOOL_LOCK_POLL_MS (e.g., 100) (donor: AMS_MCP_TOOL_LOCK_POLL_MS)
    • COLIBRI_TOOL_LOCK_STALE_MS (e.g., 60,000) (donor: AMS_MCP_TOOL_LOCK_STALE_MS)
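The per-process Promise queue can be sketched as a promise chain: each tool call awaits the current tail of the chain, so no two calls ever run concurrently. This omits the cross-process file lock and stale-lock handling, and the class name is illustrative:

```typescript
// Serialize async tool calls: each call runs only after the previous settles.
class ToolLock {
  private tail: Promise<unknown> = Promise.resolve();

  run<T>(fn: () => Promise<T>): Promise<T> {
    const next = this.tail.then(fn, fn); // run fn whether the prior call succeeded or failed
    this.tail = next.catch(() => {});    // swallow errors so the chain stays alive
    return next;                         // caller still sees fn's own result/failure
  }
}
```

Even if a caller fires two tool calls "at the same time", the second one's body does not start until the first has fully settled, which is the single-writer guarantee the middleware needs.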

Layer 2: ACL (Access Control)

  • Role-based access control (owner > admin > member > viewer)
  • Tool map: ~80 tools mapped to minimum required role
  • Project context resolution: explicit param > session > environment > auto-detect
  • Failure throws ACL: Access denied

Layer 3: Audit (Logging)

  • Every tool call logged before execution (no bypass)
  • AsyncLocalStorage for thread-safe session isolation
  • Result hashed via SHA-256(stableSerialize(result))
  • Atomic logging with step_index (UNIQUE constraint)
  • Audit log failure is logged to stderr but does NOT fail the tool call

Layer 4: Rate Limit

  • Per-tool token bucket (e.g., 100 requests per 60 seconds)
  • In-memory Map with sliding window
  • Throws Rate limit exceeded with retry-after

Layer 5: Circuit Breaker

  • Prevents cascading failures by halting repeatedly-failing tools
  • Threshold: 5 failures in 30-second window
  • Auto-reset on success
  • Throws Circuit breaker open for tool: X
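A sliding-window breaker matching the stated policy (open after 5 failures within 30 seconds, reset on success) might look like this sketch; class and method names are illustrative, and timestamps are injected as parameters so the behavior stays deterministic and testable:

```typescript
class CircuitBreaker {
  private failures: number[] = []; // timestamps (ms) of recent failures

  constructor(
    private readonly maxFailures = 5,
    private readonly windowMs = 30_000,
  ) {}

  // Open when the window still holds maxFailures recent failures.
  isOpen(now = Date.now()): boolean {
    this.failures = this.failures.filter(t => now - t < this.windowMs);
    return this.failures.length >= this.maxFailures;
  }

  recordFailure(now = Date.now()): void {
    this.failures.push(now);
  }

  recordSuccess(): void {
    this.failures = []; // auto-reset on success
  }
}
```

Note this lives in the middleware, not the κ rule engine, so reading the clock is allowed here.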

Layer 6: Retry

  • Automatic retry for transient SQLite errors (SQLITE_BUSY, SQLITE_LOCKED, EBUSY, EAGAIN)
  • Exponential backoff with jitter, capped at 3,000 ms per delay
  • Budget-bounded (total retry budget in milliseconds)
  • Validation errors are NOT retried
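Layer 6's policy can be sketched as a wrapper that retries only the listed transient codes, with exponential backoff, jitter, and the 3,000 ms per-delay cap. The attempt count and base delay are illustrative, and the jitter source lives here in the middleware, outside the deterministic rule engine:

```typescript
const TRANSIENT = new Set(["SQLITE_BUSY", "SQLITE_LOCKED", "EBUSY", "EAGAIN"]);

const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err: any) {
      // Validation errors and unknown codes are NOT retried.
      if (!TRANSIENT.has(err?.code) || i + 1 >= attempts) throw err;
      const base = Math.min(100 * 2 ** i, 3_000);          // exponential, capped per delay
      await sleep(base / 2 + Math.random() * (base / 2));  // add jitter to avoid lockstep
    }
  }
}
```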

Why this constraint exists:

SQLite is a file-based database: it allows many concurrent readers but only one writer at a time, coordinated through file locks. Uncoordinated concurrent writers contend, time out, and in pathological cases corrupt the database file. Serialization ensures only one tool call modifies the database at a time. Audit logging ensures every change is recorded. ACL prevents unauthorized access.

What breaks without it:

  • Database corruption (inconsistent state, unrecoverable data)
  • Lost transactions (partial writes not rolled back)
  • Unauthorized access (anyone can call any tool)
  • No audit trail (impossible to trace who modified what)
  • Cascading failures (one bad tool brings down others)

Constraint Category 5: Finality Gates (η Proof Store + θ Consensus)

What it is: Two gates prevent irreversible side-effects from happening too early:

  1. Merkle Finalization Rule — Never finalize the Merkle tree before the final thought record
  2. HARD Finality Gate — Never do irreversible actions (payments, notifications, data exports) before HARD finality

What is forbidden:

| Forbidden Operation | Why |
| --- | --- |
| Calling merkle_finalize() before the final thought record is written | The thought record must be a leaf in the finalized tree |
| Writing a thought record after calling merkle_finalize() | Tree is locked; no new leaves can be added |
| Exporting data before HARD finality | Data export is irreversible; disputes during SOFT/QUORUM finality could invalidate the export |
| Sending notifications before HARD finality | Can't unsend a message; if the event is reversed, the recipient is confused |
| Charging fees or transferring tokens before HARD finality | Can't undo a transfer; if the event is rolled back, the payment is orphaned |
| Creating new events while the Merkle tree is locked | Finalized trees are immutable proofs; no new work can be added |

Five finality levels (θ Consensus):

| Level | Meaning | When | Reversible? | Allowed Actions |
| --- | --- | --- | --- | --- |
| PENDING | Just proposed, 0 votes | Event submitted | Yes | Read-only validation |
| SOFT | Some votes, below quorum | Voting in progress | Yes | Prepare for next phase |
| QUORUM | Votes >= quorum threshold | Accepted by majority | Yes, during dispute window | Update internal state |
| HARD | Survived dispute window (100+ epochs) | No successful challenges | Only via new events | Irreversible side-effects (send, charge, export) |
| ABSOLUTE | Irreversible | HARD + no pending appeals | No | Constitutional proof anchor |

Merkle finalization workflow:

1. Task or session completes work
2. Write final thought_record (handoff summary)
3. Call merkle_finalize() — lock tree, compute final root
4. Call merkle_root() — retrieve immutable root hash
5. Store root hash in audit_log as session proof anchor
6. No new leaves can be added to the tree
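A toy proof store makes the locking rule concrete: leaves (thought records) can be appended only until finalize() computes the root, after which the tree rejects new leaves. The pairing scheme here (an odd leaf carried up unchanged) is illustrative, not the canonical tree layout:

```typescript
import { createHash } from "node:crypto";

const h = (s: string) => createHash("sha256").update(s).digest("hex");

class ProofStore {
  private leaves: string[] = [];
  private root: string | null = null; // non-null means the tree is locked

  addLeaf(contentHash: string): void {
    if (this.root !== null) throw new Error("tree is locked, cannot add leaf");
    this.leaves.push(contentHash);
  }

  // Lock the tree and compute the Merkle root by pairwise hashing upward.
  finalize(): string {
    let level = this.leaves.length ? [...this.leaves] : [h("")];
    while (level.length > 1) {
      const next: string[] = [];
      for (let i = 0; i < level.length; i += 2) {
        next.push(i + 1 < level.length ? h(level[i] + level[i + 1]) : level[i]);
      }
      level = next;
    }
    this.root = level[0];
    return this.root;
  }
}
```

Writing the final thought record first, then finalizing, guarantees the handoff summary is a leaf under the sealed root; the reverse order throws.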

Rule: The thought record must be written BEFORE finalization.

// WRONG: Finalize first
merkle_finalize();           // Tree locked
thought_record(...);         // ERROR: Tree is locked, cannot add leaf

// RIGHT: Record first
thought_record(...);         // Adds leaf to tree
merkle_finalize();           // Locks tree with thought record inside

Why this constraint exists:

Before HARD finality, disputes can reverse events. If you send a payment on SOFT finality and later a dispute reverses the event, the payment is orphaned (gone but not undone). HARD finality means the dispute window has closed. Merkle finalization ensures the thought trail is immutable before locking.

What breaks without it:

  • Side-effects that can’t be undone (payment reversed, but notification already sent)
  • Broken Merkle proofs (thought record not in the finalized tree)
  • Orphaned transactions (event reversed but side-effects persist)
  • Audit trail corruption (thought records added after tree was “complete”)
  • Unfair advantage (some participants can undo actions, others can’t)

Constraint Category 6: Byzantine Tolerance (θ Consensus)

What it is: The network tolerates up to 1/3 of nodes being faulty (lying, offline, or malicious) using Byzantine Fault Tolerant quorum voting.

The math:

Tolerance: f < n/3 (fewer than 1/3 faulty nodes)
Minimum nodes: n >= 3f + 1

Examples:
- 4 nodes → tolerate 1 faulty node (25% fault tolerance)
- 7 nodes → tolerate 2 faulty nodes (28% fault tolerance)
- 10 nodes → tolerate 3 faulty nodes (30% fault tolerance)
- 100 nodes → tolerate 33 faulty nodes (33% fault tolerance)

Quorum threshold: floor(2n/3) + 1 votes needed

Examples:
- 4 nodes → quorum = 3
- 7 nodes → quorum = 5
- 10 nodes → quorum = 7
- 100 nodes → quorum = 67
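Both formulas are one-liners, and the examples above fall out of them directly. A sketch (plain number arithmetic is fine here, since n is a small node count):

```typescript
// Maximum tolerable faulty nodes: the largest f with f < n/3.
const maxFaulty = (n: number): number => Math.ceil(n / 3) - 1;

// Votes required to accept an event: floor(2n/3) + 1.
const quorum = (n: number): number => Math.floor((2 * n) / 3) + 1;
```

The two are consistent: with n = 3f + 1 nodes, a quorum of floor(2n/3) + 1 honest votes is always reachable even when all f faulty nodes abstain or lie.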

What is forbidden:

| Forbidden Operation | Why |
| --- | --- |
| Accepting an event with fewer than quorum votes | Fewer than a 2/3 majority could be wrong nodes; not enough agreement |
| Allowing a node to vote without a signature | Signatures are proof of identity; unsigned votes are forgery |
| Accepting contradictory votes from the same node | Equivocation is proof of malice; must be detected and punished |
| Counting votes after the dispute window closes | Disputes must close before accepting as HARD; late votes are spam |
| Allowing a single node to decide | Defeats the purpose of voting; no quorum = no decision |
| Accepting votes with the wrong rule version | Different rule versions produce different consequences; voters must agree on rules |

Equivocation Detection:

Equivocation = signing two contradictory messages on the same event (e.g., voting “accept” AND “reject”).

function checkEquivocation(vote_a, vote_b) {
  // Same node, same event, contradictory decisions, both validly signed:
  // the two signed votes together constitute the proof of equivocation.
  if (vote_a.node_id === vote_b.node_id
      && vote_a.event_id === vote_b.event_id
      && vote_a.decision !== vote_b.decision
      && verifySignature(vote_a) && verifySignature(vote_b)) {
    return new EquivocationProof(vote_a, vote_b);
  }
  return null;
}

When equivocation is detected:

  1. Any node holding both conflicting signatures broadcasts an equivocation proof
  2. The proof is distributed to all nodes
  3. The equivocating node’s votes on that event are invalidated
  4. The node receives a reputation penalty
  5. If the equivocating node was the proposer (leader), a view change is triggered

Event validation (5 checks in order):

  1. Signature check — Is Ed25519 signature from proposer valid?
  2. Admission check — Does proposer meet rate limits, stake requirements, quality gates?
  3. Rule engine — Does event pass deterministic rule evaluation?
  4. State check — Is event consistent with local state?
  5. Vote — Sign accept or reject with own key, broadcast to peers

Why this constraint exists:

In a peer-to-peer network with no central server, nodes must reach agreement on truth. Byzantine tolerance means honest nodes agree even when some nodes lie. Equivocation detection catches liars. Quorum means majority rules.

What breaks without it:

  • Minority can control the network (no quorum requirement)
  • Faulty nodes can cause splits (no Byzantine tolerance)
  • Liars go undetected (no equivocation detection)
  • Different nodes have different history (no consensus)
  • Network can be partitioned (no recovery from partitions)

Constraint Category 7: Governance Rate Limits

What it is: Rule changes are limited in frequency, scope, and authority. No single change can dramatically shift the system’s behavior overnight.

What is forbidden:

| Forbidden Operation | Why |
| --- | --- |
| Changing a rule version without a vote | Rules are governance decisions, not technical changes |
| Changing the same parameter by more than ±10% in 6 months | Large changes destabilize the system; gradual change only |
| Activating a new rule version without testing against the corpus | New version might break old behavior; must be compatible |
| Constitutional rules with < 80% supermajority vote | Constitutional changes need strong consensus |
| Non-constitutional rules with < 66% quorum vote | Regular rules need reasonable consensus |
| Skipping the 3-stage voting process for constitutional rules | Hasty decisions harm the system; 30-day thinking time required |
| Mixing old and new rule versions in the same evaluation | Inconsistency breaks determinism; all nodes must use the same version |
| Exceeding ±30% change to any key parameter without supermajority | Large changes require deep consensus |

Rule change process:

Constitutional-adjacent rules:

Stage 1: Proposal (30 days)
  - Publish proposal text
  - Public discussion period
  - Minimum 5 nodes must endorse to advance

Stage 2: Voting (30 days)
  - Vote on adoption
  - Threshold: > 80% supermajority (quorum = floor(2n/3) + 1)
  - 10+ signatures needed to pass

Stage 3: Activation (30 days)
  - Elected nodes test new version against corpus
  - Must produce identical results on test data
  - Gradual rollout to non-validator nodes

Non-constitutional rules:

Proposal + Discussion (14 days)
Vote (14 days)
  - Threshold: > 66% quorum
  - Results published immediately
Activation (immediate)

Rate limit constants:

MAX_CHANGE_PER_6M = 10%          // Max ±10% change per 6-month window
KEY_PARAM_THRESHOLD = 30%        // > 30% requires supermajority
MIN_STABILITY_PERIOD = 2 epochs  // Wait 2 epochs before another change to same param

Validation rule versioning:

interface RuleVersion {
  rule_id: string;
  version_number: number;
  content_hash: string;           // SHA256 of rule text
  activated_at_epoch: number;
  deactivated_at_epoch?: number;
  change_magnitude: number;       // Percent change from prior version
  vote_threshold_met: boolean;
  test_corpus_passed: boolean;
  constitutional: boolean;
}

// Before activation, MUST have:
// 1. vote_threshold_met = true
// 2. test_corpus_passed = true
// 3. change_magnitude <= 10% (or <= 30% with supermajority)
// 4. 2-epoch stability period after prior change to same rule
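The four activation preconditions can be checked mechanically. A sketch over a trimmed RuleVersion shape: the `supermajority` flag and the `lastChangeEpoch` parameter are simplifying assumptions, not the canonical schema:

```typescript
interface RuleVersionCheck {
  change_magnitude: number;   // percent change from prior version
  vote_threshold_met: boolean;
  test_corpus_passed: boolean;
  supermajority: boolean;     // assumption: the >80% vote is recorded separately
}

function canActivate(
  rv: RuleVersionCheck,
  currentEpoch: number,
  lastChangeEpoch: number, // epoch of the prior change to this same rule
): boolean {
  // <=10% always; 10-30% only with supermajority; >30% never.
  const withinRate =
    rv.change_magnitude <= 10 ||
    (rv.change_magnitude <= 30 && rv.supermajority);
  return (
    rv.vote_threshold_met &&
    rv.test_corpus_passed &&
    withinRate &&
    currentEpoch - lastChangeEpoch >= 2 // MIN_STABILITY_PERIOD
  );
}
```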

Constitutional Axioms (immutable):

These are the bedrock. Changes require system redesign, not voting:

  1. AX-01: Append-Only Events — Events never deleted, only marked rejected
  2. AX-02: Reputation is Derived — Never assigned by admin, only earned
  3. AX-03: No Absolute Authority — No role bypasses consequences
  4. AX-04: Consequence Windows — Subjects get admission window to contest
  5. AX-05: Subjective Finality — Valid + signed + accepted = fact
  6. AX-06: Right to Exit — Fork with ≤10% reputation penalty
  7. AX-07: Technical Sovereignty — Infrastructure isolation prevents data leakage

Why this constraint exists:

Rules are the heart of the system. If rule changes happen too fast or require too little consensus, the system becomes unstable. Governance rate limits ensure rules change only when there is broad agreement and time for review.

What breaks without it:

  • Rules change overnight (users have no time to adapt)
  • Malicious coalitions can hijack the system (low threshold for change)
  • Inconsistent behavior across nodes (mixed rule versions)
  • Reputation system becomes meaningless (rules can be rewritten to help specific actors)
  • System credibility collapses (governance is arbitrary, not legitimate)

Constitutional Axioms: The Foundation

All constraints rest on seven constitutional axioms. These are immutable — changing them requires system redesign, not voting.

AX-01: Append-Only Events

Events are never deleted. Corrections are new events.

Why: History must be complete and verifiable. Deletion hides what happened.

AX-02: Reputation is Derived

Reputation is computed from history, never assigned administratively.

Why: Assigned reputation can be weaponized. Derived reputation is objective and auditable.

AX-03: No Absolute Authority

No role (admin, owner, arbitrator) can bypass consequences or reset reputation.

Why: Absolute authority enables corruption. Constrained authority is accountable.

AX-04: Consequence Windows

When a sanction is proposed, the subject gets an admission window to contest it.

Why: Fairness requires the accused to respond before consequences apply.

AX-05: Subjective Finality

Events become fact when they are valid, signed, and accepted by counterparties.

Why: No global clock exists in P2P systems. Local validation + signatures = truth.

AX-06: Right to Exit

Participants can fork (leave) the network with a reputation penalty ≤ 10%.

Why: Participants must be free to reject unfair systems. Capping exit penalty prevents traps.

AX-07: Technical Sovereignty

Each node can validate independently. No centralized arbiter can force states.

Why: Centralized arbiters become bottlenecks and targets for attack.


What Happens When Constraints Are Violated

Scenario 1: Floating-Point Arithmetic Snuck Into Rule Evaluation

Constraint violated: Determinism (Category 1)

Code:

# WRONG
reputation_delta = commitment.value * 0.05  # floating-point!

# IEEE 754 rounding can differ across compilers, FPU modes, and
# optimization flags, so nodes may disagree in the low bits of the
# "same" product; bit-for-bit agreement becomes impossible.

What breaks:

  • Nodes diverge on computed reputation
  • Consensus voting splits (nodes vote differently on the same event)
  • Network forks
  • No way to agree on truth

Scenario 2: Thought Record Modified After Finalization

Constraint violated: Immutability (Category 2)

Code:

// After thought_record is written and chain_hash computed
// Someone modifies the content:
thought.content = "Changed what I said"; // FORBIDDEN
// content_hash changes, chain_hash is now wrong
// thought_verify() detects the break immediately

What breaks:

  • Audit trail is useless (any record could be tampered)
  • Deniability — can claim actions were different than recorded
  • Trust in the system collapses

Scenario 3: Task Skips PLAN State

Constraint violated: State Machines (Category 3)

Code:

// Task jumps from GATHER to APPLY (skips ANALYZE and PLAN)
task.status = "APPLY"; // FORBIDDEN
// Acceptance criteria were never set
// Execution proceeds without clear goals
// Verification fails because no criteria exist

What breaks:

  • Acceptance criteria are never defined (verification has nothing to check)
  • Work is duplicated or incomplete
  • Task progress is unpredictable

Scenario 4: Two Tools Write Concurrently

Constraint violated: Serialization (Category 4)

Code:

Client A: calls task_create()
Client B: calls task_update() at same time
Both hit SQLite at once
Database file is corrupted
Recovery is manual and painful

What breaks:

  • Database corruption (unrecoverable data loss)
  • State becomes inconsistent
  • Audit trail is incomplete

Scenario 5: Irreversible Payment Before HARD Finality

Constraint violated: Finality Gates (Category 5)

Code:

// Event reaches QUORUM finality, but dispute window not closed
if (event.finality >= "QUORUM") {
  processPayment(amount);  // FORBIDDEN — too early!
  // Later, a fraud proof arrives
  // Event is reversed during dispute window
  // But payment already happened — can't undo
}

What breaks:

  • Payments become orphaned (event reversed, but money gone)
  • Fraud becomes profitable (reverse the event, keep the benefit)
  • System trust collapses

Scenario 6: Minority Controls Network

Constraint violated: Byzantine Tolerance (Category 6)

Code:

// No quorum threshold — accept event if ANY node votes yes
if (votes.accept.length >= 1) {  // FORBIDDEN
  acceptEvent(event);
  // 1 honest node + 99 faulty nodes
  // Faulty nodes can flip event outcome
}

What breaks:

  • Dishonest nodes control the network (no quorum protection)
  • History becomes subjective (depends on which partition you’re in)
  • Consensus is impossible

Scenario 7: Rule Changed by 50% Overnight

Constraint violated: Governance Rate Limits (Category 7)

Code:

Epoch 1: Reputation decay rate is 2% per epoch (standard)
Epoch 2: Vote passes with a bare 55% majority to raise it to 100% per epoch (a 50× jump)
Epoch 3: Existing users' reputation is wiped within a single epoch;
         new users are unaffected

What breaks:

  • System becomes unfair (punishment is retroactive and selective)
  • Users lose trust (rules changed without real consensus)
  • Governance is captured (51% coalition weaponizes rules)

The Constraint Stack: How They Compose

A single tool call passes through all seven constraint categories in a real scenario. Here’s how they compose:

Scenario: A User Submits a Task

Event: Client calls task_create(description, acceptance_criteria)

Layer 1: Middleware (Category 4 - Serialization)

Request arrives at α (System Core)
  ↓ tool-lock
One tool at a time — no concurrent writes

  ↓ acl
Is caller authorized to create tasks?
Throw if not

  ↓ audit
Log this tool call before execution
Hash the result for tamper evidence

  ↓ Controller routing
Map to task creation function

  ↓ Domain handler
Create task record in database
Write to tasks table atomically

Layer 2: Task Pipeline (Category 3 - State Machines)

Task record created with status = INIT
  ↓ Transition to GATHER
Allocate workspace
Snapshot context
Write state transition atomically

  ↓ Transition to ANALYZE
Identify dependencies
Check preconditions
No skipped states

Layer 3: Thought Recording (Category 2 - Immutability)

Agent work produces thoughts
Each thought is a hash-chained record
Parent_chain_hash binds to previous thought
Chain becomes tamper-evident

Layer 4: Rule Evaluation (Category 1 - Determinism)

If task involves reputation changes:
  ↓ Rule engine runs
Guard conditions evaluated (deterministically)
Consequences computed in integer math (basis points)
No floating-point, no randomness, no clocks
All nodes compute identical results

Layer 5: Finality (Category 5 - Gates)

If task involves external side-effects (payment, notification):
  ↓ Check finality level
Wait for HARD finality before sending
Never send at SOFT or QUORUM

Layer 6: Merkle Tree (Category 2 - Immutability)

When task completes:
  ↓ Final thought_record written
Tree still accepting leaves

  ↓ merkle_finalize()
Lock the tree
All previous thoughts are now immutable proofs
Root hash is generated

Layer 7: Governance (Category 7 - Rate Limits)

If task involves rule changes:
  ↓ Check rule versioning
Was this rule changed in the last 6 months?
Is this change <= 10% of prior version?
Did it pass voting threshold?
Did it pass test corpus?

Result: Task is fully auditable, irreversible before finality, deterministic across nodes, tamper-evident, state-tracked, governed, and serialized.


Constraint Enforcement Summary Table

| Constraint | Enforced By | What It Prevents | Impact of Violation |
| --- | --- | --- | --- |
| Determinism | κ Rule Engine (pure function, integer math) | Non-deterministic state (floating-point, randomness, clocks) | Node divergence, consensus failure, network fork |
| Immutability | ζ Decision Trail + hash chains | Retroactive modification of history | Audit trail corruption, deniability, loss of truth |
| State Machines | β Task Pipeline (7-state FSM) | Tasks skipping states, looping, or getting stuck | Lost work, duplicated effort, undefined progress |
| Serialization | α Middleware (tool-lock, ACL, audit) | Concurrent database writes | Database corruption, lost transactions, unauthorized access |
| Finality Gates | η Proof Store + θ Consensus (HARD finality level) | Irreversible side-effects on reversible events | Orphaned payments, profitable fraud, trust collapse |
| Byzantine Tolerance | θ Consensus (quorum voting, equivocation detection) | Minority control, undetected faulty nodes | Network takeover, subjective history, impossible consensus |
| Governance Rate Limits | π Governance (vote thresholds, 6-month windows, test corpus) | Overnight rule changes, malicious coalitions | Unfair punishment, retroactive changes, governance capture |

Synthesis: Why All Seven Categories Are Necessary

Determinism alone is not enough:

  • Without immutability, nodes could alter history after computing consequences
  • Without serialization, concurrent writes would corrupt shared state
  • Without finality gates, side-effects would happen before being sure

Immutability alone is not enough:

  • Without determinism, nodes would disagree on consequences
  • Without state machines, changes would be chaotic and untracked
  • Without governance, history could be rewritten by decree

State machines alone are not enough:

  • Without serialization, state transitions would race
  • Without finality, state could be reversed after being used
  • Without determinism, different nodes would compute different states

Serialization alone is not enough:

  • Without immutability, writes could be hidden
  • Without determinism, different nodes would apply different rules
  • Without Byzantine tolerance, malicious nodes could poison state

All seven categories work together to create a system where:

  • Every decision is recorded and auditable (ζ Immutability)
  • Every node computes identically (κ Determinism)
  • Every task progresses predictably (β State Machines)
  • Every write is atomic and ordered (α Serialization)
  • Side-effects only happen when safe (η Finality Gates)
  • Lies are detectable (θ Byzantine Tolerance)
  • Changes require consent (π Governance)



Colibri — documentation-first MCP runtime. Apache 2.0 + Commons Clause.
