θ Consensus — Algorithm Extraction

Language-agnostic pseudocode extracted from Phoenix Python source. Source files: bft.py (1347L), finality.py (843L), threshold_sig.py (614L), vrf.py (664L), gossip.py (~500L).


1. BFT Quorum Math

Core Formulas

n = total validator nodes
f = max_byzantine_faults(n) = floor((n - 1) / 3)
quorum = quorum_size(n)     = floor(2 * n / 3) + 1

Safety property:      n >= 3f + 1 (tolerates up to f Byzantine nodes)
Quorum intersection:  any two quorums share >= f + 1 nodes, so at least one honest node is common to both (prevents conflicting commits)
Liveness property:    with <= f faulty nodes, the n - f >= quorum honest nodes can still assemble a quorum

Examples

n     f (max faults)   quorum required
4     1                3
7     2                5
10    3                7
13    4                9
100   33               67
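A minimal Python sketch of the quorum formulas above; the function names mirror the pseudocode, and the loop checks the quorum-intersection property for a range of network sizes.

```python
def max_byzantine_faults(n: int) -> int:
    """f = floor((n - 1) / 3): most Byzantine faults tolerable with n validators."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """floor(2n/3) + 1: sized so any two quorums overlap in >= f + 1 nodes."""
    return (2 * n) // 3 + 1

# Sanity check: two quorums of size q drawn from n nodes overlap in at
# least 2q - n nodes, which must exceed f to guarantee an honest witness.
for n in range(4, 200):
    assert 2 * quorum_size(n) - n >= max_byzantine_faults(n) + 1
```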

Message Types

enum MessageType: PROPOSE, VOTE, COMMIT, VIEW_CHANGE, CHECKPOINT

struct Message:
    sender:    PublicKey
    type:      MessageType
    round:     int64
    payload:   bytes
    signature: Ed25519Signature
    timestamp: int64        // logical, not wall-clock

enum VoteType: ACCEPT, REJECT, ABSTAIN

2. Equivocation Detection

Equivocation occurs when a node sends conflicting votes for the same round. The two signed messages together constitute provable Byzantine behavior.

function detect_equivocation(vote_log: Map<NodeId, Vote[]>) -> EquivocationProof | null:
    for node_id, votes in vote_log:
        for each pair (v1, v2) where v1.round == v2.round:
            if v1.payload != v2.payload:
                return EquivocationProof {
                    node_id:   node_id,
                    round:     v1.round,
                    evidence1: v1,          // includes Ed25519 signature
                    evidence2: v2,
                }
    return null

struct EquivocationProof:
    node_id:   PublicKey
    round:     int64
    evidence1: Message      // first conflicting vote (signed)
    evidence2: Message      // second conflicting vote (signed)

Slashing: Proven equivocator is removed from validator set and arbitration pool.
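The detection loop above can be sketched in Python. Votes are modeled as plain dicts and signature checks are elided; the dict shape is illustrative, not the Phoenix wire format.

```python
from itertools import combinations
from typing import Optional

def detect_equivocation(vote_log: dict) -> Optional[dict]:
    """Scan a {node_id: [vote, ...]} log for two votes that share a round
    but carry different payloads; return an equivocation proof or None.

    Each vote is a dict with at least "round" and "payload" keys; in the
    real system both votes carry Ed25519 signatures, making the proof
    self-authenticating.
    """
    for node_id, votes in vote_log.items():
        for v1, v2 in combinations(votes, 2):
            if v1["round"] == v2["round"] and v1["payload"] != v2["payload"]:
                return {"node_id": node_id, "round": v1["round"],
                        "evidence1": v1, "evidence2": v2}
    return None
```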


3. View-Change Protocol

Triggered when the current leader times out without producing a PROPOSE.

VIEW_CHANGE_TIMEOUT_MS = 5000

function trigger_view_change(current_view: int, reason: string):
    broadcast ViewChangeMessage {
        from:         my_key,
        current_view: current_view,
        new_view:     current_view + 1,
        reason:       reason,
        last_known:   my_last_finalized_hash,
        signature:    sign(my_key, hash(current_view || new_view || last_known)),
    }

function collect_view_change(target_view: int, messages: ViewChangeMessage[]):
    // Wait for a quorum of VIEW_CHANGE messages for the same new_view
    if count(messages where msg.new_view == target_view) >= quorum:
        new_leader = select_leader(target_view)
        if new_leader == me:
            broadcast NewViewMessage {
                view:        target_view,
                view_proofs: messages[0..quorum],   // quorum VIEW_CHANGE messages
                proposal:    build_proposal(),
            }

function select_leader(view: int) -> NodeId:
    // Round-robin weighted by reputation
    return sorted_validators[view % len(sorted_validators)]
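A runnable sketch of the round-robin leader rule. The reputation weighting mentioned in the comment is omitted here; this shows only the deterministic rotation, which every node must compute identically.

```python
def select_leader(view: int, validators: list) -> str:
    """Deterministic round-robin: all nodes sort the validator set the same
    way, so everyone agrees on the leader for a given view number."""
    ordered = sorted(validators)          # canonical ordering shared by all nodes
    return ordered[view % len(ordered)]
```

Because the rotation depends only on `view`, a failed view-change simply increments the view and moves to the next validator in the cycle.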

4. Finality State Machine

States

PENDING --votes start--> SOFT --votes >= quorum--> QUORUM --100 epochs elapsed--> HARD --appeal window elapsed--> ABSOLUTE

Transitions are monotonic — a state never decreases.

enum FinalityLevel:
    PENDING  = 0    // Proposed, not yet processed
    SOFT     = 1    // >= 1 vote received; locally validated; disputable
    QUORUM   = 2    // votes >= quorum_threshold; dispute may open
    HARD     = 3    // 100 epochs elapsed, no open dispute; payments/exports unlock
    ABSOLUTE = 4    // Appeal window elapsed; constitutional protection; never reversed

Transition Conditions

From → To         Condition                                     Side Effects Allowed              Reversible
PENDING → SOFT    First valid vote received                     None                              Yes
SOFT → QUORUM     vote_count >= quorum_size(n)                  Dispute can open                  Yes (via dispute)
QUORUM → HARD     100 epochs pass, no open dispute              Payments, token grants, exports   No (except appeal)
HARD → ABSOLUTE   Appeal window elapsed (default: 200 epochs)   All side effects                  Never

FinalityProof Structure

struct FinalityProof:
    event_hash:    bytes32         // SHA-256 of the event
    level:         FinalityLevel
    votes:         FinalityVote[]  // collected voter signatures
    quorum_size:   int
    finalized_at:  int64           // epoch number
    proof_hash:    bytes32         // SHA-256(event_hash || level || votes_root)

struct FinalityVote:
    voter:     PublicKey
    event:     bytes32
    level:     FinalityLevel
    epoch:     int64
    signature: Ed25519Signature

FinalityGadget Algorithm

class FinalityGadget:
    votes_by_event: Map<bytes32, FinalityVote[]>

    function receive_vote(vote: FinalityVote) -> FinalityLevel:
        verify_signature(vote.voter, vote)
        votes_by_event[vote.event].append(vote)
        return compute_level(vote.event)

    function compute_level(event_hash: bytes32) -> FinalityLevel:
        votes = votes_by_event[event_hash]
        count = len(unique_voters(votes))
        if count == 0:      return PENDING
        if count < quorum:  return SOFT
        return QUORUM
        // HARD and ABSOLUTE advancement is time-gated (epoch counter), not vote-gated

    function advance_to_hard(event_hash: bytes32, current_epoch: int64):
        record = finality_store[event_hash]
        if record.level == QUORUM:
            if current_epoch >= record.quorum_epoch + 100:
                if not has_open_dispute(event_hash):
                    record.level = HARD

    function advance_to_absolute(event_hash: bytes32, current_epoch: int64):
        record = finality_store[event_hash]
        if record.level == HARD:
            if current_epoch >= record.hard_epoch + APPEAL_WINDOW:
                record.level = ABSOLUTE
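The vote-counting half of the gadget can be condensed into a small Python class. Storage, signature verification, and the epoch-gated HARD/ABSOLUTE transitions are elided; voter sets deduplicate repeat votes, matching `unique_voters` above.

```python
from collections import defaultdict

PENDING, SOFT, QUORUM, HARD, ABSOLUTE = 0, 1, 2, 3, 4

class FinalityGadget:
    """Tracks unique voters per event and maps the count to a finality level."""

    def __init__(self, quorum: int):
        self.quorum = quorum
        self.votes_by_event = defaultdict(set)   # event_hash -> set of voter ids

    def receive_vote(self, event: str, voter: str) -> int:
        # Signature verification elided in this sketch.
        self.votes_by_event[event].add(voter)    # set dedupes repeat votes
        return self.compute_level(event)

    def compute_level(self, event: str) -> int:
        count = len(self.votes_by_event[event])
        if count == 0:
            return PENDING
        if count < self.quorum:
            return SOFT
        return QUORUM   # HARD/ABSOLUTE advance on epoch timers, not votes
```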

5. Threshold Signature: Shamir k-of-n

Used for checkpoint signing: 7-of-10 arbiters must co-sign a checkpoint.

Shamir Secret Sharing

PRIME = large prime near 2^256   // e.g., the secp256k1 field prime 2^256 - 2^32 - 977

function shamir_split(secret: int, n: int, k: int) -> Share[]:
    // Create polynomial of degree k-1 with secret as constant term
    coefficients = [secret] + [random_int(1, PRIME-1) for _ in range(k-1)]

    shares = []
    for i in range(1, n+1):
        x = i
        y = evaluate_polynomial(coefficients, x, PRIME)
        shares.append(Share { x: x, y: y })
    return shares

function evaluate_polynomial(coeffs: int[], x: int, p: int) -> int:
    result = 0
    for i, coeff in enumerate(coeffs):
        result = (result + coeff * pow(x, i, p)) % p
    return result

function shamir_reconstruct(shares: Share[], k: int) -> int:
    // Lagrange interpolation at x=0
    assert len(shares) >= k
    shares = shares[:k]
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = 1; den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
        lagrange = (num * modular_inverse(den, PRIME)) % PRIME
        secret = (secret + yi * lagrange) % PRIME
    return secret

function modular_inverse(a: int, p: int) -> int:
    return pow(a, p-2, p)   // Fermat's little theorem (requires p prime, a not divisible by p)
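The split/reconstruct pair above translates almost directly to Python. This sketch uses a Mersenne prime as a stand-in field modulus (the real code uses a ~2^256 prime); share points are `(x, y)` tuples.

```python
import random

PRIME = 2**127 - 1   # stand-in field prime for demonstration

def shamir_split(secret: int, n: int, k: int) -> list:
    """Split `secret` into n shares; any k reconstruct it (degree k-1 polynomial)."""
    coeffs = [secret] + [random.randrange(1, PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def shamir_reconstruct(shares: list) -> int:
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME          # numerator: prod(0 - xj)
                den = den * (xi - xj) % PRIME    # denominator: prod(xi - xj)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any 7 of 10 shares recover the secret; 6 or fewer reveal nothing about it.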

Distributed Key Generation (DKG) — Dealer-less

// Step 1: Each participant i generates a random polynomial and broadcasts commitments
function dkg_round1(my_id: int, n: int, k: int):
    my_poly = random_polynomial(degree=k-1)
    commitments = [multiply_generator(coeff) for coeff in my_poly]
    shares_for_others = { j: evaluate(my_poly, j) for j in range(1, n+1) }
    broadcast DKGCommitment { sender: my_id, commitments: commitments }
    for j in peers: send_private(j, shares_for_others[j])

// Step 2: Each participant verifies received shares against commitments
function dkg_round2(received_shares: Map<NodeId, int>, received_commitments: Map<NodeId, list>):
    for sender, share in received_shares:
        expected = sum_commitments(received_commitments[sender], my_id)
        actual = multiply_generator(share)
        if actual != expected:
            broadcast DKGComplaint { accuser: my_id, accused: sender }

// Step 3: Assemble local private key share
function dkg_finalize(valid_shares: int[]) -> PrivateKeyShare:
    return sum(valid_shares) % PRIME   // aggregate all valid shares in the field
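The share-verification step in round 2 is a Feldman-style check: commitments to each polynomial coefficient let a recipient verify their share without learning the polynomial. This toy sketch uses exponentiation in a multiplicative group mod a prime as a stand-in for the elliptic-curve group the real DKG runs over; `P`, `G`, and all function names are illustrative.

```python
import random

P = 2**127 - 1   # toy prime modulus; real DKG uses an elliptic-curve group
G = 5            # toy generator standing in for the curve base point

def random_polynomial(k: int) -> list:
    """k coefficients of a degree k-1 polynomial."""
    return [random.randrange(1, 2**64) for _ in range(k)]

def commitments(poly: list) -> list:
    """Feldman commitment to each coefficient: C_i = G^{a_i} mod P."""
    return [pow(G, c, P) for c in poly]

def share_for(poly: list, j: int) -> int:
    """Share for participant j: f(j), here over the integers for simplicity."""
    return sum(c * j**i for i, c in enumerate(poly))

def verify_share(share: int, comms: list, j: int) -> bool:
    """Check G^share == prod_i C_i^{j^i}, i.e. the share lies on the
    committed polynomial -- the test behind DKGComplaint in round 2."""
    expected = 1
    for i, C in enumerate(comms):
        expected = expected * pow(C, j**i, P) % P
    return pow(G, share, P) == expected
```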

// Checkpoint signing (7/10 rule)
CHECKPOINT_THRESHOLD = 7
CHECKPOINT_TOTAL_SIGNERS = 10

function sign_checkpoint(checkpoint: CheckpointData, my_share: PrivateKeyShare) -> PartialSig:
    return partial_sign(checkpoint.hash, my_share)

function aggregate_checkpoint(partial_sigs: PartialSig[]) -> ThresholdSig:
    assert len(partial_sigs) >= CHECKPOINT_THRESHOLD
    return lagrange_aggregate(partial_sigs[:CHECKPOINT_THRESHOLD])

6. VRF Arbiter Selection

Based on ECVRF (RFC 9381). Deterministic but unpredictable without the private key.

// VRF Proof Generation
function vrf_prove(secret_key: Ed25519PrivKey, input: bytes) -> (output: bytes32, proof: VRFProof):
    gamma = hash_to_curve(input)                   // hash input to curve point
    gamma_k = scalar_multiply(gamma, secret_key)   // key × curve point
    proof = dleq_proof(secret_key, gamma, gamma_k) // zero-knowledge equality proof
    output = hash(gamma_k)                         // deterministic output
    return (output, proof)

// VRF Proof Verification
function vrf_verify(public_key: Ed25519PubKey, input: bytes, output: bytes32, proof: VRFProof) -> bool:
    gamma = hash_to_curve(input)
    return verify_dleq(public_key, gamma, proof) AND hash(proof.gamma_k) == output

// Arbiter Selection Algorithm
ARBITERS_REQUIRED = { 0: 3, 1: 5, 2: 7 }   // L0=3, L1=5, L2=7 per dispute level

function select_arbiters(event_id: bytes32, epoch: int64, eligible_nodes: Node[], level: int) -> Node[]:
    seed = hash(event_id || epoch)
    count = ARBITERS_REQUIRED[level]

    scored = []
    for node in eligible_nodes:
        output, proof = vrf_prove(node.secret_key, seed)
        // In verification phase, verifier uses: vrf_verify(node.public_key, seed, output, proof)
        scored.append((output, node))

    filtered = anti_collusion_filter(scored, level)
    sorted_by_output = sort(filtered, key=lambda x: x.output)
    return [node for _, node in sorted_by_output[:count]]

function anti_collusion_filter(scored: (bytes32, Node)[], level: int) -> (bytes32, Node)[]:
    // Cap: max 30% of selected arbiters from the same controlling entity
    cap = ceil(0.3 * ARBITERS_REQUIRED[level])
    result = []
    entity_counts = {}
    for output, node in sorted(scored):
        entity = node.controlling_entity
        if entity_counts.get(entity, 0) < cap:
            entity_counts[entity] = entity_counts.get(entity, 0) + 1
            result.append((output, node))
    return result
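The selection pipeline can be sketched end to end in Python. A keyed SHA-256 hash stands in for the real ECVRF (which additionally yields a publicly verifiable proof); node ids, the `(node_id, entity)` tuple shape, and the `pseudo_vrf` helper are all illustrative.

```python
import hashlib
from math import ceil

ARBITERS_REQUIRED = {0: 3, 1: 5, 2: 7}   # per dispute level

def pseudo_vrf(node_id: str, seed: bytes) -> bytes:
    # Placeholder for ECVRF output: deterministic per (node, seed),
    # but in the real system unpredictable without the node's secret key.
    return hashlib.sha256(seed + node_id.encode()).digest()

def select_arbiters(event_id: bytes, epoch: int, nodes: list, level: int) -> list:
    """nodes: list of (node_id, controlling_entity) tuples.
    Sort by VRF output, then apply the 30% same-entity cap while filling."""
    seed = hashlib.sha256(event_id + epoch.to_bytes(8, "big")).digest()
    count = ARBITERS_REQUIRED[level]
    cap = ceil(0.3 * count)
    scored = sorted((pseudo_vrf(nid, seed), nid, ent) for nid, ent in nodes)
    picked, entity_counts = [], {}
    for _, nid, ent in scored:
        if entity_counts.get(ent, 0) < cap:
            entity_counts[ent] = entity_counts.get(ent, 0) + 1
            picked.append(nid)
        if len(picked) == count:
            break
    return picked
```

Every node computes the same seed from public data, so the selection is reproducible and auditable once the VRF proofs are published.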

7. Gossip Protocol

Message Types

struct IHaveMessage:
    sender:       PublicKey
    event_ids:    bytes32[]     // events this node has
    state_root:   bytes32       // current Merkle root
    rule_version: string        // rule set version hash
    fork_id:      bytes32       // current fork identifier

struct IWantMessage:
    sender:    PublicKey
    event_ids: bytes32[]        // events being requested

struct GraftMessage:
    sender: PublicKey
    topic:  string              // subscribe to eager push for topic

struct PruneMessage:
    sender: PublicKey
    topic:  string              // downgrade to lazy push for topic

Triple-Anchor Validation

Before processing any gossip message:

function validate_gossip_anchor(msg: IHaveMessage, local_state: State) -> bool:
    if msg.rule_version != local_state.rule_version:
        log("Rejected: rule version mismatch")
        return false
    if not is_valid_ancestor(msg.state_root, local_state.state_root):
        log("Rejected: state root not ancestor or current")
        return false
    if msg.fork_id != local_state.fork_id:
        log("Rejected: fork ID mismatch — possible fork divergence")
        return false
    return true
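A direct Python rendering of the triple-anchor check. Messages and local state are modeled as dicts, and the ancestry test is injected as a callable; all names are illustrative.

```python
from typing import Callable

def validate_gossip_anchor(msg: dict, local: dict,
                           is_valid_ancestor: Callable[[str, str], bool]) -> bool:
    """Reject gossip unless rule version, state-root lineage, and fork id
    all match the local node's view (the three anchors)."""
    if msg["rule_version"] != local["rule_version"]:
        return False   # rule version mismatch
    if not is_valid_ancestor(msg["state_root"], local["state_root"]):
        return False   # state root neither current nor an ancestor
    return msg["fork_id"] == local["fork_id"]
```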

Bloom Filter Deduplication

// Bloom filter parameters: < 1% false positive rate, reset every 1000 events
BLOOM_RESET_INTERVAL = 1000

struct BloomFilter:
    bits:       bit_array[size]
    hash_count: int

function bloom_add(filter: BloomFilter, event_id: bytes32):
    for i in range(filter.hash_count):
        pos = hash_i(event_id, i) % len(filter.bits)
        filter.bits[pos] = 1

function bloom_check(filter: BloomFilter, event_id: bytes32) -> bool:
    for i in range(filter.hash_count):
        pos = hash_i(event_id, i) % len(filter.bits)
        if filter.bits[pos] == 0: return false
    return true   // probably seen; may be false positive

function should_process_event(filter: BloomFilter, event_id: bytes32) -> bool:
    if bloom_check(filter, event_id): return false   // already seen
    bloom_add(filter, event_id)
    return true
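A self-contained Python Bloom filter matching the pseudocode, using salted SHA-256 as the family of hash functions. The default sizing (8192 bits, 7 hashes) is an illustrative choice, not taken from the source; the reset-every-1000-events rule would be layered on top.

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 8192, hash_count: int = 7):
        self.size = size_bits
        self.hash_count = hash_count
        self.bits = bytearray(size_bits // 8)

    def _positions(self, event_id: bytes):
        # Derive hash_count positions by salting SHA-256 with the index.
        for i in range(self.hash_count):
            digest = hashlib.sha256(i.to_bytes(4, "big") + event_id).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, event_id: bytes) -> None:
        for pos in self._positions(event_id):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def check(self, event_id: bytes) -> bool:
        # True means "probably seen" -- false positives are possible,
        # false negatives are not.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(event_id))

def should_process_event(f: BloomFilter, event_id: bytes) -> bool:
    if f.check(event_id):
        return False   # already seen (or a false positive)
    f.add(event_id)
    return True
```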

Adaptive Fanout

function compute_fanout(network_size: int, connectivity_pct: float) -> int:
    base = ceil(log2(network_size))
    if connectivity_pct > 0.50: return base // 2     // well-connected: reduce
    if connectivity_pct < 0.10: return base * 2      // sparse: increase
    return base                                       // normal

// TTL-based expiry
MAX_TTL = 10   // hops before message is dropped
function forward_gossip(msg: GossipMessage) -> GossipMessage | null:
    if msg.ttl <= 0: return null
    return { ...msg, ttl: msg.ttl - 1 }
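The fanout and TTL rules above as runnable Python; messages are plain dicts for illustration.

```python
from math import ceil, log2
from typing import Optional

MAX_TTL = 10   # hops before a message is dropped

def compute_fanout(network_size: int, connectivity_pct: float) -> int:
    """Baseline log2(n) fanout, halved on dense networks, doubled on sparse ones."""
    base = ceil(log2(network_size))
    if connectivity_pct > 0.50:
        return max(1, base // 2)   # well-connected: fewer pushes suffice
    if connectivity_pct < 0.10:
        return base * 2            # sparse: push harder to reach everyone
    return base

def forward_gossip(msg: dict) -> Optional[dict]:
    """Decrement TTL on each hop; drop the message once it reaches zero."""
    if msg["ttl"] <= 0:
        return None
    return {**msg, "ttl": msg["ttl"] - 1}
```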

8. Ed25519 Crypto: EventSigner Pattern

// EventSigner wraps Ed25519 for structured event signing
struct EventSigner:
    private_key: Ed25519PrivKey
    public_key:  Ed25519PubKey

function sign_event(signer: EventSigner, event: Event) -> SignedEvent:
    payload = serialize_canonical(event)   // deterministic byte encoding
    signature = ed25519_sign(signer.private_key, payload)
    return SignedEvent {
        event:     event,
        signature: signature,
        signer:    signer.public_key,
    }

function verify_event(signed: SignedEvent) -> bool:
    payload = serialize_canonical(signed.event)
    return ed25519_verify(signed.signer, payload, signed.signature)

// Key derivation from seed phrase
function derive_key(seed_phrase: string) -> (PrivKey, PubKey):
    seed_bytes = pbkdf2(seed_phrase, salt="phoenix-v1", iterations=100000)
    return ed25519_from_seed(seed_bytes)
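The load-bearing detail in the EventSigner pattern is canonical serialization: both signer and verifier must derive identical bytes from the same event. This sketch shows that with sorted-key JSON; HMAC-SHA256 stands in for Ed25519 (which is not in the Python standard library), so the signing calls are structural stand-ins, not the real primitive.

```python
import hashlib
import hmac
import json

def serialize_canonical(event: dict) -> bytes:
    """Deterministic encoding: sorted keys, fixed separators, no whitespace
    variance -- the same event always yields the same bytes."""
    return json.dumps(event, sort_keys=True, separators=(",", ":")).encode()

def sign_event(secret: bytes, event: dict) -> dict:
    # HMAC-SHA256 as a stand-in for ed25519_sign in this sketch.
    payload = serialize_canonical(event)
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"event": event, "signature": sig}

def verify_event(secret: bytes, signed: dict) -> bool:
    payload = serialize_canonical(signed["event"])
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```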

9. Fork Detection

enum ForkDetectionResult:
    ALREADY_FORKED        // peer is on different fork
    IN_SYNC               // identical state root
    PEER_BEHIND           // peer has older state
    I_AM_BEHIND           // local node has older state
    UNPLANNED_FORK        // divergence detected — trigger fork protocol

function detect_fork(peer_fork: bytes32, peer_root: bytes32,
                     my_fork: bytes32, my_root: bytes32) -> ForkDetectionResult:
    if peer_fork != my_fork:           return ALREADY_FORKED
    if peer_root == my_root:           return IN_SYNC
    if is_ancestor(peer_root, my_root): return PEER_BEHIND
    if is_ancestor(my_root, peer_root): return I_AM_BEHIND
    return UNPLANNED_FORK              // trigger fork creation
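The decision cascade above in Python, with ancestry resolved against a simple child-to-parent map (a stand-in for the real Merkle-history lookup).

```python
from typing import Optional

def is_ancestor(maybe_ancestor: str, descendant: str, parent_of: dict) -> bool:
    """Walk the parent chain of `descendant` looking for `maybe_ancestor`."""
    cur: Optional[str] = descendant
    while cur is not None:
        if cur == maybe_ancestor:
            return True
        cur = parent_of.get(cur)
    return False

def detect_fork(peer_fork: str, peer_root: str,
                my_fork: str, my_root: str, parent_of: dict) -> str:
    if peer_fork != my_fork:
        return "ALREADY_FORKED"
    if peer_root == my_root:
        return "IN_SYNC"
    if is_ancestor(peer_root, my_root, parent_of):
        return "PEER_BEHIND"
    if is_ancestor(my_root, peer_root, parent_of):
        return "I_AM_BEHIND"
    return "UNPLANNED_FORK"   # divergence: trigger the fork protocol
```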

10. Time Anchors

struct TimeAnchor:
    publisher:    PublicKey
    timestamp_ms: int64       // wall-clock milliseconds
    epoch:        int64       // logical epoch number
    signature:    Ed25519Signature

// Rules:
// - Only high-reputation arbiters may publish anchors
// - Median: collect recent valid anchors, return median timestamp_ms
// - Drift: if |local_time - median_anchor| > 30_000ms → deprioritize own proposals
// - Monotonicity: anchors from same publisher must be non-decreasing
// - Replay protection: reject anchors where anchor.epoch < current_epoch - 10
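The anchor rules above can be sketched as three small functions; anchors are dicts, signature and reputation checks are elided, and the mutable `last_ts_by_publisher` map stands in for persisted per-publisher state.

```python
from statistics import median

DRIFT_LIMIT_MS = 30_000   # max tolerated |local - median| before deprioritizing
REPLAY_WINDOW = 10        # anchors older than this many epochs are rejected

def accept_anchor(anchor: dict, current_epoch: int,
                  last_ts_by_publisher: dict) -> bool:
    """Apply the replay-protection and per-publisher monotonicity rules."""
    if anchor["epoch"] < current_epoch - REPLAY_WINDOW:
        return False   # replay protection
    last = last_ts_by_publisher.get(anchor["publisher"])
    if last is not None and anchor["timestamp_ms"] < last:
        return False   # timestamps from one publisher must be non-decreasing
    last_ts_by_publisher[anchor["publisher"]] = anchor["timestamp_ms"]
    return True

def network_time(valid_anchors: list) -> int:
    """Median of recent valid anchors resists a minority of bad clocks."""
    return int(median(a["timestamp_ms"] for a in valid_anchors))

def should_deprioritize(local_time_ms: int, valid_anchors: list) -> bool:
    return abs(local_time_ms - network_time(valid_anchors)) > DRIFT_LIMIT_MS
```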

11. Partition Handling

During partition:
- Finality pauses at SOFT (cannot reach quorum without enough peers)
- Events accumulate locally with SOFT finality

After partition heal:
- Gossip resumes; node exchanges IHave messages
- No conflict detected → natural Merkle merge
- Conflict detected → fork detection → fork creation protocol (see ι)

12. Dependencies

Module          Interaction
κ Rule Engine   Validates deterministic event processing per rule set
λ Reputation    Gates arbiter eligibility; leader selection weight
ι State Fork    Fork creation when consensus permanently fails
η Proof Store   Merkle proofs for finalized state transitions
μ Integrity     Sentinel scans for consensus manipulation