Intelligence Router — Function Reference

⚠ HERITAGE EXTRACTION — donor AMS intelligence router (Wave 8 quarantine)

This file extracts the donor AMS δ Model Router from projects/unified-mcp/src/intelligence/ (deleted R53). Per ADR-005, the δ Model Router is deferred to Phase 1.5: Phase 0 Colibri is Claude-only and ships no router, no Kimi/Codex backends, no intelligent_* tool family, no RequestClassifier, no RoutingModel, no scoring engine. The 12 routing patterns, the 22 intelligent_* tools, and the SQLite-backed decision recording on this page are donor accretion only.

Phase 0 truth: there is no src/intelligence/ directory and no Colibri code that imports from one. The δ concept doc is ../../concepts/δ-model-router.md and confirms the Phase 1.5 deferral.

Read this file as donor genealogy only.

Core Algorithm

The router uses a three-layer stack:

  1. RequestClassifier (classifier.js) — extracts features (intent, complexity, urgency, domain, entities) from free-text input using keyword scoring.
  2. RoutingModel (models/routing-model.js) — classifies the request against 12 named patterns (task_create, task_list, roadmap_list, analysis_rag, autonomous_run, etc.) by keyword matching + entity presence, then selects the top tool.
  3. IntelligentRouter / router-engine.js — scores all registered handlers using a 6-criteria weighted sum, caches high-confidence results (TTL 5 min), records decisions to SQLite, and learns from feedback.

```
request text
  → classifyRequest()           # keyword match against 12 patterns
  → calculateConfidence()       # adjust for context, historical accuracy
  → selectTool()                # A/B test check, preference override, threshold gate
  → IntelligentRouter.route()   # score all handlers, cache, record, return
```

Exported Functions

intelligentRoute(params): Promise<object>

File: src/intelligence/router.js Purpose: Public entry point. Routes a free-text request to the best tool. Algorithm:

  1. Validate params.request (string required).
  2. Check LRU cache by normalized cache key (toLowerCase, collapse whitespace, 100-char prefix + |pref:<preferred_tool>).
  3. classifyRequest(request, context) → scores against 12 patterns.
  4. calculateConfidence(classification, context) → 0–1 float.
  5. selectTool(classification, context.preferences) → selected tool + reason.
  6. If confidence >= 0.7 and no clarification needed → addToCache(key, result).
  7. recordRoutingDecision() → INSERT into intelligence_routing_decisions.
  8. Return the result with processing_ms.

Parameters: { request: string, context?: { preferences, historical_accuracy, session_id, preferred_tool } }

Returns: { success, routing: { selected, confidence, reason, alternatives, requires_clarification }, classification: { top_match, all_scores[0..4] }, suggestions, from_cache?, processing_ms }

Notes for rewrite: Cache key normalization is case-insensitive and whitespace-collapsed. The cache evicts least-recently-used entries beyond 1000 and expires entries after a 300 s TTL.
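The cache-key normalization in step 2 can be sketched as follows. This is an illustrative reconstruction, not the donor code; the function name and the conditional `|pref:` suffix (appended only when a preferred tool is given) are assumptions.

```javascript
// Illustrative reconstruction of the donor cache-key normalization.
// Hypothetical name; conditional "|pref:" suffix is an assumption.
function makeCacheKey(request, preferredTool) {
  const normalized = request
    .toLowerCase()          // case-insensitive
    .replace(/\s+/g, " ")   // collapse whitespace
    .trim()
    .slice(0, 100);         // 100-char prefix
  return preferredTool ? `${normalized}|pref:${preferredTool}` : normalized;
}
```

For example, `makeCacheKey("  Create   a TASK  ", "task_create")` yields `"create a task|pref:task_create"`.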

IntelligentRouter.route(request, context): Promise<object>

File: src/intelligence/router-engine.js Purpose: Advanced handler-registry router with per-handler scoring. Algorithm:

  1. Check in-memory cache (generateCacheKey).
  2. extractFeatures(request, context) via RequestClassifier.
  3. For each registered handler (in this.handlers), if available, call scoreHandler(handler, features, context).
  4. Sort handlers descending by score.overall.
  5. requires_clarification = bestMatch.confidence < minConfidence (default 0.5).
  6. Cache result if confidence >= 0.7.
  7. Record decision if learningEnabled.
  8. Return { handler, confidence, reasoning, features, alternatives[0..2], requires_clarification }.

Scoring weights:

| Criterion | Weight | Logic |
|---|---|---|
| capability | 0.35 | intent + domain match → 1.0 / 0.8 / 0.6 / 0.3 |
| availability | 0.15 | 1.0 if available, 0.0 if not |
| performance | 0.15 | latency score: <50 ms → 1.0, <100 ms → 0.9, <500 ms → 0.8, else max(0.3, 1 − (lat/500) × 0.5) |
| history | 0.20 | 1 − (errors/calls) + 0.1 recency boost (if used in the last hour) |
| domain | 0.10 | 1.0 exact domain match, 0.4 mismatch |
| urgency | 0.05 | 1.0 critical + prioritySupport, 0.9 high + fastLane, else 0.7 |

Notes for rewrite: Handlers registered via registerHandler(name, config) with fields: capabilities[], domains[], avgLatency, available, cost, reliability. Singleton via getIntelligentRouter().
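The weighted sum and latency curve from the scoring table can be sketched as follows. The weights and latency breakpoints come from this page; the function names and the shape of the per-criterion score object are assumptions.

```javascript
// Weights from the scoring table above; everything else is a sketch.
const WEIGHTS = {
  capability: 0.35,
  availability: 0.15,
  performance: 0.15,
  history: 0.20,
  domain: 0.10,
  urgency: 0.05,
};

// Latency curve from the "performance" row.
function latencyScore(avgLatencyMs) {
  if (avgLatencyMs < 50) return 1.0;
  if (avgLatencyMs < 100) return 0.9;
  if (avgLatencyMs < 500) return 0.8;
  return Math.max(0.3, 1 - (avgLatencyMs / 500) * 0.5);
}

// Weighted sum over per-criterion scores in [0, 1].
function overallScore(criterionScores) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [name, weight]) => sum + weight * (criterionScores[name] ?? 0),
    0
  );
}
```

Since the weights sum to 1.0, a handler that scores 1.0 on every criterion gets an overall score of 1.0.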


classifyRequest(request, context): object

File: src/intelligence/models/routing-model.js Purpose: Score a request against 12 hardcoded tool patterns. Algorithm:

  • For each pattern in REQUEST_PATTERNS (12 entries):
    • Multi-phrase keywords: +0.4 per match; single-word: +0.25 per match.
    • If >1 keyword matched: +0.15 × matchedKeywords bonus.
    • Entity context presence: +0.2 per matching entity key.
    • Special rule: task_create gets score ≥ 0.6 if “create” AND “task” appear independently.
  • Sort by score descending.
  • Return { top_match, alternatives[0..2], all_scores, classified_at }.

Patterns: task_create, task_list, task_update, roadmap_list, roadmap_progress, analysis_search, analysis_rag, audit_session, context_ensure, thought_record, workflow_run, autonomous_run.

Notes for rewrite: Thresholds vary by pattern, from 0.5 to 0.75; autonomous_run requires 0.75.
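The per-pattern scoring rules above can be sketched like this. The increments (+0.4 multi-phrase, +0.25 single word, +0.15 × matched bonus, +0.2 per entity) come from the list above; the pattern object shape and the function name are assumptions about the donor REQUEST_PATTERNS structure.

```javascript
// Sketch of the per-pattern keyword scoring. Pattern shape
// (keywords, entities, threshold) is an assumed structure.
function scorePattern(text, pattern, contextEntities = {}) {
  const lower = text.toLowerCase();
  let score = 0;
  let matched = 0;
  for (const kw of pattern.keywords) {
    if (lower.includes(kw)) {
      score += kw.includes(" ") ? 0.4 : 0.25; // multi-phrase vs single word
      matched += 1;
    }
  }
  if (matched > 1) score += 0.15 * matched;   // multi-keyword bonus
  for (const entity of pattern.entities ?? []) {
    if (entity in contextEntities) score += 0.2; // entity-context presence
  }
  return { score, matched, meets_threshold: score >= (pattern.threshold ?? 0.5) };
}
```

For "create a new task" against a pattern with keywords ["create task", "create", "task"], the two single-word matches plus the bonus give 0.25 + 0.25 + 0.15 × 2 = 0.8.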

selectTool(classification, preferences): object

File: src/intelligence/models/routing-model.js Purpose: Choose final tool from classification result, applying A/B tests and user preferences. Algorithm:

  1. If top_match.meets_threshold is false → return requires_clarification: true.
  2. If active A/B test for top_match.tool → assign variant (A/B at configured split %).
  3. If preferences.preferred_tools includes a high-scoring alternative (score > 0.5) → use that instead.
  4. Otherwise return top_match.tool.

Returns: { selected, confidence, reason, alternatives?, requires_clarification?, variant?, ab_test_id? }
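The selection order above (threshold gate, then A/B assignment, then preference override) might look like this minimal sketch. The abTests shape, the split_percent field, and the reason strings are assumptions; only the ordering and the score > 0.5 preference condition come from this page.

```javascript
// Sketch of the selection order: threshold gate -> A/B test -> preference.
function selectTool(classification, preferences = {}, abTests = {}) {
  const top = classification.top_match;
  if (!top.meets_threshold) {
    return { requires_clarification: true, alternatives: classification.alternatives };
  }
  const test = abTests[top.tool]; // assumed shape: { id, variant_tool, split_percent }
  if (test && Math.random() * 100 < test.split_percent) {
    return { selected: test.variant_tool, variant: "B", ab_test_id: test.id,
             confidence: top.score, reason: "ab_test" };
  }
  // Preference override: a preferred alternative scoring above 0.5 wins.
  const preferred = (classification.alternatives ?? []).find(
    (alt) => alt.score > 0.5 && (preferences.preferred_tools ?? []).includes(alt.tool)
  );
  if (preferred) {
    return { selected: preferred.tool, confidence: preferred.score, reason: "user_preference" };
  }
  return { selected: top.tool, confidence: top.score, reason: "top_match" };
}
```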

calculateConfidence(classification, context): number

File: src/intelligence/models/routing-model.js Purpose: Normalize and boost/deflate the raw classification score. Algorithm:

  • Start with top_match.score.
  • context.explicit_tool: +0.2 (capped at 1.0).
  • context.ambiguous_terms.length > 0: ×0.9.
  • context.historical_accuracy: average with the current score.

Returns: float 0–1, rounded to 3 decimal places.
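A direct transcription of the adjustments listed above, assuming they apply in the documented order:

```javascript
// Sketch of the confidence adjustments; order of application assumed.
function calculateConfidence(classification, context = {}) {
  let confidence = classification.top_match.score;
  if (context.explicit_tool) confidence = Math.min(1.0, confidence + 0.2); // explicit boost
  if ((context.ambiguous_terms ?? []).length > 0) confidence *= 0.9;       // ambiguity penalty
  if (typeof context.historical_accuracy === "number") {
    confidence = (confidence + context.historical_accuracy) / 2;           // historical average
  }
  return Math.round(confidence * 1000) / 1000; // 3 decimal places
}
```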

RequestClassifier.extractFeatures(request, context): object

File: src/intelligence/classifier.js Purpose: Full feature extraction combining all sub-classifiers. Sub-methods:

  • classifyIntent(text) — 8 intent categories (code_generation, debugging, refactoring, testing, documentation, analysis, planning, question). Score = sum of keyword matches (multi-word +0.4, single +0.25), capped at 1.0.
  • estimateComplexity(text) — 5 factors: length (word count), entities, constraints, dependencies, keywords. Weighted sum → simple/medium/complex/very_complex.
  • detectUrgency(text) — high indicators (+0.4 each), medium (+0.2), low (−0.1). Level thresholds: critical ≥ 0.6, high ≥ 0.3, medium ≥ 0.1.
  • identifyDomain(text) — 5 domains (frontend, backend, database, devops, security) via keyword lists. +0.3 per match, capped at 1.0. Falls back to “general” if score ≤ 0.2.
  • extractEntities(text) — regex extraction: files, functions, classes, URLs, emails, numbers.

Returns: { intent, intent_confidence, complexity, complexity_score, urgency, urgency_score, domain, domain_confidence, entities, raw, context }
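As one concrete sub-classifier, detectUrgency can be sketched as follows. Only the increments (+0.4 / +0.2 / −0.1) and the level thresholds come from this page; the keyword lists are placeholders, not the donor's.

```javascript
// Sketch of detectUrgency; keyword lists are illustrative placeholders.
const URGENCY_KEYWORDS = {
  high: ["urgent", "asap", "immediately", "blocker"],   // +0.4 each
  medium: ["soon", "today", "priority"],                // +0.2 each
  low: ["whenever", "eventually", "no rush"],           // -0.1 each
};

function detectUrgency(text) {
  const lower = text.toLowerCase();
  let score = 0;
  for (const kw of URGENCY_KEYWORDS.high) if (lower.includes(kw)) score += 0.4;
  for (const kw of URGENCY_KEYWORDS.medium) if (lower.includes(kw)) score += 0.2;
  for (const kw of URGENCY_KEYWORDS.low) if (lower.includes(kw)) score -= 0.1;
  const level =
    score >= 0.6 ? "critical" :   // thresholds from the doc
    score >= 0.3 ? "high" :
    score >= 0.1 ? "medium" : "low";
  return { level, score };
}
```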

ScoringEngine.score(options, criteria, weights): object

File: src/intelligence/scorer.js Purpose: Multi-criteria decision scoring. Algorithm:

  1. Normalize weights to sum to 1.
  2. For each option: for each criterion evaluate via evaluateCriterion() (types: boolean, number, inverse_number, enum, threshold, custom).
  3. Weighted sum → total_score.
  4. Sort descending.
  5. calculateConfidence(scores) → separation (top gap × 2), consistency (1 − stdDev), range (spread × 1.5), weighted 0.5/0.3/0.2.

Returns: { rankings[{rank, option, score, breakdown}], top_choice, confidence, alternatives }
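The confidence blend in step 5 might look like this sketch. The page gives only the three factors and the 0.5/0.3/0.2 weights; the per-factor clamping to [0, 1] and the single-option fallback are assumptions.

```javascript
// Sketch of the ranking-confidence blend: separation, consistency, range.
function rankingConfidence(sortedScores) {
  if (sortedScores.length < 2) return 1.0; // assumed fallback for one option
  const top = sortedScores[0];
  const separation = Math.min(1, (top - sortedScores[1]) * 2);      // top gap x 2
  const mean = sortedScores.reduce((a, b) => a + b, 0) / sortedScores.length;
  const variance =
    sortedScores.reduce((a, b) => a + (b - mean) ** 2, 0) / sortedScores.length;
  const consistency = Math.max(0, 1 - Math.sqrt(variance));          // 1 - stdDev
  const range = Math.min(1, (top - sortedScores[sortedScores.length - 1]) * 1.5); // spread x 1.5
  return 0.5 * separation + 0.3 * consistency + 0.2 * range;
}
```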

ScoringEngine.bayesianScore(prior, likelihoods): object

File: src/intelligence/scorer.js Purpose: Bayesian update P(H|E) = P(E|H)·P(H) / P(E). Algorithm:

  • Weighted average of likelihoods[].probability.
  • Posterior = (L×P) / ((L×P) + (1−L)×(1−P)).
  • 95% CI: posterior ± 1.96 × sqrt(posterior×(1−posterior)/n).
  • Evidence strength is graded from very_strong (posterior > 0.9, CI range < 0.2) down through weaker grades to insufficient.
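The update above, as a worked sketch. The posterior formula and 1.96 CI multiplier follow the bullets; the optional per-likelihood weight field and the use of the evidence count as n are assumptions.

```javascript
// Sketch of the Bayesian update: P(H|E) = P(E|H)*P(H) / P(E).
function bayesianScore(prior, likelihoods) {
  // Weighted average of likelihood probabilities (weight field assumed).
  const totalWeight = likelihoods.reduce((s, l) => s + (l.weight ?? 1), 0);
  const L =
    likelihoods.reduce((s, l) => s + l.probability * (l.weight ?? 1), 0) / totalWeight;
  // Posterior = (L*P) / ((L*P) + (1-L)*(1-P))
  const posterior = (L * prior) / (L * prior + (1 - L) * (1 - prior));
  // 95% CI half-width; n = number of evidence items (assumed).
  const n = likelihoods.length;
  const halfWidth = 1.96 * Math.sqrt((posterior * (1 - posterior)) / n);
  return {
    posterior,
    ci_95: [Math.max(0, posterior - halfWidth), Math.min(1, posterior + halfWidth)],
  };
}
```

With a uniform prior of 0.5 and a single likelihood of 0.8, the posterior is 0.4 / (0.4 + 0.1) = 0.8.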

Learning Functions

recordPattern(pattern): Promise<object>

File: src/intelligence/learner.js Purpose: Upsert (hash-deduplicated) pattern into intelligence_patterns table. Pattern hash = djb2 of ${type}:${JSON.stringify(action)}. Config: min 3 occurrences, decay 168 h, confidence threshold 0.7.
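The pattern hash can be sketched directly: this is the classic djb2 algorithm applied to the `${type}:${JSON.stringify(action)}` string the page describes. The unsigned 32-bit truncation is an assumption about how the donor stored the hash.

```javascript
// Classic djb2 string hash, truncated to unsigned 32-bit (assumed).
function djb2(str) {
  let hash = 5381;
  for (let i = 0; i < str.length; i++) {
    hash = (hash * 33 + str.charCodeAt(i)) >>> 0;
  }
  return hash;
}

// Pattern hash input format from the doc: `${type}:${JSON.stringify(action)}`.
function patternHash(type, action) {
  return djb2(`${type}:${JSON.stringify(action)}`);
}
```

Deterministic hashing is what makes the upsert deduplication work: the same (type, action) pair always maps to the same row.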

analyzePatterns(patternType?): Promise<object>

Queries patterns in last 168 h; identifies hot (success_rate ≥ 0.7) and cold (success_rate < 0.5, freq ≥ 5) patterns. Also runs co-occurrence sequence analysis (temporal proximity < 0.01 julianday, co-occurrence ≥ 3).

learnFromFeedback(request, predicted, actual, success): Promise<void>

File: src/intelligence/models/routing-model.js Inserts a row into intelligence_routing_feedback. If predicted !== actual and the actual tool succeeded, logs a learning message.


Recommendation Engine

RecommendationEngine.recommendTools(context, history): Promise<object>

File: src/intelligence/recommender.js Blends 4 strategies:

| Strategy | Weight | Method |
|---|---|---|
| collaborative | 0.30 | Query intelligence_patterns for most-used tools across all users |
| content-based | 0.30 | Hardcoded tool-pair sequences (e.g., task_create → task_list) |
| popularity | 0.20 | Count of routing decisions over the last 7 days |
| contextual | 0.20 | Context-field heuristics (recent_tools, goal keywords) |

Returns top 5 by combined score.
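The four-strategy blend might combine per-strategy score maps like this. Only the weights come from the table above; the strategy key names and the map shape are assumptions.

```javascript
// Weights from the strategy table; key names are assumed.
const STRATEGY_WEIGHTS = {
  collaborative: 0.3,
  content_based: 0.3,
  popularity: 0.2,
  contextual: 0.2,
};

// Blend per-strategy { tool: score } maps into a top-5 ranking.
function blendRecommendations(strategyScores) {
  const combined = {};
  for (const [strategy, weight] of Object.entries(STRATEGY_WEIGHTS)) {
    for (const [tool, score] of Object.entries(strategyScores[strategy] ?? {})) {
      combined[tool] = (combined[tool] ?? 0) + weight * score;
    }
  }
  return Object.entries(combined)
    .sort((a, b) => b[1] - a[1]) // descending by combined score
    .slice(0, 5)                 // top 5, per the doc
    .map(([tool, score]) => ({ tool, score }));
}
```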


Configuration

| Config key | Default | Purpose |
|---|---|---|
| CACHE_MAX_SIZE (router) | 1000 entries | LRU eviction threshold |
| CACHE_TTL_MS (router) | 300000 ms | Cache time-to-live |
| minConfidence (IntelligentRouter) | 0.5 | Below this → requires_clarification: true |
| learningEnabled (IntelligentRouter) | true | Record decisions to DB |
| LEARNING_CONFIG.min_pattern_occurrences | 3 | Minimum occurrences before a pattern is “real” |
| LEARNING_CONFIG.pattern_decay_hours | 168 | 1-week window for analysis |
| LEARNING_CONFIG.confidence_threshold | 0.7 | “Hot” pattern threshold |
| LEARNING_CONFIG.max_patterns_per_user | 100 | In-memory cap per user |

Database Tables

  • intelligence_routing_decisions — (request_hash, selected_tool, confidence, timestamp, context)
  • intelligence_routing_feedback — (request_text, predicted_tool, actual_tool, success, duration_ms, timestamp)
  • intelligence_patterns — (pattern_hash, user_id, pattern_type, context_snapshot, action_taken, outcome, occurrence_count, first_seen, last_seen, success_count, metadata)
  • intelligence_model_weights — (weights JSON, updated_at)


Colibri — documentation-first MCP runtime. Apache 2.0 + Commons Clause.
