
CHI — Cognitive Host Interface

Mandatory introspection layer. Turns black-box agents into auditable, governable systems.


The Cognitive Host Interface is specified in ADR-004 of the ABS Core platform. It is the introspection layer that forces an AI agent to declare its intent, justify its next action, and expose its reasoning to the governance kernel — before any action is executed.

Without CHI, you have an agent that acts and a log that records what happened. With CHI, you have an agent that must explain itself first, and a cryptographic record that proves the explanation matched the allowed policy.

This is the difference between an audit trail and proof of intent — and it is what regulators and DPOs in critical sectors require.


The Core Mechanism

Before executing any tool call or write operation, the agent must produce a CHIAnalysis envelope:

interface CHIAnalysis {
  // What the agent intends to do, in plain language
  intent: string;

  // The agent's own risk self-assessment
  risk_assessment: {
    level: "low" | "medium" | "high";
    factors: string[];
  };

  // The specific next action about to be executed
  next_action: string;

  // Optional: memory context relevant to this decision
  memory_context?: string;
}

Example — financial transfer agent:

const analysis: CHIAnalysis = {
  intent: "Complete authorized PIX transfer of R$5,000 to supplier account per PO #4421",
  risk_assessment: {
    level: "high",
    factors: [
      "financial transaction",
      "external account",
      "amount exceeds R$1,000 threshold"
    ]
  },
  next_action: "POST /v1/pix/transfer",
  memory_context: "User confirmed via MFA at 14:22:01Z. PO #4421 validated against ERP."
};

The governance kernel validates this envelope against three checks before forwarding the action:

  1. Goal alignment: Does the declared intent match the session goal? A drift of intent from goal triggers an immediate BLOCK regardless of policy.
  2. Policy coherence: Does the declared risk match what the policy engine independently assesses? Underreported risk is treated as potential deception.
  3. Action verification: Does the next_action reference real entities (files, endpoints, agent IDs that exist)? Hallucinated references are blocked.
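The three checks above can be sketched as a single gate. This is an illustrative sketch, not the kernel's actual implementation: the helper names (`gate`, `policyRisk`, `knownEndpoints`) and the boolean drift input are assumptions.

```typescript
type Risk = "low" | "medium" | "high";

interface CHIAnalysis {
  intent: string;
  risk_assessment: { level: Risk; factors: string[] };
  next_action: string;
  memory_context?: string;
}

const RISK_ORDER: Record<Risk, number> = { low: 0, medium: 1, high: 2 };

// Hypothetical gate: all three checks must pass before the action is forwarded.
function gate(
  analysis: CHIAnalysis,
  sessionGoalAligned: boolean, // outcome of the (external) drift check
  policyRisk: Risk,            // the policy engine's independent assessment
  knownEndpoints: Set<string>  // entities that actually exist
): "FORWARD" | "BLOCK" {
  // 1. Goal alignment: intent drifting from the session goal blocks immediately.
  if (!sessionGoalAligned) return "BLOCK";

  // 2. Policy coherence: declared risk below the independent assessment
  //    is treated as potential deception.
  if (RISK_ORDER[analysis.risk_assessment.level] < RISK_ORDER[policyRisk]) {
    return "BLOCK";
  }

  // 3. Action verification: references to non-existent entities are blocked.
  const endpoint = analysis.next_action.split(" ").pop() ?? "";
  if (!knownEndpoints.has(endpoint)) return "BLOCK";

  return "FORWARD";
}
```

Note that check 2 only blocks *underreported* risk: an agent that declares a higher risk than the policy engine assesses is being conservative, not deceptive.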

Semantic Drift Detection

CHI continuously monitors the gap between the original session goal and the agent's current declared intent. This is drift.

An agent that starts with goal "Fix the authentication bug in /src/auth.ts" and whose next_action becomes "DELETE /db/users" has drifted catastrophically. CHI computes a semantic distance score and blocks when it exceeds the configured threshold.

graph LR
    GOAL["Session Goal: Fix auth bug"] --> CHI[CHI Drift Monitor]
    ACTION["Agent intent: DELETE /db/users"] --> CHI
    CHI -->|"Semantic distance: CRITICAL"| BLOCK[BLOCK + Alert]
    CHI -->|"Semantic distance: within threshold"| PASS[Forward to Policy Engine]

Drift thresholds are configurable per agent profile and per operation type. Financial operations default to the strictest threshold.
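CHI's embedding model and scoring function are internal, but the shape of the drift check can be sketched with cosine distance between two embedding vectors, however those vectors are produced. Everything below is illustrative.

```typescript
// Cosine distance: 0 for identical directions, 1 for orthogonal vectors.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical drift verdict: block when the declared intent's embedding
// has moved too far from the session goal's embedding.
function driftVerdict(
  goalVec: number[],
  intentVec: number[],
  threshold: number
): "PASS" | "BLOCK" {
  return cosineDistance(goalVec, intentVec) > threshold ? "BLOCK" : "PASS";
}
```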


Output Vaccines

CHI injects validation logic into the agent's output stream before it reaches any downstream system. These are called vaccines because they are preventive, not reactive.

PII Vaccine: Redacts emails, phone numbers, CPF/CNPJ, account numbers, API keys, and private keys before they leave the agent's output. Configurable per regex profile or LGPD/HIPAA preset.
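A minimal sketch of the redaction step, assuming a regex-profile approach. The patterns and the `[REDACTED:…]` token are illustrative, not the vaccine's actual profile; a production LGPD/HIPAA preset would be far more thorough.

```typescript
// Illustrative patterns only — not the shipped regex profile.
const PII_PATTERNS: [string, RegExp][] = [
  ["email",   /[\w.+-]+@[\w-]+\.[\w.]+/g],
  ["cpf",     /\b\d{3}\.\d{3}\.\d{3}-\d{2}\b/g],    // Brazilian CPF, e.g. 123.456.789-09
  ["api_key", /\b(?:sk|pk)_[A-Za-z0-9]{16,}\b/g]    // common secret-key prefix convention
];

// Replace each match with a labeled token before output leaves the agent.
function redact(output: string): string {
  let redacted = output;
  for (const [name, pattern] of PII_PATTERNS) {
    redacted = redacted.replace(pattern, `[REDACTED:${name}]`);
  }
  return redacted;
}
```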

System Prompt Leakage Vaccine: Detects if the agent is producing output that reveals its system instructions — a common exfiltration vector in jailbreak attempts.

Hallucination Vaccine: Validates that any file path, endpoint URL, or agent ID referenced in next_action actually exists in the known environment. Blocks calls to non-existent resources before they cause cascading errors.

Entropy Vaccine: Runs Shannon entropy analysis on the declared intent string. High entropy (>4.5 bits/char) in an intent declaration indicates potential obfuscated payload injection.
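The entropy check above can be sketched directly. The 4.5 bits/char threshold comes from the text; the implementation details are an assumption.

```typescript
// Shannon entropy of a string, in bits per character.
function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let h = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Flag intent strings whose entropy suggests an obfuscated payload.
function entropyVaccine(intent: string, threshold = 4.5): "PASS" | "FLAG" {
  return shannonEntropy(intent) > threshold ? "FLAG" : "PASS";
}
```

Natural-language intents sit well below the threshold (English prose is roughly 4 bits/char at the character level), while base64- or hex-encoded payloads push toward the ceiling of log2 of the alphabet size.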


Why This Matters for Regulated Sectors

A financial regulator asking "what did the agent do and why" cannot be satisfied by a log file that records what happened. They need proof of what was authorized and what the agent declared before acting.

CHI provides this. Every CHIAnalysis envelope is:

  • Included in the decision envelope that gets hashed into the forensic ledger (and anchored on Polygon L2 in Enterprise tier)
  • Linked to the specific policy version that was active at decision time
  • Signed by the governance kernel's key — not the agent's

This means a compliance officer can reconstruct, for any given transaction, exactly what the agent claimed it was doing, what the system verified, what policy ruled, and who signed off — all from the forensic ledger, and in Enterprise tier verifiable against public blockchain data.
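The verification side of that reconstruction can be sketched as recomputing the envelope hash and comparing it with the anchored value. The field selection and plain `JSON.stringify` serialization below are assumptions; a real scheme would use a canonical JSON encoding.

```typescript
import { createHash } from "node:crypto";

// Illustrative envelope shape — not the kernel's actual schema.
interface DecisionEnvelope {
  chi: object;            // the CHIAnalysis declared before the action
  policy_version: string; // policy active at decision time
  verdict: string;        // e.g. "ALLOW" or "BLOCK"
}

// Recompute the SHA-256 digest of the envelope.
function envelopeHash(envelope: DecisionEnvelope): string {
  return createHash("sha256").update(JSON.stringify(envelope)).digest("hex");
}

// Compare the recomputed digest with the hash anchored on the ledger.
function matchesLedger(envelope: DecisionEnvelope, anchoredHash: string): boolean {
  return envelopeHash(envelope) === anchoredHash;
}
```

Any tampering with the envelope after the fact — a changed verdict, a substituted policy version — produces a different digest and fails the comparison.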


Integration

CHI is enforced automatically when using the Magic Proxy. For agents using the ABS SDK, the abs.process() call triggers CHI evaluation:

import { ABSClient } from "@oconnector/sdk";

const abs = new ABSClient({
  endpoint: "https://api.abscore.app",
  apiKey: process.env.ABS_KEY!,
  tenantId: "my-tenant"
});

// Before any significant action:
const result = await abs.process({
  event_id: crypto.randomUUID(),
  tenant_id: "my-tenant",
  event_type: "agent.action",
  source: "payment-agent-01",
  payload: { amount: 5000, destination: "account-xyz" },
  chi: {
    intent: "Completing authorized transfer per user instruction ref #TXN-4421",
    risk_assessment: { level: "high", factors: ["financial transaction", "external account"] },
    next_action: "POST /v1/transfers"
  }
}, { sync: true });

if (result.envelope.verdict !== "ALLOW") {
  throw new Error(`Blocked: ${result.envelope.reason_human}`);
}

For multi-agent frameworks (CrewAI, LangGraph), CHI must be instrumented at the tool-call level inside each agent. The Magic Proxy alone does not capture inter-agent calls. See the CrewAI integration guide for the complete migration path and honest assessment of rewrite cost.


Trade-offs

CHI adds ~5–8ms per evaluation in local WASM mode and ~12–20ms via the edge proxy. For operations where the cost of a wrong action is high — financial transactions, PHI access, infrastructure changes — this overhead is justified. For high-frequency low-stakes operations (e.g., read-only queries), consider using CHI in ghost mode (analyze but do not block) and promoting to enforcement mode selectively.
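A per-profile configuration for the ghost-to-enforcement promotion might look like the fragment below. The field names and pattern syntax are hypothetical — they are not the documented ABS configuration schema.

```typescript
// Hypothetical CHI profile: ghost mode by default, enforcement on the
// operations where a wrong action is expensive.
const chiProfile = {
  default_mode: "ghost",                         // analyze and log, do not block
  overrides: [
    { operation: "POST /v1/pix/*",     mode: "enforce" }, // financial transfers
    { operation: "DELETE /*",          mode: "enforce" }, // destructive calls
    { operation: "GET /*",             mode: "ghost" }    // read-only queries
  ]
};
```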
