Sovereign Agentic OS

The Wire Protocol for
Machine Cognition

HLF — Hyper-Lingua Franca — is a formal language for autonomous AI agents to communicate, govern, and audit each other. Not for humans to write. For humans to understand.

934+ Tests Passing
v0.4.0 Grammar Version
86% Token Compression
13 Architecture Layers

The Problem Nobody Solved

Every AI agent today speaks in one of two ways — and both are broken.

Unstructured natural language is expensive, ambiguous, and impossible to audit. When GPT talks to Claude talks to Ollama, there is no shared protocol. Each interaction is a bespoke, one-off prompt. There is no type system. There is no security model. There is no way to know — after the fact — exactly what was said, why it was said, or whether it was allowed.

Rigid JSON schemas are verbose, brittle, and provide no governance. A malicious or confused agent can embed arbitrary instructions inside a JSON payload, and the receiving system has no formal way to validate intent beyond key-existence checks.

💡 The Core Insight
HLF answers the question: "What if agent-to-agent communication had the rigor of a compiled programming language, the compression of hieroglyphs, and the auditability of a constitutional legal system?"

What HLF Actually Is

A context-free grammar, parsed by an LALR(1) compiler, that compresses agent-to-agent instructions into a structured, auditable, governable format.

🔴 JSON (~160 tokens)

{
  "action": "deploy",
  "target": "production",
  "constraints": {
    "max_ram": "4GB",
    "require_tests": true,
    "model_whitelist": ["qwen3-30b-a3b"]
  },
  "metadata": {
    "request_id": "01HZ...",
    "timestamp": "2026-03-01T...",
    "source_agent": "orchestrator"
  }
}
~160 tokens • no validation • no governance

🟢 HLF (~22 tokens)

[INTENT] deploy target="production"
[CONSTRAINT] ram≤4GB, tests=true
[EXPECT] model="qwen3-30b-a3b"
Ω
~22 tokens • LALR(1) parsed • 6-gate security • 86% smaller

In a 5-agent swarm making round-trip decisions, that's 615-775 tokens saved per exchange. At scale, real money and real latency savings. But compression is the least interesting thing about HLF.


Governance Built Into the Grammar

The revolutionary idea is not that HLF is compact. It's that every HLF message passes through a 6-gate security pipeline that neither natural language nor JSON can replicate.

Gate 1 · validate_hlf() · Structural regex gate
Gate 2 · hlfc.compile() · LALR(1) parse + types
Gate 3 · hlflint.lint() · Token + gas budget
Gate 4 · ALIGN Ledger · Governance enforcement
Gate 5 · Gas Meter · Per-intent limit
Gate 6 · Nonce Check · Replay protection

If an agent sends "rm -rf / please" in JSON, it sails through. In HLF, it fails at Gate 1 — it doesn't even parse. If an agent uses a banned model → Gate 4. Floods intents → Gate 5. Replays old commands → Gate 6.
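The gate chain above can be sketched as a short-circuiting pipeline. This is an illustrative Python sketch, not the real modules: the function names and regex are hypothetical stand-ins for validate_hlf(), hlflint.lint(), the gas meter, and the nonce check.

```python
import re

# Gate 1 stand-in: a toy structural regex — every line must be a
# bracketed statement or the Ω terminator (assumed simplification).
STATEMENT_RE = re.compile(r"^\[(INTENT|CONSTRAINT|EXPECT|ACTION)\]|^Ω$")

def gate1_validate(msg: str) -> bool:
    """Gate 1: reject anything that is not structurally HLF."""
    return all(STATEMENT_RE.match(line) for line in msg.strip().splitlines())

def run_gates(msg: str, seen_nonces: set, nonce: str, gas_used: int,
              gas_limit: int = 100) -> tuple[bool, str]:
    """Chain the gates; the first failure short-circuits with its gate name."""
    if not gate1_validate(msg):
        return False, "Gate 1: does not parse"
    if gas_used > gas_limit:
        return False, "Gate 5: gas limit exceeded"
    if nonce in seen_nonces:
        return False, "Gate 6: replayed nonce"
    seen_nonces.add(nonce)
    return True, "accepted"

ok, why = run_gates('[INTENT] deploy target="production"\nΩ', set(), "n1", 10)
bad, why_bad = run_gates("rm -rf / please", set(), "n2", 10)
```

A shell-injection string never reaches Gates 2-6: it fails the structural check before any semantics are evaluated, which is the point of putting the cheapest gate first.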

🔮 The Transparency Mandate
Every AST node carries a human_readable field. This is not optional. This is not a comment. It is a structural requirement of the grammar. The machines don't need it. It exists because the humans who own these systems have an inalienable right to understand what their machines are deciding.
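A minimal sketch of how such a structural requirement can be enforced at the type level. The ASTNode class and its fields are assumptions for illustration, not the real AST shape: the point is that construction fails without a non-empty human_readable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ASTNode:
    """Illustrative AST node: human_readable is structurally mandatory."""
    kind: str
    args: dict
    human_readable: str

    def __post_init__(self):
        # Transparency Mandate: an empty explanation is a build error,
        # not a lint warning.
        if not self.human_readable.strip():
            raise ValueError("Transparency Mandate: human_readable is required")

node = ASTNode("INTENT", {"target": "production"},
               "Deploy the service to production")

try:
    ASTNode("ACTION", {}, "")   # no explanation → construction rejected
    rejected = False
except ValueError:
    rejected = True
```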

The 13-Layer Architecture

HLF is the language. The Sovereign Agentic OS is the organism it lives in.

graph TB
    L1["🔧 Layer 1: Physical — ACFS
Agent-Centric File System
Git worktree isolation"]
    L2["🛡️ Layer 2: Kernel Security
ALIGN Ledger (read-only)
Sentinel Gate enforcement"]
    L3["🔮 Layer 3: HLF Language
Compiler, Runtime, Linter
format_correction feedback loop"]
    L4["⚡ Layer 4: Agent Services
Pipeline → Registry → Router → Executor
MoMA routing, gas metering"]
    L5["💾 Layer 5: Bytecode VM
Stack-machine compiler, .hlb format
32-instruction opcode set"]
    L6["🧠 Layer 6: Memory Matrix
Infinite RAG, Dream State
SQLite WAL + MCP bridge"]
    L7["📡 Layer 7: Communication
Gateway Bus, Event Bus
MCP Server auto-launch"]
    L8["🤖 Layer 8: Agent Orchestration
PlanExecutor → SpindleDAG
CodeAgent, BuildAgent"]
    L9["🛠️ Layer 9: Tool Ecosystem
install/uninstall/upgrade/audit
12-point CoVE gate, lockfiles"]
    L10["🖥️ Layer 10: Applications
GUI Cognitive SOC
C-SOC dark mode dashboard"]
    L11["📋 Layer 11: Governance
14-Hat Review Matrix
CoVE terminal validation"]
    L12["👁️ Layer 12: Observability
OpenLLMetry tracing
Hat findings persistence"]
    L13["🎯 Layer 13: Meta-Governance
Weaver recursive self-improvement
Anti-reductionist mandate"]

    L1 --> L2 --> L3 --> L4 --> L5
    L5 --> L6 --> L7 --> L8
    L8 --> L9 --> L10
    L11 -. audits .-> L3
    L11 -. audits .-> L8
    L12 -. monitors .-> L4
    L12 -. monitors .-> L8
    L13 -. evolves .-> L11

    style L1 fill:#0d1f3c,stroke:#58a6ff,color:#e6edf3
    style L2 fill:#2d1111,stroke:#f85149,color:#e6edf3
    style L3 fill:#1f1433,stroke:#bc8cff,color:#e6edf3
    style L4 fill:#0f2626,stroke:#39d0d8,color:#e6edf3
    style L5 fill:#1a1a33,stroke:#a371f7,color:#e6edf3
    style L6 fill:#1a2211,stroke:#56d364,color:#e6edf3
    style L7 fill:#22110d,stroke:#f0883e,color:#e6edf3
    style L8 fill:#0d2233,stroke:#79c0ff,color:#e6edf3
    style L9 fill:#221122,stroke:#d2a8ff,color:#e6edf3
    style L10 fill:#112211,stroke:#3fb950,color:#e6edf3
    style L11 fill:#2d2200,stroke:#d29922,color:#e6edf3
    style L12 fill:#2d1125,stroke:#f778ba,color:#e6edf3
    style L13 fill:#112233,stroke:#58a6ff,color:#e6edf3
1
Physical — Agent-Centric File System
Cryptographic trust boundaries, container topology, resource isolation
Defines how agents own files, how cryptographic trust boundaries work, and how the physical deployment topology is structured. Every agent service runs in its own Docker container with configurable resource limits. The ACFS ensures agents cannot access each other's filesystems without explicit cryptographic authorization.
2
Kernel Security — The ALIGN Ledger
Read-only crypto-signed governance, sentinel_gate.py enforcement
The immune system. A read-only, crypto-signed governance ledger mounted in every container. Rules R-001 through R-008 cover: no PII leakage, no raw subprocess calls, no banned models, no self-modification of governance. sentinel_gate.py enforces with 403 rejections. No exceptions. No overrides except human approval.
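A sketch of what sentinel-gate enforcement can look like. The rule IDs come from the text; the rule logic, ban list, and HTTP-style return shape are assumptions for illustration, not the real sentinel_gate.py.

```python
# Hypothetical ban list and rule predicates — illustrative only.
BANNED_MODELS = {"unvetted-model"}

RULES = {
    "R-002": lambda intent: "subprocess" not in intent.get("action", ""),
    "R-003": lambda intent: intent.get("model") not in BANNED_MODELS,
}

def sentinel_gate(intent: dict) -> tuple[int, str]:
    """Return an HTTP-style status: 403 on the first violated rule."""
    for rule_id, check in sorted(RULES.items()):
        if not check(intent):
            return 403, f"{rule_id} violation"
    return 200, "ok"

status, _ = sentinel_gate({"action": "deploy", "model": "qwen3-30b-a3b"})
blocked, reason = sentinel_gate({"action": "deploy", "model": "unvetted-model"})
```

The key design property mirrored here: the gate consults a rule table it cannot modify, and rejection is a flat 403 with the rule ID, so audits can trace every denial to a specific governance rule.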
3
The HLF Language
hlfc.py compiler, hlffmt.py formatter, hlflint.py linter, hlfrun.py runtime
The nervous system. hlfc.py (684 lines) — Lark LALR(1) compiler producing JSON AST. hlffmt.py — Canonical formatter. hlflint.py — Token budget + gas enforcement. hlfrun.py — Tier-aware runtime with 5 built-in functions and 7 host function stubs. format_correction() — Self-correcting feedback loop: invalid HLF gets error details + full operator catalog + suggested fix. The agent corrects itself. No human intervention.
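The self-correcting loop can be sketched as compile-catch-correct-retry. compile_hlf and the toy "missing Ω" fix below are hypothetical stand-ins; the real format_correction() returns error details plus the full operator catalog.

```python
def compile_hlf(src: str) -> dict:
    """Toy compiler stand-in: only checks for the Ω terminator."""
    if not src.rstrip().endswith("Ω"):
        raise SyntaxError("missing Ω terminator")
    return {"ast": src.strip().splitlines()}

def format_correction(src: str, error: str) -> str:
    """Return a suggested fix the agent can apply (illustrative rule)."""
    if "missing Ω terminator" in error:
        return src.rstrip() + "\nΩ"
    return src

def compile_with_feedback(src: str, max_retries: int = 2) -> dict:
    """The agent corrects itself: no human in the loop until retries run out."""
    for _ in range(max_retries + 1):
        try:
            return compile_hlf(src)
        except SyntaxError as exc:
            src = format_correction(src, str(exc))
    raise RuntimeError("could not self-correct")

ast = compile_with_feedback('[INTENT] deploy target="production"')
```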
4
Agent Services — The Pipeline
Pipeline → Registry → Router → Executor with MoMA routing
The circulatory system. Four modules in a chain: Pipeline (intent ingestion), Registry (SQL model registry), Router (MoMA — Mixture of Model Agents: visual→qwen3-vl, code→qwen-max, simple→qwen:7b, with dynamic downshifting), Executor (dispatches to Ollama or OpenRouter with circuit-breakers and timeouts).
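A sketch of MoMA-style routing with downshifting. The intent-to-model map comes from the text; the fallback order and availability check are assumptions.

```python
# Mapping taken from the text; the DOWNSHIFT chain is an assumed order.
ROUTES = {"visual": "qwen3-vl", "code": "qwen-max", "simple": "qwen:7b"}
DOWNSHIFT = ["qwen-max", "qwen:7b"]

def route(intent_kind: str, available: set[str]) -> str:
    """Pick the mapped model, downshifting when it is unavailable."""
    preferred = ROUTES.get(intent_kind, "qwen:7b")
    if preferred in available:
        return preferred
    for model in DOWNSHIFT:          # dynamic downshifting
        if model in available:
            return model
    raise RuntimeError("no model available")

primary = route("code", {"qwen-max", "qwen:7b"})
downshifted = route("code", {"qwen:7b"})   # qwen-max down → fall back
```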
10
Applications
Streamlit GUI, Tool Forge, Dream State, 14-Hat Engine (10 Named Agents)
The face. A Cognitive SOC (Streamlit dashboard) for real-time agent visualization. A Tool Forge for dynamic tool creation. A Dream State engine for subconscious background processing during idle times. A 14-Hat Aegis-Nexus Engine with 10 named cloud agents (Sentinel, Scribe, Arbiter, Synthesizer, Scout, Guardian, Operator, Compressor, Steward, CoVE) for structured multi-perspective analysis — each with tuned temperature and cloud model profiles. Plus the Weaver meta-agent for recursive self-improvement.
11
Governance & Audit
CoVE validation, cove_audit_results.md, 14-Hat Review Matrix
The conscience. CoVE (Chain of Verification and Evaluation) audits run against every PR — checking security, CI/CD, testing, dependencies, docs, config, infra, and governance. Findings classified P0 (critical) through P3 (nice-to-have). Stored in governance/cove_audit_results.md.
12
Observability
OpenLLMetry tracing, hat finding persistence, dream cycle telemetry
The senses. OpenLLMetry distributed tracing for every agent call. Hat findings persisted to SQLite for longitudinal analysis. Dream cycle telemetry. Every intent, every routing decision, every model call is logged and traceable end-to-end.
8
Agent Orchestration
PlanExecutor → SpindleDAG → CodeAgent/BuildAgent pipeline
The blueprint-to-code engine. PlanExecutor translates SDD plans into executable DAG nodes via SpindleDAG. CodeAgent handles file operations (create, modify, refactor, delete) within ACFS sandboxes. BuildAgent runs tests, linters, and syntax checks. Together they implement the Specify→Plan→Execute→Verify lifecycle with fail-fast error propagation and O(n) node lookups.
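The fail-fast, dependency-ordered execution can be sketched with the standard-library topological sorter. Node and edge shapes here are illustrative, not the SpindleDAG API.

```python
from graphlib import TopologicalSorter

def execute_dag(nodes: dict, deps: dict) -> list[str]:
    """Run nodes in dependency order; the first failure aborts the run."""
    order = list(TopologicalSorter(deps).static_order())
    done = []
    for name in order:
        if not nodes[name]():          # each node is a callable step
            raise RuntimeError(f"fail-fast: node {name!r} failed")
        done.append(name)
    return done

# The Specify→Plan→Execute→Verify lifecycle as a four-node chain.
steps = {"specify": lambda: True, "plan": lambda: True,
         "execute": lambda: True, "verify": lambda: True}
deps = {"plan": {"specify"}, "execute": {"plan"}, "verify": {"execute"}}
completed = execute_dag(steps, deps)
```

Keeping nodes in a dict gives constant-time lookup per node, so a full run stays linear in the number of nodes.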
9
Tool Ecosystem
hlf install/uninstall/upgrade/health/audit — 12-point CoVE verification gate
The package manager for agent tools. tool_installer.py handles downloading, verifying, and installing tools from registries. tool_scaffold.py generates hlf new-tool project templates. tool_dispatch.py provides lazy-load bridging to the τ() runtime dispatch. tool_lockfile.py ensures reproducible installs with SHA-256 integrity checks. tool_monitor.py runs health sweeps, gas tracking, and auto-revocation of compromised tools.
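The SHA-256 lockfile check can be sketched in a few lines. The lockfile layout is an assumption; only the SHA-256 pinning for reproducible installs comes from the text.

```python
import hashlib

def sha256_of(payload: bytes) -> str:
    """Hex digest used as the pinned integrity hash."""
    return hashlib.sha256(payload).hexdigest()

def verify_against_lockfile(lockfile: dict, name: str, payload: bytes) -> bool:
    """A tool installs only if its digest matches the pinned hash."""
    return lockfile.get(name) == sha256_of(payload)

tool_bytes = b"def tau(): return 'dispatch'"
lock = {"my-tool": sha256_of(tool_bytes)}          # written at install time
ok = verify_against_lockfile(lock, "my-tool", tool_bytes)
tampered = verify_against_lockfile(lock, "my-tool", tool_bytes + b"#evil")
```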

The Grammar: v0.4.0

20+ statement types. 7 semantic glyphs. Two-pass compilation. Full epistemic modifiers.

Statement Types

Type | Purpose
[INTENT] | Declare an agent's goal
[CONSTRAINT] | Define boundaries and limits
[EXPECT] | Specify expected outcomes
[ACTION] | Trigger a concrete operation
[SET] | Immutable variable binding
[FUNCTION] | Pure built-in function call
[RESULT] | Error-code propagation
[MODULE] / [IMPORT] | Module system
[DATA] | Structured data payload
[ROUTE] / [DELEGATE] | Agent-to-agent routing & delegation
[IF] / [ELIF] / [ELSE] | Conditional logic
[MATH] / [EXEC] | Math operations & tool execution
[TYPE] / [CONCURRENT] / [REF] | Type annotations, parallelism, pass-by-reference
[BELIEVE] / [ASSUME] / [DOUBT] | Epistemic modifiers — agent confidence levels

Glyph System

Glyph | Name | Meaning
Ω | Omega | Program terminator
Δ | Delta | State diff — what changed
Ж | Zhe | Reasoning blocker / paradox flag
 | Gas | Computational budget marker
 | Command | System directive
∇ | Nabla | Gradient / drift detection
 | Join | Matrix / data join

Compilation Pipeline

flowchart LR
    A["📄 Source .hlf"] --> B["🔍 Lark LALR(1)"]
    B --> C["🌳 Parse Tree"]
    C --> D["⚙️ HLFTransformer"]
    D --> E["📦 Pass 1: Collect SET env"]
    E --> F["🔗 Pass 2: Expand vars"]
    F --> G["✅ JSON AST v0.4.0"]
    G --> H["🛡️ 6-Gate Pipeline"]
    H --> I["📡 Redis Stream"]

    style A fill:#1f1433,stroke:#bc8cff,color:#e6edf3
    style G fill:#112211,stroke:#3fb950,color:#e6edf3
    style I fill:#0d1f3c,stroke:#58a6ff,color:#e6edf3
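The two passes named in the pipeline (collect the SET environment, then expand variables) can be sketched directly. The statement shapes and $var syntax below are assumptions for illustration.

```python
import re

def compile_two_pass(lines: list[str]) -> list[str]:
    env: dict[str, str] = {}
    # Pass 1: collect immutable [SET] bindings into the environment.
    for line in lines:
        m = re.match(r'\[SET\]\s+(\w+)="([^"]*)"', line)
        if m:
            if m.group(1) in env:
                raise ValueError(f"immutable binding: {m.group(1)}")
            env[m.group(1)] = m.group(2)
    # Pass 2: expand $var references in every non-SET statement.
    return [re.sub(r"\$(\w+)", lambda m: env[m.group(1)], line)
            for line in lines if not line.startswith("[SET]")]

out = compile_two_pass(['[SET] env="production"',
                        '[INTENT] deploy target="$env"',
                        "Ω"])
```

Splitting collection from expansion is what lets a [SET] appear anywhere in the program while still being visible to every statement.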
            

Where We Are Right Now

March 2026 — 934+ tests passing, 20+ statement types, Bytecode VM (stack-machine + disassembler), Agent Orchestration Layer, Tool Ecosystem Pipeline operational.

Metric | Value | Status
Tests passing | 934+ | ✓ 100%
Grammar version | v0.4.0 | ✓ Live
Statement types | 20+ | ✓ All compiled
Bytecode VM | .hlb binary + disassembler | ✓ Shipped
Agent modules | 53 core modules | ✓ Active
Hat Engine | 14-Hat (10 named agents) | ✓ Live
Token compression | 83-86% vs JSON | ✓ Measured
Agent Orchestration | PlanExecutor + CodeAgent + BuildAgent | ✓ Integrated
Tool Ecosystem | install/uninstall/upgrade/health/audit | ✓ Shipped
Self-correction loop | format_correction() → bus.py | ✓ Integrated
Jules agents | Bolt / Palette / Sentinel | ⟳ Running daily

Where We're Going

From DSL to virtual machine. From chat demo to self-governing agent infrastructure.

gantt
    title HLF Roadmap
    dateFormat YYYY-MM
    axisFormat %b %Y

    section Core Language
        v0.4.0 Grammar Complete     :done, 2026-01, 2026-03
        Module Runtime              :done, 2026-03, 2026-04
        Host Function Registry      :done, 2026-04, 2026-05

    section Bytecode VM
        Stack-machine Compiler      :done, 2026-03, 2026-03
        .hlb Binary Format          :done, 2026-03, 2026-03
        RAG Opcodes                 :done, 2026-03, 2026-03
        Wasm Sandbox Integration    :2026-07, 2026-09

    section Agent Orchestration
        PlanExecutor + SpindleDAG   :done, 2026-03, 2026-03
        CodeAgent + BuildAgent      :done, 2026-03, 2026-03
        SDD Lifecycle Enforcement   :done, 2026-03, 2026-03

    section Tool Ecosystem
        tool_installer + CoVE Gate  :done, 2026-03, 2026-03
        Lockfile + Monitor          :done, 2026-03, 2026-03
        Scaffold + Dispatch         :done, 2026-03, 2026-03

    section Developer Experience
        LSP Server                  :2026-06, 2026-08
        HLF REPL                    :2026-07, 2026-08
        Package Manager             :2026-08, 2026-10

    section Infrastructure
        GUI Cognitive SOC           :done, 2026-03, 2026-06
        14-Hat Aegis-Nexus Engine   :done, 2026-03, 2026-03
        A2A Protocol Integration    :2026-08, 2026-12
            

The Bytecode VM ✅ (Shipped)

The compiler emits both a JSON AST and stack-machine bytecode: hlfc --emit-bytecode produces .hlb binaries with a 32-instruction opcode set, including RAG-specific opcodes (OP_RAG_QUERY, OP_RAG_STORE). A full disassembler is included for debugging. The VM executes instructions in a hermetically sealed stack machine with gas metering.
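A toy gas-metered stack machine shows the execution model. The real .hlb opcode set is far richer; the three opcodes here are illustrative, not part of the actual instruction set.

```python
def run_vm(program: list[tuple], gas_limit: int = 10):
    """Execute (opcode, *args) tuples; every instruction costs 1 gas."""
    stack, gas = [], 0
    for op, *args in program:
        gas += 1
        if gas > gas_limit:
            raise RuntimeError("out of gas")
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "HALT":
            return stack[-1] if stack else None
    return None

result = run_vm([("PUSH", 2), ("PUSH", 3), ("ADD",), ("HALT",)])

try:
    # A program longer than its gas budget is aborted mid-flight.
    run_vm([("PUSH", 1)] * 5 + [("HALT",)], gas_limit=3)
    metered = False
except RuntimeError:
    metered = True
```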

The Aegis-Nexus Engine ✅ (Live)

A 14-hat autonomous governance system with 10 named cloud agents, each tuned with role-specific temperature and token limits. Plus the Weaver meta-agent for recursive self-improvement:

Agent
🛡️ Sentinel
Security watchdog. ALIGN violation detection.
Agent
📜 Scribe
Record keeper. Immutable audit trail.
Agent
⚖️ Arbiter
Conflict resolver. Process orchestration.
Agent
🟣 Synthesizer
Cross-system fusion. Emergent insights.
Agent
🩵 Scout
Innovation explorer. Blue-sky proposals.
Agent
🟪 Guardian
Compliance auditor. ALIGN verification.
Agent
🟠 Operator
Deployment engineer. Container health.
Agent
🪨 Compressor
Waste eliminator. Token optimization.
🌍 The Ultimate Vision
A self-sovereign operating system for AI agents where: (1) agents communicate in a formal, auditable language, (2) every message is governed by immutable, crypto-signed rules, (3) every decision is transparent to humans, (4) the system self-corrects when agents err, (5) the system self-governs through 10+ specialized oversight agents with a Weaver meta-agent for recursive improvement, and (6) humans retain ultimate authority — not through constant supervision, but through structural guarantees embedded in the protocol itself.

The 14-Hat Agent Roster

10 named cloud agents + the Weaver meta-agent, each with a tuned temperature and specific focus area.

Agent
🔴 Sentinel
Red Hat · Security watchdog · temp 0.3
Agent
⚪ Scribe
White Hat · Data integrity · temp 0.1
Agent
🔵 Arbiter
Blue Hat · Process mediator · temp 0.2
Agent
🟣 Synthesizer
Indigo Hat · Cross-system fuser · temp 0.4
Agent
🩵 Scout
Cyan Hat · Innovation explorer · temp 0.6
Agent
🟪 Guardian
Purple Hat · Compliance auditor · temp 0.1
Agent
🟠 Operator
Orange Hat · Deployment engineer · temp 0.2
Agent
🪨 Compressor
Silver Hat · Waste eliminator · temp 0.2
Agent
💎 Steward
Azure Hat · MCP workflow integrity · temp 0.2
Agent
✨ CoVE
Gold Hat · Terminal QA authority · temp 0.1

The remaining 4 hats — ⬛ Black (Security Exploits), 🟡 Yellow (Synergies), 🟢 Green (Evolution), and 🕸️ Weaver (Meta-Governance) — use default model settings without named agent profiles. The Weaver operates as the recursive self-improvement meta-agent.

⚙️ How It Works
Each named agent is defined in config/agent_registry.json with a specific cloud model (kimi-k2.5:cloud), provider (cloud), and restrictions (temperature, max tokens). When the Dream State engine triggers a hat analysis cycle, hat_engine.py loads these profiles automatically via _load_agent_registry() and routes inference through _ollama_generate_v2() — the same cloud-routed pipeline all other OS agents use.
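Registry-driven profile loading can be sketched as follows. The JSON mirrors the fields named above (model, provider, temperature) with temperatures taken from the roster, but the exact schema of config/agent_registry.json and the max_tokens values are assumptions.

```python
import json

# Illustrative registry content — not the real config/agent_registry.json.
REGISTRY_JSON = """
{
  "Sentinel": {"model": "kimi-k2.5:cloud", "provider": "cloud",
               "temperature": 0.3, "max_tokens": 2048},
  "Scribe":   {"model": "kimi-k2.5:cloud", "provider": "cloud",
               "temperature": 0.1, "max_tokens": 2048}
}
"""

def load_agent_registry(raw: str) -> dict:
    """Parse and sanity-check per-agent inference profiles."""
    profiles = json.loads(raw)
    for name, p in profiles.items():
        if not 0.0 <= p["temperature"] <= 1.0:
            raise ValueError(f"bad temperature for {name}")
    return profiles

registry = load_agent_registry(REGISTRY_JSON)
sentinel = registry["Sentinel"]
```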