HLF — Hyper-Lingua Franca — is a formal language for autonomous AI agents to communicate, govern, and audit each other. Not for humans to write. For humans to understand.
Every AI agent today speaks in one of two ways — and both are broken.
Unstructured natural language is expensive, ambiguous, and impossible to audit. When GPT talks to Claude talks to Ollama, there is no shared protocol. Each interaction is a bespoke, one-off prompt. There is no type system. There is no security model. There is no way to know — after the fact — exactly what was said, why it was said, or whether it was allowed.
Rigid JSON schemas are verbose, brittle, and provide no governance. A malicious or confused agent can embed arbitrary instructions inside a JSON payload, and the receiving system has no formal way to validate intent beyond key-existence checks.
Under the hood, HLF is a context-free grammar, parsed by an LALR(1) compiler, that compresses agent-to-agent instructions into a structured, auditable, governable format.
The same instruction, first as a typical JSON payload:

```json
{
  "action": "deploy",
  "target": "production",
  "constraints": {
    "max_ram": "4GB",
    "require_tests": true,
    "model_whitelist": ["qwen3-30b-a3b"]
  },
  "metadata": {
    "request_id": "01HZ...",
    "timestamp": "2026-03-01T...",
    "source_agent": "orchestrator"
  }
}
```
And as HLF:

```hlf
[INTENT] deploy target="production"
[CONSTRAINT] ram≤4GB, tests=true
[EXPECT] model="qwen3-30b-a3b"
Ω
```
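The compression claim is easy to sanity-check. A minimal sketch, using character counts as a stand-in for tokens (real savings depend on the tokenizer, so the ratio here is illustrative, not the measured 83-86%):

```python
# Rough size comparison between the JSON payload and its HLF equivalent.
# Character counts are a proxy; actual token counts depend on the tokenizer.
import json

json_msg = json.dumps({
    "action": "deploy",
    "target": "production",
    "constraints": {"max_ram": "4GB", "require_tests": True,
                    "model_whitelist": ["qwen3-30b-a3b"]},
    "metadata": {"request_id": "01HZ...", "timestamp": "2026-03-01T...",
                 "source_agent": "orchestrator"},
})

hlf_msg = (
    '[INTENT] deploy target="production"\n'
    '[CONSTRAINT] ram≤4GB, tests=true\n'
    '[EXPECT] model="qwen3-30b-a3b"\n'
    'Ω'
)

ratio = 1 - len(hlf_msg) / len(json_msg)
print(f"JSON: {len(json_msg)} chars, HLF: {len(hlf_msg)} chars, saved {ratio:.0%}")
```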
In a 5-agent swarm making round-trip decisions, that's 615-775 tokens saved per exchange. At scale, real money and real latency savings. But compression is the least interesting thing about HLF.
The revolutionary idea is not that HLF is compact. It's that every HLF message passes through a 6-gate security pipeline impossible to replicate with natural language or JSON.
If an agent sends "rm -rf / please" in JSON, it sails through. In HLF, it fails at Gate 1 — it doesn't even parse. A banned model is rejected at Gate 4, an intent flood at Gate 5, a replayed command at Gate 6.
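A minimal sketch of the gate idea. Only Gates 1, 4, 5, and 6 are described above; the rate limit of 5 intents per second, the nonce field, and all internals here are illustrative assumptions, not the actual pipeline:

```python
# Toy 6-gate sketch. Gate numbers follow the text above; limits, field
# names (e.g. "nonce"), and data shapes are illustrative assumptions.
import time

class GateError(Exception): pass

ALLOWED_MODELS = {"qwen3-30b-a3b"}   # Gate 4 whitelist
SEEN_NONCES: set = set()             # Gate 6 replay cache
WINDOW: list = []                    # Gate 5 sliding window

def run_gates(msg: dict) -> dict:
    # Gate 1: structural parse — a raw shell string has no [INTENT] and dies here.
    if "intent" not in msg:
        raise GateError("Gate 1: does not parse as HLF")
    # Gate 4: model whitelist.
    if msg.get("model") not in ALLOWED_MODELS:
        raise GateError("Gate 4: model not whitelisted")
    # Gate 5: flood control — assumed limit of 5 intents per rolling second.
    now = time.monotonic()
    WINDOW[:] = [t for t in WINDOW if now - t < 1.0]
    if len(WINDOW) >= 5:
        raise GateError("Gate 5: intent flood")
    WINDOW.append(now)
    # Gate 6: replay protection via a per-message nonce.
    if msg["nonce"] in SEEN_NONCES:
        raise GateError("Gate 6: replayed command")
    SEEN_NONCES.add(msg["nonce"])
    return msg

approved = run_gates({"intent": "deploy", "model": "qwen3-30b-a3b", "nonce": "n1"})
```

Re-sending the same nonce, or naming a model outside the whitelist, raises a `GateError` instead of reaching the executor.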
Every HLF message carries a `human_readable` field. This is not optional. This is not a comment. It is a structural requirement of the grammar. The machines don't need it. It exists because the humans who own these systems have an inalienable right to understand what their machines are deciding.
HLF is the language. The Sovereign Agentic OS is the organism it lives in.
```mermaid
graph TB
L1["🔧 Layer 1: Physical — ACFS
Agent-Centric File System
Git worktree isolation"]
L2["🛡️ Layer 2: Kernel Security
ALIGN Ledger (read-only)
Sentinel Gate enforcement"]
L3["🔮 Layer 3: HLF Language
Compiler, Runtime, Linter
format_correction feedback loop"]
L4["⚡ Layer 4: Agent Services
Pipeline → Registry → Router → Executor
MoMA routing, gas metering"]
L5["💾 Layer 5: Bytecode VM
Stack-machine compiler, .hlb format
32-instruction opcode set"]
L6["🧠 Layer 6: Memory Matrix
Infinite RAG, Dream State
SQLite WAL + MCP bridge"]
L7["📡 Layer 7: Communication
Gateway Bus, Event Bus
MCP Server auto-launch"]
L8["🤖 Layer 8: Agent Orchestration
PlanExecutor → SpindleDAG
CodeAgent, BuildAgent"]
L9["🛠️ Layer 9: Tool Ecosystem
install/uninstall/upgrade/audit
12-point CoVE gate, lockfiles"]
L10["🖥️ Layer 10: Applications
GUI Cognitive SOC
C-SOC dark mode dashboard"]
L11["📋 Layer 11: Governance
14-Hat Review Matrix
CoVE terminal validation"]
L12["👁️ Layer 12: Observability
OpenLLMetry tracing
Hat findings persistence"]
L13["🎯 Layer 13: Meta-Governance
Weaver recursive self-improvement
Anti-reductionist mandate"]
L1 --> L2 --> L3 --> L4 --> L5
L5 --> L6 --> L7 --> L8
L8 --> L9 --> L10
L11 -. audits .-> L3
L11 -. audits .-> L8
L12 -. monitors .-> L4
L12 -. monitors .-> L8
L13 -. evolves .-> L11
style L1 fill:#0d1f3c,stroke:#58a6ff,color:#e6edf3
style L2 fill:#2d1111,stroke:#f85149,color:#e6edf3
style L3 fill:#1f1433,stroke:#bc8cff,color:#e6edf3
style L4 fill:#0f2626,stroke:#39d0d8,color:#e6edf3
style L5 fill:#1a1a33,stroke:#a371f7,color:#e6edf3
style L6 fill:#1a2211,stroke:#56d364,color:#e6edf3
style L7 fill:#22110d,stroke:#f0883e,color:#e6edf3
style L8 fill:#0d2233,stroke:#79c0ff,color:#e6edf3
style L9 fill:#221122,stroke:#d2a8ff,color:#e6edf3
style L10 fill:#112211,stroke:#3fb950,color:#e6edf3
style L11 fill:#2d2200,stroke:#d29922,color:#e6edf3
style L12 fill:#2d1125,stroke:#f778ba,color:#e6edf3
style L13 fill:#112233,stroke:#58a6ff,color:#e6edf3
```
`sentinel_gate.py` enforces the gates with 403 rejections. No exceptions, and no overrides except human approval. Audit findings are written to `governance/cove_audit_results.md`, and new tools are scaffolded from `.hlf` project templates. `tool_dispatch.py` provides lazy-load bridging to the τ() runtime dispatch. `tool_lockfile.py` ensures reproducible installs with SHA-256 integrity checks. `tool_monitor.py` runs health sweeps, gas tracking, and auto-revocation of compromised tools.

The language itself: 20 statement types, 7 semantic glyphs, two-pass compilation, full epistemic modifiers.
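The lockfile idea can be sketched in a few lines. This is a hedged illustration of SHA-256 pinning, not the actual `tool_lockfile.py` format; the lock-entry layout is an assumption:

```python
# Sketch of lockfile-style integrity checking: pin a SHA-256 digest at
# install time, verify it before every load. The lock-entry dict layout
# here is an assumption, not the real tool_lockfile.py schema.
import hashlib

def digest(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def verify(payload: bytes, lock_entry: dict) -> bool:
    """True only if the tool's bytes match the digest pinned in the lockfile."""
    return digest(payload) == lock_entry["sha256"]

tool_bytes = b"def run(): return 'ok'"
lock_entry = {"name": "demo-tool", "sha256": digest(tool_bytes)}

assert verify(tool_bytes, lock_entry)             # untampered install
assert not verify(tool_bytes + b"#", lock_entry)  # any byte change is rejected
```

Auto-revocation then reduces to: if `verify()` fails during a health sweep, pull the tool.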
| Type | Purpose |
|---|---|
| `[INTENT]` | Declare an agent's goal |
| `[CONSTRAINT]` | Define boundaries and limits |
| `[EXPECT]` | Specify expected outcomes |
| `[ACTION]` | Trigger a concrete operation |
| `[SET]` | Immutable variable binding |
| `[FUNCTION]` | Pure built-in function call |
| `[RESULT]` | Error-code propagation |
| `[MODULE]` / `[IMPORT]` | Module system |
| `[DATA]` | Structured data payload |
| `[ROUTE]` / `[DELEGATE]` | Agent-to-agent routing & delegation |
| `[IF]` / `[ELIF]` / `[ELSE]` | Conditional logic |
| `[MATH]` / `[EXEC]` | Math operations & tool execution |
| `[TYPE]` / `[CONCURRENT]` / `[REF]` | Type annotations, parallelism, pass-by-reference |
| `[BELIEVE]` / `[ASSUME]` / `[DOUBT]` | Epistemic modifiers — agent confidence levels |
| Glyph | Name | Meaning |
|---|---|---|
| Ω | Omega | Program terminator |
| Δ | Delta | State diff — what changed |
| Ж | Zhe | Reasoning blocker / paradox flag |
| ⩕ | Gas | Computational budget marker |
| ⌘ | Command | System directive |
| ∇ | Nabla | Gradient / drift detection |
| ⨝ | Join | Matrix / data join |
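Putting statements and glyphs together, a short program might read as follows. Only the `[INTENT]`/`[CONSTRAINT]`/`[EXPECT]`/`Ω` forms appear verbatim above; the `[SET]` and `[BELIEVE]` argument syntax and the `$env` expansion sigil are assumptions for illustration:

```hlf
[SET] env="production"
[INTENT] deploy target=$env
[CONSTRAINT] ram≤4GB, tests=true
[BELIEVE] build=stable
[EXPECT] model="qwen3-30b-a3b"
Ω
```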
```mermaid
flowchart LR
A["📄 Source .hlf"] --> B["🔍 Lark LALR(1)"]
B --> C["🌳 Parse Tree"]
C --> D["⚙️ HLFTransformer"]
D --> E["📦 Pass 1: Collect SET env"]
E --> F["🔗 Pass 2: Expand vars"]
F --> G["✅ JSON AST v0.4.0"]
G --> H["🛡️ 6-Gate Pipeline"]
H --> I["📡 Redis Stream"]
style A fill:#1f1433,stroke:#bc8cff,color:#e6edf3
style G fill:#112211,stroke:#3fb950,color:#e6edf3
style I fill:#0d1f3c,stroke:#58a6ff,color:#e6edf3
```
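The two passes in the pipeline above can be sketched in a few lines. This is a toy model, not the HLFTransformer: the statement-dict shape and the `$var` sigil are assumptions for illustration:

```python
# Toy two-pass compilation: Pass 1 collects [SET] bindings into an
# environment, Pass 2 expands $var references in every other statement.
# The dict shapes and the $ sigil are illustrative assumptions.
def compile_two_pass(statements: list[dict]) -> list[dict]:
    # Pass 1: collect immutable [SET] bindings.
    env = {s["name"]: s["value"] for s in statements if s["type"] == "SET"}
    # Pass 2: expand variable references everywhere else.
    out = []
    for s in statements:
        if s["type"] == "SET":
            continue
        args = {k: env.get(v[1:], v)
                if isinstance(v, str) and v.startswith("$") else v
                for k, v in s.get("args", {}).items()}
        out.append({**s, "args": args})
    return out

ast = compile_two_pass([
    {"type": "SET", "name": "env", "value": "production"},
    {"type": "INTENT", "args": {"action": "deploy", "target": "$env"}},
])
# The [INTENT]'s target now reads "production".
```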
March 2026 — 934+ tests passing, 20+ statement types, Bytecode VM (stack-machine + disassembler), Agent Orchestration Layer, Tool Ecosystem Pipeline operational.
| Metric | Value | Status |
|---|---|---|
| Tests passing | 934+ | ✓ 100% |
| Grammar version | v0.4.0 | ✓ Live |
| Statement types | 20+ | ✓ All compiled |
| Bytecode VM | .hlb binary + disassembler | ✓ Shipped |
| Agent modules | 53 core modules | ✓ Active |
| Hat Engine | 14-Hat (19 named agents) | ✓ Live |
| Token compression | 83-86% vs JSON | ✓ Measured |
| Agent Orchestration | PlanExecutor + CodeAgent + BuildAgent | ✓ Integrated |
| Tool Ecosystem | install/uninstall/upgrade/health/audit | ✓ Shipped |
| Self-correction loop | format_correction() → bus.py | ✓ Integrated |
| Jules agents | Bolt / Palette / Sentinel | ⟳ Running daily |
From DSL to virtual machine. From chat demo to self-governing agent infrastructure.
```mermaid
gantt
title HLF Roadmap
dateFormat YYYY-MM
axisFormat %b %Y
section Core Language
v0.4.0 Grammar Complete :done, 2026-01, 2026-03
Module Runtime :done, 2026-03, 2026-04
Host Function Registry :done, 2026-04, 2026-05
section Bytecode VM
Stack-machine Compiler :done, 2026-03, 2026-03
.hlb Binary Format :done, 2026-03, 2026-03
RAG Opcodes :done, 2026-03, 2026-03
Wasm Sandbox Integration :2026-07, 2026-09
section Agent Orchestration
PlanExecutor + SpindleDAG :done, 2026-03, 2026-03
CodeAgent + BuildAgent :done, 2026-03, 2026-03
SDD Lifecycle Enforcement :done, 2026-03, 2026-03
section Tool Ecosystem
tool_installer + CoVE Gate :done, 2026-03, 2026-03
Lockfile + Monitor :done, 2026-03, 2026-03
Scaffold + Dispatch :done, 2026-03, 2026-03
section Developer Experience
LSP Server :2026-06, 2026-08
HLF REPL :2026-07, 2026-08
Package Manager :2026-08, 2026-10
section Infrastructure
GUI Cognitive SOC :done, 2026-03, 2026-06
14-Hat Aegis-Nexus Engine :done, 2026-03, 2026-03
A2A Protocol Integration :2026-08, 2026-12
```
The compiler emits both a JSON AST and stack-machine bytecode — `hlfc --emit-bytecode` produces `.hlb` binaries with a 32-instruction opcode set, including RAG-specific opcodes (`OP_RAG_QUERY`, `OP_RAG_STORE`). A full disassembler is included for debugging. The VM executes instructions in a hermetically sealed stack machine with gas metering.
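A gas-metered stack machine is simple enough to sketch. The opcode names and gas costs below are illustrative assumptions, not the actual 32-instruction `.hlb` set:

```python
# Minimal stack machine with gas metering, in the spirit of the HLF VM.
# Opcodes and per-op gas costs are illustrative assumptions.
class OutOfGas(Exception): pass

GAS_COST = {"PUSH": 1, "ADD": 2, "MUL": 3, "HALT": 0}

def run(program: list[tuple], gas: int) -> list:
    stack: list = []
    for op, *args in program:
        gas -= GAS_COST[op]
        if gas < 0:
            raise OutOfGas(f"budget exhausted at {op}")  # hard stop, no refunds
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "HALT":
            break
    return stack

# (2 + 3) * 4 under a 10-gas budget: costs 1+1+2+1+3+0 = 8 gas.
result = run([("PUSH", 2), ("PUSH", 3), ("ADD",),
              ("PUSH", 4), ("MUL",), ("HALT",)], gas=10)
```

The same program under a 5-gas budget raises `OutOfGas` mid-execution, which is the point: a runaway agent burns its budget, not the host.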
A 14-hat autonomous governance system: 10 named cloud agents, each tuned with a role-specific temperature and token limit, plus the Weaver meta-agent for recursive self-improvement. Click any agent card for the full deep-dive.
The remaining 4 hats — ⬛ Black (Security Exploits), 🟡 Yellow (Synergies), 🟢 Green (Evolution), and 🕸️ Weaver (Meta-Governance) — use default model settings without named agent profiles. The Weaver operates as the recursive self-improvement meta-agent.
Each named hat is registered in `config/agent_registry.json` with a specific cloud model (`kimi-k2.5:cloud`), provider (`cloud`), and restrictions (temperature, max tokens). When the Dream State engine triggers a hat analysis cycle, `hat_engine.py` loads these profiles automatically via `_load_agent_registry()` and routes inference through `_ollama_generate_v2()` — the same cloud-routed pipeline all other OS agents use.
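A minimal sketch of what that registry load might look like. The profile fields `model`, `provider`, `temperature`, and `max_tokens` come from the description above; the hat name `red_hat` and everything else about the schema are assumptions:

```python
# Sketch of loading agent profiles from a registry like
# config/agent_registry.json. The "red_hat" entry and the exact schema
# are hypothetical; only the field names above are taken from the text.
import json

registry_json = '''
{
  "red_hat": {"model": "kimi-k2.5:cloud", "provider": "cloud",
              "temperature": 0.3, "max_tokens": 2048}
}
'''

def load_agent_registry(raw: str) -> dict:
    """Parse hat profiles, keeping only entries that name a routable model."""
    profiles = json.loads(raw)
    return {name: p for name, p in profiles.items() if "model" in p}

profiles = load_agent_registry(registry_json)
# A hat analysis cycle would now route inference with these per-hat settings.
```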