Architecture

Luminarys is built around a host runtime that loads, validates, and executes skills in isolated sandboxes. This page describes the major components and how they interact.

Host runtime components

The host binary (luminarys) contains the following subsystems:

Sandbox engine

The embedded engine compiles and executes skill binaries in isolated sandboxes. Skills written in Go, Rust, AssemblyScript — or any other language that compiles to WebAssembly — run in a portable, sandboxed environment. Compiled modules are cached on disk to avoid redundant compilation on subsequent starts.

ABI layer

The Application Binary Interface (ABI) is the only bridge between a skill and the outside world. Skills cannot access host resources directly — every operation (file I/O, HTTP requests, shell commands, TCP connections) goes through ABI functions exported by the host. Each ABI call passes through the permission manager before execution.

Permission manager

Every ABI call is intercepted by the permission manager, which evaluates it against the skill's declared permissions from its manifest:

  • File system — allowed directories, read/write modes, glob patterns
  • HTTP — URL allowlists with wildcard matching
  • TCP — host:port allowlists, DNS-aware filtering
  • Shell — command allowlists, working directory restrictions
  • Inter-skill invocation — which skills may call which other skills
  • File transfer — allowed nodes and directories for cross-node transfers

If a permission check fails, the ABI call returns an error to the skill. The skill never gains direct access to the denied resource.

Orchestrator

The orchestrator manages the lifecycle of all loaded skills and routes invocation requests. It maintains a registry of skills with their metadata (methods, parameters, descriptions) and dispatches incoming calls to the correct skill instance.

In cluster mode, the orchestrator routes calls to remote nodes transparently — the client doesn't need to know which node hosts which skill.

MCP server

The MCP server exposes skills as tools to AI clients. Three transport modes are supported:

  • Streamable HTTP — primary mode for web-based clients (/mcp endpoint)
  • Legacy SSE — backward-compatible Server-Sent Events (/sse endpoint)
  • stdio — for direct integration with Claude Desktop, Cursor, Qwen CLI, and similar tools

Each skill method is registered as an MCP tool with typed input schemas generated from skill annotations.

Skill lifecycle

When the host starts, each configured skill goes through the following stages:

  1. Load — the host reads the .skill package from the path specified in the manifest.
  2. Verify signature — the package signature and integrity are verified. If validation fails, the skill is rejected.
  3. Compile — the binary is compiled to native code. The compiled module is cached to disk for faster subsequent loads.
  4. Describe — the host reads skill metadata: name, version, methods, parameter schemas.
  5. Register — the skill and its methods are registered in the orchestrator. In cluster mode, the skill is also announced to other nodes.
  6. Expose — methods are exposed as MCP tools based on the manifest configuration (per-method or per-skill mapping).

Request flow

A typical request flows through the system:

  1. MCP client sends a tool call (e.g., fs-skill/read) via HTTP, SSE, or stdio
  2. MCP server resolves the tool name to a skill ID and method
  3. Orchestrator dispatches the call — locally or to a remote node via NATS
  4. Skill receives the request, executes logic, and makes ABI calls as needed
  5. Permission manager checks every ABI call against the manifest
  6. Host services (FS, HTTP, Shell, TCP) execute the permitted operation
  7. Result flows back through the chain to the client

In cluster mode, steps 4–6 happen on the node that owns the skill; the master's orchestrator forwards the call in step 3. The client sees a seamless response regardless of which node executed it.

Signed skill packages

Skills are distributed as .skill packages — signed bundles that contain the compiled binary and integrity metadata.

Signing (lmsk sign):

  1. Compute a cryptographic hash of the binary
  2. Sign the hash with the developer's private key
  3. Bundle the binary, signature, and metadata into the .skill file

Verification (at load time):

  1. Extract the binary and signature from the package
  2. Recompute the hash
  3. Verify the signature
  4. Reject the skill if verification fails

This ensures that skills have not been tampered with between build and deployment.

Clustering

Nodes in a cluster communicate via NATS:

  • Master node — accepts MCP connections, maintains the unified skill registry, routes calls to the appropriate node
  • Slave nodes — register their skills with the master, execute calls locally, return results

When a slave joins the cluster, its skills become available to all clients connected to the master. When a slave disconnects, its skills are removed from the registry.

Cross-node file transfer is built in — skills can copy files between nodes using the file_transfer ABI.

State management

Each skill has access to a persistent key-value store scoped to its instance. State is stored in an embedded database on the host and survives restarts.

  • Isolation — each skill can only access its own state. There is no shared state between skills.
  • Inter-skill communication — happens exclusively through the invocation mechanism, not through shared state.