Architecture
Luminarys is built around a host runtime that loads, validates, and executes skills in isolated sandboxes. This page describes the major components and how they interact.
Host runtime components
The host binary (luminarys) contains the following subsystems:
Sandbox engine
The embedded engine compiles and executes skill binaries in isolated sandboxes. Skills written in Go, Rust, AssemblyScript — or any other language that compiles to WebAssembly — run in a portable, sandboxed environment. Compiled modules are cached on disk to avoid redundant compilation on subsequent starts.
ABI layer
The Application Binary Interface is the only bridge between a skill and the outside world. Skills cannot access host resources directly — every operation (file I/O, HTTP requests, shell commands, TCP connections) goes through ABI functions exported by the host. Each ABI call passes through the permission manager before execution.
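The funnel shape can be illustrated with a single host-exported function: the skill never touches the resource, only the function's result or an error. `PermissionManager` and `abiReadFile` are hypothetical names for this sketch:

```go
package main

// Sketch of the ABI funnel: the skill can only call host functions, and
// each host function consults the permission manager before touching
// the resource. All type and function names are illustrative.

import (
	"errors"
	"fmt"
	"strings"
)

type PermissionManager struct {
	allowedDirs []string // from the skill's manifest
}

func (p *PermissionManager) CheckFS(path string) error {
	for _, dir := range p.allowedDirs {
		if strings.HasPrefix(path, dir+"/") {
			return nil
		}
	}
	return errors.New("permission denied: " + path)
}

// abiReadFile stands in for a host function exported to the sandbox.
// The skill never sees the filesystem; it only sees this function's
// result or an error.
func abiReadFile(pm *PermissionManager, path string, read func(string) []byte) ([]byte, error) {
	if err := pm.CheckFS(path); err != nil {
		return nil, err // denied: the skill gets an error, not the resource
	}
	return read(path), nil
}

func main() {
	pm := &PermissionManager{allowedDirs: []string{"/data"}}
	read := func(p string) []byte { return []byte("contents of " + p) }

	if _, err := abiReadFile(pm, "/etc/passwd", read); err != nil {
		fmt.Println(err) // denied: outside the manifest's directories
	}
	out, _ := abiReadFile(pm, "/data/report.txt", read)
	fmt.Println(string(out))
}
```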
Permission manager
Every ABI call is intercepted by the permission manager, which evaluates it against the skill's declared permissions from its manifest:
- File system — allowed directories, read/write modes, glob patterns
- HTTP — URL allowlists with wildcard matching
- TCP — host:port allowlists, DNS-aware filtering
- Shell — command allowlists, working directory restrictions
- Inter-skill invocation — which skills may call which other skills
- File transfer — allowed nodes and directories for cross-node transfers
If a permission check fails, the ABI call returns an error to the skill. The skill never gains direct access to the denied resource.
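As one concrete case, HTTP allowlist matching with a trailing wildcard could look like the following. The real pattern grammar in Luminarys may differ; this only illustrates the shape of the check performed on each ABI call:

```go
package main

// Sketch of HTTP allowlist matching with trailing-wildcard patterns,
// e.g. "https://api.example.com/*". Exact entries must match exactly.

import (
	"fmt"
	"strings"
)

func urlAllowed(url string, allowlist []string) bool {
	for _, pat := range allowlist {
		if strings.HasSuffix(pat, "*") {
			// Wildcard entry: any URL under the given prefix is allowed.
			if strings.HasPrefix(url, strings.TrimSuffix(pat, "*")) {
				return true
			}
		} else if url == pat {
			return true
		}
	}
	return false
}

func main() {
	allow := []string{"https://api.example.com/*", "https://status.example.com/health"}
	fmt.Println(urlAllowed("https://api.example.com/v1/users", allow)) // true
	fmt.Println(urlAllowed("https://evil.example.net/", allow))        // false
}
```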
Orchestrator
The orchestrator manages the lifecycle of all loaded skills and routes invocation requests. It maintains a registry of skills with their metadata (methods, parameters, descriptions) and dispatches incoming calls to the correct skill instance.
In cluster mode, the orchestrator routes calls to remote nodes transparently — the client doesn't need to know which node hosts which skill.
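The registry-and-dispatch behavior can be sketched as a map from skill ID to an entry that records either local method handlers or the owning remote node. Type names and the forwarding stub are assumptions for illustration:

```go
package main

// Sketch of an orchestrator registry: skills register their methods,
// and dispatch looks up the owning node so the caller never needs to
// know where a skill runs. Names are illustrative.

import (
	"errors"
	"fmt"
)

type SkillEntry struct {
	Node    string // "" means the skill runs on the local node
	Methods map[string]func(args string) string
}

type Orchestrator struct {
	registry map[string]SkillEntry // skill ID -> entry
}

func (o *Orchestrator) Dispatch(skillID, method, args string) (string, error) {
	entry, ok := o.registry[skillID]
	if !ok {
		return "", errors.New("unknown skill: " + skillID)
	}
	if entry.Node != "" {
		// In cluster mode this would be forwarded over NATS; stubbed here.
		return "forwarded to " + entry.Node, nil
	}
	fn, ok := entry.Methods[method]
	if !ok {
		return "", errors.New("unknown method: " + method)
	}
	return fn(args), nil
}

func main() {
	o := &Orchestrator{registry: map[string]SkillEntry{
		"fs-skill": {Methods: map[string]func(string) string{
			"read": func(args string) string { return "bytes of " + args },
		}},
		"gpu-skill": {Node: "worker-2"},
	}}
	out, _ := o.Dispatch("fs-skill", "read", "/data/a.txt")
	fmt.Println(out)
	out, _ = o.Dispatch("gpu-skill", "infer", "{}")
	fmt.Println(out)
}
```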
MCP server
The MCP server exposes skills as tools to AI clients. Three transport modes are supported:
- Streamable HTTP — primary mode for web-based clients (/mcp endpoint)
- Legacy SSE — backward-compatible Server-Sent Events (/sse endpoint)
- stdio — for direct integration with Claude Desktop, Cursor, Qwen CLI, and similar tools
Each skill method is registered as an MCP tool with typed input schemas generated from skill annotations.
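Assuming a "skill/method" naming scheme (the document's own example is fs-skill/read; the exact scheme and schema generation are not specified), the mapping between skill methods and MCP tool names can be sketched as:

```go
package main

// Sketch of mapping skill methods to MCP tool names ("skill/method")
// and resolving incoming tool calls back. The naming scheme and schema
// shape are assumptions.

import (
	"fmt"
	"strings"
)

type Tool struct {
	SkillID, Method string
	InputSchema     string // JSON Schema derived from skill annotations
}

func toolName(t Tool) string { return t.SkillID + "/" + t.Method }

// resolveTool splits an incoming MCP tool name into skill ID and method.
func resolveTool(name string) (skillID, method string, ok bool) {
	skillID, method, ok = strings.Cut(name, "/")
	return skillID, method, ok && skillID != "" && method != ""
}

func main() {
	t := Tool{SkillID: "fs-skill", Method: "read",
		InputSchema: `{"type":"object","properties":{"path":{"type":"string"}}}`}
	name := toolName(t)
	skill, method, _ := resolveTool(name)
	fmt.Println(name, skill, method) // fs-skill/read fs-skill read
}
```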
Skill lifecycle
When the host starts, each configured skill goes through the following stages:
1. Load — the host reads the .skill package from the path specified in the manifest.
2. Verify signature — the package signature and integrity are verified. If verification fails, the skill is rejected.
3. Compile — the binary is compiled to native code. The compiled module is cached to disk for faster subsequent loads.
4. Describe — the host reads skill metadata: name, version, methods, parameter schemas.
5. Register — the skill and its methods are registered in the orchestrator. In cluster mode, the skill is also announced to other nodes.
6. Expose — methods are exposed as MCP tools based on the manifest configuration (per-method or per-skill mapping).
Request flow
A typical request flows through the system:
1. MCP client sends a tool call (e.g., fs-skill/read) via HTTP, SSE, or stdio
2. MCP server resolves the tool name to a skill ID and method
3. Orchestrator dispatches the call — locally or to a remote node via NATS
4. Skill receives the request, executes logic, and makes ABI calls as needed
5. Permission manager checks every ABI call against the manifest
6. Host services (FS, HTTP, Shell, TCP) execute the permitted operation
7. Result flows back through the chain to the client
In cluster mode, steps 3–6 happen on the node that owns the skill. The client sees a seamless response regardless of which node executed it.
Signed skill packages
Skills are distributed as .skill packages — signed bundles that contain the compiled binary and integrity metadata.
Signing (lmsk sign):
1. Compute a cryptographic hash of the binary
2. Sign the hash with the developer's private key
3. Bundle the binary, signature, and metadata into the .skill file
Verification (at load time):
1. Extract the binary and signature from the package
2. Recompute the hash
3. Verify the signature
4. Reject the skill if verification fails
This ensures that skills have not been tampered with between build and deployment.
Clustering
Nodes in a cluster communicate via NATS:
- Master node — accepts MCP connections, maintains the unified skill registry, routes calls to the appropriate node
- Slave nodes — register their skills with the master, execute calls locally, return results
When a slave joins the cluster, its skills become available to all clients connected to the master. When a slave disconnects, its skills are removed from the registry.
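The master's registry bookkeeping on join and leave can be sketched as follows. In the real system the announcements travel over NATS; here they are plain method calls, and all names are illustrative:

```go
package main

// Sketch of the master's unified registry reacting to slave join/leave
// events: joining adds the node's skills, leaving removes them.

import (
	"fmt"
	"sort"
)

type Master struct {
	skills map[string]string // skill ID -> owning node
}

func (m *Master) NodeJoined(node string, skills []string) {
	for _, s := range skills {
		m.skills[s] = node
	}
}

func (m *Master) NodeLeft(node string) {
	for s, n := range m.skills {
		if n == node {
			delete(m.skills, s) // the skill is no longer reachable
		}
	}
}

func (m *Master) Available() []string {
	out := make([]string, 0, len(m.skills))
	for s := range m.skills {
		out = append(out, s)
	}
	sort.Strings(out)
	return out
}

func main() {
	m := &Master{skills: map[string]string{}}
	m.NodeJoined("worker-1", []string{"fs-skill", "http-skill"})
	m.NodeJoined("worker-2", []string{"gpu-skill"})
	fmt.Println(m.Available()) // [fs-skill gpu-skill http-skill]
	m.NodeLeft("worker-1")
	fmt.Println(m.Available()) // [gpu-skill]
}
```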
Cross-node file transfer is built in — skills can copy files between nodes using the file_transfer ABI.
State management
Each skill has access to a persistent key-value store scoped to its instance. State is stored in an embedded database on the host and survives restarts.
- Isolation — each skill can only access its own state. There is no shared state between skills.
- Inter-skill communication — happens exclusively through the invocation mechanism, not through shared state.
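The isolation guarantee can be sketched by namespacing every key with the owning skill's ID before it reaches the shared backend, so one skill's keys are invisible to another. The storage backend and type names are stand-ins for the host's embedded database:

```go
package main

// Sketch of per-skill state isolation: one embedded store, every key
// namespaced by skill ID so no skill can read another's entries.

import "fmt"

type StateStore struct {
	data map[string]string // "skillID\x00key" -> value
}

// ScopedStore is the view a single skill instance gets over the store.
type ScopedStore struct {
	skillID string
	backend *StateStore
}

func (s ScopedStore) Set(key, value string) {
	s.backend.data[s.skillID+"\x00"+key] = value
}

func (s ScopedStore) Get(key string) (string, bool) {
	v, ok := s.backend.data[s.skillID+"\x00"+key]
	return v, ok
}

func main() {
	backend := &StateStore{data: map[string]string{}}
	a := ScopedStore{skillID: "fs-skill", backend: backend}
	b := ScopedStore{skillID: "http-skill", backend: backend}

	a.Set("cursor", "42")
	_, visible := b.Get("cursor") // other skills cannot see this key
	fmt.Println(visible)          // false
	v, _ := a.Get("cursor")
	fmt.Println(v) // 42
}
```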