Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents
by souvik1997 on 1/30/2026, 2:34:32 PM
WASM sandbox for running LLM-generated code safely.

Agents get a bash-like shell and can only call tools you provide, with constraints you define. No Docker, no subprocess, no SaaS: just `pip install amla-sandbox`.
https://github.com/amlalabs/amla-sandbox
Comments
by: vimota
Sharing our version of this, built on just-bash, AgentFS, and Pyodide: https://github.com/coplane/localsandbox

One nice thing about using AgentFS as the VFS is that it's backed by SQLite, so it's very portable, making it easy to fork and resume agent workflows across machines and time.

I really like Amla Sandbox's addition of injecting tool calls into the sandbox, which lets agent-generated code interact with the harness-provided tools. Very interesting!
1/30/2026, 4:26:28 PM
by: sd2k
Cool to see more projects in this space! I think Wasm is a great way to do secure sandboxing here. How does Amla handle commands like grep/jq/curl, which make AI agents so effective at bash but require recompilation to WASI (which is impractical for many projects)?

I've been working on a couple of things that take a very similar approach, with what seem to be some different tradeoffs:

- eryx [1], which uses a WASI build of CPython to provide a true Python sandbox (similar to componentize-py, but supports some form of 'dynamic linking' with either pure-Python packages or WASI-compiled native wheels)

- conch [2], which embeds the `brush` Rust reimplementation of Bash to provide a similar bash sandbox. This is where I've been struggling to figure out the best way to do subcommands; right now they have to be rewritten and compiled in, but I'd like to find a way to dynamically link them in, similar to the Python package approach.

One other note: WASI's VFS support has been great; I just wish there was more progress on `wasi-tls`, since it's tricky to get network access working otherwise.

[1] https://github.com/eryx-org/eryx
[2] https://github.com/sd2k/conch
1/30/2026, 4:07:43 PM
by: syrusakbary
This is great!

While I think their current choice of runtime will hit some limitations (i.e., not fully complete Python support, partial JS support), I strongly believe using Wasm for sandboxing is the way forward for the future of containers.

At Wasmer we are working hard to make this model work. I'm incredibly happy to see more people joining the quest!
1/30/2026, 3:10:08 PM
by: asyncadventure
Really appreciate the pragmatic approach here. The 11 MB vs. 173 MB difference with agentvm highlights an important tradeoff: sometimes you don't need full Linux compatibility if you can constrain the problem space well enough. The tool-calling validation layer seems like the sweet spot between safety and practical deployment.
1/30/2026, 4:01:12 PM
by: quantummagic
Sure, but every tool that you provide access to is a potential escape hatch from the sandbox. It's safer to run everything inside the sandbox, including the called tools.
1/30/2026, 3:05:39 PM
by: westurner
From the README:

> Security model

> *The sandbox runs inside WebAssembly with WASI for a minimal syscall interface. WASM provides memory isolation by design—linear memory is bounds-checked, and there's no way to escape to the host address space. The wasmtime runtime we use is built with defense-in-depth and has been formally verified for memory safety.*

> *On top of WASM isolation, every tool call goes through capability validation:* [...]

> *The design draws from capability-based security as implemented in systems like seL4—access is explicitly granted, not implicitly available. Agents don't get ambient authority just because they're running in your process.*
1/30/2026, 2:48:57 PM
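The capability model quoted above can be sketched in a few lines of plain Python. This is only an illustration of the general idea (explicit grants, per-call constraint checks, no ambient authority); the class and function names here are assumptions, not amla-sandbox's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Capability:
    tool: str
    # Per-call constraint: returns True if the arguments are allowed.
    constraint: Callable[[dict], bool] = lambda args: True

@dataclass
class Sandbox:
    tools: dict = field(default_factory=dict)   # name -> callable
    grants: dict = field(default_factory=dict)  # name -> Capability

    def register(self, name, fn, constraint=lambda args: True):
        # A tool is callable only if explicitly registered:
        # nothing is implicitly reachable from inside the sandbox.
        self.tools[name] = fn
        self.grants[name] = Capability(name, constraint)

    def call(self, name, args):
        cap = self.grants.get(name)
        if cap is None:
            raise PermissionError(f"tool {name!r} not granted")
        if not cap.constraint(args):
            raise PermissionError(f"call to {name!r} violates constraint")
        return self.tools[name](**args)

sb = Sandbox()
sb.register("read_file", lambda path: f"<contents of {path}>",
            constraint=lambda a: a.get("path", "").startswith("/workspace/"))

print(sb.call("read_file", {"path": "/workspace/notes.txt"}))  # allowed
try:
    sb.call("read_file", {"path": "/etc/passwd"})  # blocked by constraint
except PermissionError as e:
    print(e)
```

The key property is that both an ungranted tool and a granted tool called with out-of-policy arguments fail the same way: the check happens at the call boundary, not inside the tool.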
by: taosu_yb
[dead]
1/30/2026, 3:43:39 PM