We replaced RAG with a virtual filesystem for our AI documentation assistant

by denssumesh on 4/2/2026, 6:24:29 PM

https://www.mintlify.com/blog/how-we-built-a-virtual-filesystem-for-our-assistant

Comments

by: softwaredoug

The real thing I think people are rediscovering with filesystem-based search is that there's a type of semantic search that's not embedding-based retrieval. One that looks more like how a librarian organizes files into shelves based on the domain.

We're rediscovering forms of search we've known about for decades. And it turns out they're more interpretable to agents.

https://softwaredoug.com/blog/2026/01/08/semantic-search-without-embeddings

4/3/2026, 5:41:11 PM
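The librarian metaphor above can be sketched concretely: file documents into a hand-curated topic hierarchy and let an agent navigate it with ls/grep-style lookups, no embeddings involved. The shelf paths and titles below are purely illustrative.

```python
# Semantic search without embeddings: a curated topic hierarchy (the
# librarian's shelves) navigated with directory-style primitives.
shelves = {
    "billing/invoices": ["How to download an invoice", "VAT handling"],
    "billing/refunds": ["Refund policy", "Disputing a charge"],
    "api/auth": ["Creating API keys", "OAuth scopes"],
}

def ls(prefix: str) -> list[str]:
    """List the sub-shelves directly under a path prefix, like `ls`."""
    parts = set()
    for path in shelves:
        if path.startswith(prefix):
            rest = path[len(prefix):].lstrip("/")
            parts.add(rest.split("/")[0])
    return sorted(p for p in parts if p)

def grep(term: str) -> list[tuple[str, str]]:
    """Case-insensitive substring match over titles, returning shelf paths."""
    return [(path, title) for path, titles in shelves.items()
            for title in titles if term.lower() in title.lower()]
```

Because results come back as shelf paths, the agent can read *why* a document matched (it lives under `billing/refunds`), which is the interpretability the comment is pointing at.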


by: Galanwe

I am not familiar with the tech stack they use, but from an outsider's point of view, I was sort of expecting some kind of FUSE solution. Could someone explain why they went with a fake shell? There has to be a reason.

4/3/2026, 6:21:39 PM


by: pboulos

I think this is a great approach for a startup like Mintlify. I do have skepticism around how practical this would be in some of the “messier” organisations where RAG stands to add the most value. From personal experience, getting RAG to work well in places where the structure of the organisation and the information contained therein is far from hierarchical or partition-able is a very hard task.

4/3/2026, 6:04:58 PM


by: seanlinehan

This is definitely the way. There are good use cases for real sandboxes (if your agent is executing arbitrary code, you'd better have it do so in an air-gapped environment).

But the idea of spinning up a whole VM to use unix IO primitives is way overkill. It makes far more sense to let the agent spit out unix-like tool calls and then use *whatever your prod stack uses* to do IO.

4/3/2026, 5:42:20 PM
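The "unix-like tool calls, prod-stack IO" idea above can be sketched as a thin dispatcher: the agent emits a structured call like `{"tool": "cat", "args": [...]}` and a handler serves it from whatever store you already run. The dict-backed store and names here are illustrative stand-ins for a real database or index.

```python
# Fake unix tools over a real backend: route agent tool calls to handlers
# that read from the production data store (a dict here for illustration).
DOCS = {"guides/setup.md": "Run `npm install` first.",
        "guides/deploy.md": "Push to main to deploy."}

def tool_cat(path):
    """Return a file's contents, or a unix-style error message."""
    return DOCS.get(path, f"cat: {path}: No such file")

def tool_ls(prefix=""):
    """List known paths under a prefix, sorted like `ls`."""
    return sorted(p for p in DOCS if p.startswith(prefix))

HANDLERS = {"cat": tool_cat, "ls": tool_ls}

def dispatch(call: dict):
    """Route one agent-emitted tool call to its backend handler."""
    return HANDLERS[call["tool"]](*call.get("args", []))
```

No VM, no sandbox: the agent sees familiar unix semantics while every byte of IO goes through code you already operate and observe.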


by: kenforthewin

I don't get it - everybody in this thread is talking about the death of vector DBs and files being all you need. The article clearly states that this is a layer on top of their existing Chroma DB.

4/3/2026, 6:34:06 PM


by: dmix

This puts a lot of LLM in front of the information-discovery step, which would require far more sophisticated prompting and guardrails. I'd be curious to see how people architect an LLM->document approach with tool calling, rather than RAG->reranker->LLM. I'm also curious what the response times are like, since they're more variable.

4/3/2026, 6:19:52 PM


by: tylergetsay

I don't understand the additional complexity of mocking bash when they could just provide grep, ls, find, etc. tools to the LLM.

4/3/2026, 6:33:41 PM
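The alternative the comment describes would look something like declaring each command as a discrete function-calling tool rather than emulating a shell. The schemas below follow the common JSON function-tool shape; exact field names vary by provider, so treat this as an illustrative sketch, not any particular API.

```python
# grep/ls/find exposed as separate function-calling tools: the model picks
# a tool and fills its parameters, with no shell parsing or piping needed.
TOOLS = [
    {"name": "grep",
     "description": "Search documentation for lines matching a pattern.",
     "parameters": {"type": "object",
                    "properties": {"pattern": {"type": "string"},
                                   "path": {"type": "string"}},
                    "required": ["pattern"]}},
    {"name": "ls",
     "description": "List documentation files under a directory.",
     "parameters": {"type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"]}},
    {"name": "find",
     "description": "Find documentation files by name.",
     "parameters": {"type": "object",
                    "properties": {"name": {"type": "string"}},
                    "required": ["name"]}},
]
```

The trade-off the article implies is composability: discrete tools can't express `grep foo | head -5` in one call, which is what a bash emulation buys you.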


by: bluegatty

RAG should never have been represented as a context tool, but rather just as vector querying, a variation of search/query - and that's it.

We were bit by our own nomenclature.

Just a small variation in the chosen acronym might have wrought a different outcome.

Different ways to find context are welcome; we have a long way to go!

4/3/2026, 6:35:26 PM


by: mandeepj

> even a minimal setup (1 vCPU, 2 GiB RAM, 5-minute session lifetime) would put us north of $70,000 a year based on Daytona's per-second sandbox pricing ($0.0504/h per vCPU, $0.0162/h per GiB RAM)

$70k?

*How about if we round off one zero? Give us $7,000.*

That number still seems very high.

4/3/2026, 6:08:44 PM
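For scale, the arithmetic behind the quoted figure, using only the prices in the quote (the implied session volume is the unknown being solved for):

```python
# Back-of-envelope check on the quoted Daytona numbers: what session
# volume makes a 1 vCPU / 2 GiB / 5-minute sandbox cost ~$70k a year?
vcpu_per_h, gib_per_h = 0.0504, 0.0162      # quoted hourly prices
hourly = 1 * vcpu_per_h + 2 * gib_per_h     # $0.0828 per sandbox-hour
per_session = hourly * (5 / 60)             # 5-minute session: $0.0069
sessions_per_year = 70_000 / per_session    # roughly 10 million sessions
```

So $70k/year corresponds to on the order of ten million five-minute sessions, i.e. tens of thousands of conversations a day; whether that is "very high" depends entirely on the traffic assumption baked into the article's estimate.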


by: HanClinto

> "The agent doesn't need a real filesystem; it just needs the illusion of one. Our documentation was already indexed, chunked, and stored in a Chroma database to power our search, so we built ChromaFs: a virtual filesystem that intercepts UNIX commands and translates them into queries against that same database. Session creation dropped from ~46 seconds to ~100 milliseconds, and since ChromaFs reuses infrastructure we already pay for, the marginal per-conversation compute cost is zero."

Not to be "that guy" [0], but (especially for users who aren't already in ChromaDB) -- how would this be different for us from using a RAM disk?

> "ChromaFs is built on just-bash ... a TypeScript reimplementation of bash that supports grep, cat, ls, find, and cd. just-bash exposes a pluggable IFileSystem interface, so it handles all the parsing, piping, and flag logic while ChromaFs translates every underlying filesystem call into a Chroma query."

It sounds like the expected use case is that agents would interact with the data via standard CLI tools (grep, cat, ls, find, etc.), and there is nothing Chroma-specific in the final implementation (do I have that right?).

The author benchmarks the Chroma implementation against a physical HDD, but I wonder how it would compare against a ramdisk with the same information / queries.

I'm very willing to believe that Chroma would still be faster / better for X/Y/Z reasons, but I would be interested in seeing it compared, since for many people who already have their data in a hierarchical tree view, I bet there could be some massive speedups from mounting the directories in RAM instead of on an HDD.

[0] - https://news.ycombinator.com/item?id=9224

4/3/2026, 6:40:18 PM
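The ramdisk comparison the comment asks for is easy to run yourself: write a corpus of small files and time grep-style scans over it. Point `root` at a tmpfs mount (on many Linux systems `/dev/shm` is one) versus a disk-backed path to compare; the sketch below defaults to an ordinary temp dir and uses purely illustrative content.

```python
# Rough timing harness for comparing file-scan speed across backing
# storage (tmpfs vs. disk). Swap the TemporaryDirectory for a path on
# the filesystem you want to measure.
import os, tempfile, time

def build_corpus(root, n=1000):
    """Write n small markdown-ish files, as a stand-in for doc chunks."""
    for i in range(n):
        with open(os.path.join(root, f"doc{i}.md"), "w") as f:
            f.write(f"chunk {i}: install guide section {i % 7}\n")

def scan(root, needle):
    """Naive grep: count files whose contents contain the needle."""
    hits = 0
    for name in os.listdir(root):
        with open(os.path.join(root, name)) as f:
            if needle in f.read():
                hits += 1
    return hits

with tempfile.TemporaryDirectory() as root:
    build_corpus(root)
    t0 = time.perf_counter()
    hits = scan(root, "section 3")
    elapsed = time.perf_counter() - t0
```

Running the same harness against `/dev/shm` and an HDD path would give exactly the apples-to-apples number the comment wants, independent of anything Chroma-specific.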


by: maille

Let's say I want a free, local or free-tier-LLM, simple solution to search information mostly from my emails and a little bit from text, doc, and pdf files. Are there any tools I should try so that Ollama or Gemini can answer from my own knowledge base?

4/3/2026, 5:57:51 PM


by: tschellenbach

I think generally we are going from vector-based search to agentic tool use and hierarchy-based systems like skills.

4/3/2026, 6:24:48 PM


by: dust42

If grep and ls do the trick, then sure, you don't need RAG/embeddings. But you also don't need an LLM: a full-text search in a database will be far faster and use fewer resources.

4/3/2026, 6:33:08 PM


by: jrm4

Is this related to that thing where somehow the entire damn world forgot about the power of boolean (and other precise) searching?

4/3/2026, 6:40:36 PM


by: ctxc

haha, sweet. One of the cooler things I&#x27;ve read lately

4/3/2026, 6:25:14 PM

