Does coding with LLMs mean more microservices?
by jer0me on 4/6/2026, 2:33:21 AM
https://ben.page/microservices
Comments
by: int_19h
That's an argument for components with well-defined contracts on their interfaces, but making them microservices just complicates debugging for the model.

It's also unclear whether tight coupling is actually a problem when you can refactor this fast.
4/6/2026, 9:42:09 AM
by: nikeee
What matters for LLMs is what matters for humans, which usually means DX. Most microservice setups are extremely hard to debug across service boundaries, so I think in the future we'll see more architectural decisions that make sense for LLMs to work with, which will probably mean modular monoliths or something like that.
4/6/2026, 9:18:31 AM
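A minimal sketch of the modular-monolith boundary nikeee describes, with hypothetical module and function names: each module exposes a narrow function-call surface and keeps its internals private, so a cross-boundary call is debuggable as an ordinary in-process stack trace.

```python
# orders/api.py -- the only file other modules are allowed to import (hypothetical layout)
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderSummary:
    order_id: str
    total_cents: int

def get_order_summary(order_id: str) -> OrderSummary:
    """Public entry point: the module's whole contract lives in this signature."""
    return OrderSummary(order_id=order_id, total_cents=_compute_total(order_id))

def _compute_total(order_id: str) -> int:
    # Private detail; callers never import this, so it can be refactored freely.
    return 1999
```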
by: veselin
I think this is a promise, probably also for spec-driven development. You write the spec, and the whole thing can be reimplemented in Rust tomorrow. Make small modules or libraries.

One colleague describes monolith vs. microservices as "the grass is greener on the other side".

In the end, having microservices means that the release process becomes much harder. Every feature spans at least 3 services, with possible incompatibilities between some of their versions. That is precisely the work you cannot easily automate with LLMs.
4/6/2026, 10:38:31 AM
by: victorbjorklund
I think no. But I do think it makes sense to break your app down into libraries, etc.
4/6/2026, 10:56:32 AM
by: Theaetetus
I don't think LLMs push us to use microservices as much as Borgers says they do. They don't avoid the problems microservices have always faced, and encapsulation is mostly independent of whether a boundary is a service-to-service boundary:

https://www.natemeyvis.com/agentic-coding-and-microservices/
4/6/2026, 10:29:13 AM
by: tatrions
The bounded surface area insight is right, but the actual forcing function is context window size. Small codebase fits in context, LLM can reason end-to-end. You get the same containment with well-defined modules in a monolith if your tooling picks the right files to feed into the prompt.

Interesting corollary: as context windows keep growing (8k to 1M+ in two years), this architectural pressure should actually reverse. When a model can hold your whole monolith in working memory, you get all the blast radius containment without the operational overhead of separate services, billing accounts, and deployment pipelines.
4/6/2026, 8:16:36 AM
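A sketch of the tooling tatrions alludes to, under a deliberately naive assumption: score files by keyword overlap with the task (real tools would use embeddings or the import graph) and stop at a character budget standing in for the context window.

```python
# Sketch: pick the monolith files relevant to a task so they fit in an LLM's context.
from pathlib import Path

def select_context_files(task: str, repo_root: str, budget_chars: int = 32_000) -> list[Path]:
    """Return repo files ranked by naive keyword overlap, capped by a character budget."""
    task_words = set(task.lower().split())
    scored = []
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        score = sum(1 for word in task_words if word in text.lower())
        if score:  # skip files with no overlap at all
            scored.append((score, path, len(text)))
    scored.sort(key=lambda item: -item[0])  # most relevant first
    picked, used = [], 0
    for _, path, size in scored:
        if used + size <= budget_chars:
            picked.append(path)
            used += size
    return picked
```

The budget parameter is where the corollary bites: as it grows, the selection step degenerates into "send everything", and the pressure to pre-split the codebase fades.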
by: siruwastaken
This seems like the idea of modularizing code, and using specific function signatures as an API for data exchange, being re-invented by people using AI. Aren't we already mostly doing things this way, albeit via submodules in a monolith, due to the cognitive strain it puts on humans to understand the whole thing at any given time?
4/6/2026, 8:48:25 AM
by: _pdp_
This makes no sense. You can easily build a monolith and develop all parts of it in isolation, i.e. modules, plugins, packages.

In fact, my argument is that there will be more monolith applications due to AI coding assistants, not fewer.
4/6/2026, 8:48:27 AM
by: c1sc0
Why microservices when small composable CLI tools seem a better fit for LLMs?
4/6/2026, 8:37:32 AM
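A sketch of the composable-CLI style c1sc0 suggests, with a hypothetical tool: one job, text on stdin and stdout, so tools chain with pipes and an LLM can exercise each one in isolation.

```python
# dedupe.py (hypothetical): drop repeated lines, preserving first-seen order.
import sys

def dedupe_lines(lines: list[str]) -> list[str]:
    """Pure core, separated from I/O so it is trivially testable."""
    seen: set[str] = set()
    return [line for line in lines if not (line in seen or seen.add(line))]

def main() -> int:
    # Filter stdin to stdout, Unix-style; wire up with
    # `if __name__ == "__main__": raise SystemExit(main())` in the real script.
    sys.stdout.writelines(dedupe_lines(sys.stdin.readlines()))
    return 0
```

Usage would then be ordinary shell composition, e.g. `sort input.txt | python dedupe.py`.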
by: Kim_Bruning
A typical rant (composed from memory) goes something like this:

> "These AI types are all delusional. My job is secure. Sure, your model can one-shot a small greenfield program in 5 minutes with zero debugging. But make it a little larger and it starts to forget features, introduces more bugs than you can fix, and forget letting it loose on large legacy codebases."

What if that's not a diagnosis? What if we see it as an opportunity? O:-)

I'm not saying it needs to be microservices, but say you can constrain the blast radius of an AI going oops (compaction is a famous oops-surface, for instance), and say you can split the work up into self-contained blocks where you can test your I/O and side effects thoroughly...

... well, that's going to be interesting, isn't it?

Programming was always supposed to be about that: structured programming, functions (preferably side-effect-free for this argument), classes and objects, and other forms of modularization, including (ok, sure) microservices. I'm not sold on that last one, because it feels a bit too heavy for me. But... something like that?
4/6/2026, 9:50:40 AM