Software is mostly all you need
by jbmilgrom on 1/29/2026, 11:06:34 PM
https://softwarefordays.com/post/software-is-mostly-all-you-need/
Comments
by: isodev
> Neural networks excel at judgment

I don't think they do. I think they excel at outputting echoes of their training data that best fit (rhyme with, contextually) the prompt they were given. If you try using Claude with an obscure language or use case, you will notice that effect even more - it will keep pulling towards things it knows that aren't at all what's asked or "the best judgement" for what's needed.
1/30/2026, 12:32:13 AM
by: verdverm
Lost me at the claim that AI is good at judgement making. This is the exact opposite of my experience: they reliably make both good and bad decisions.
1/29/2026, 11:56:10 PM
by: jbmilgrom
Author here.

We are building this learned software system at Docflow Labs to solve the integration problem in healthcare at scale, i.e., systems that can only talk to other systems via web portals. RPA has historically been awful to build and maintain, so we've needed to build this to stay above water. Happy to answer any questions!
1/30/2026, 1:53:46 AM
by: wrs
In other words, a higher-level JIT compiler: it still dynamically generates code based on runtime observations, but the code is in a higher-level language than assembly, and the observations are of a higher-level context than just runtime data types.
1/29/2026, 11:54:56 PM
by: 2001zhaozhao
> Code is the policy, deployment is the episode, and the bug report is the reward signal

This is a great quote. I think it makes *a ton* of sense to view a sufficiently-cheap-and-automated agentic SWE system as a machine learning system rather than traditional coding.

* Perhaps the key to transparent/interpretable ML is to just replace the ML model with AI-coded traditional software and decision trees. This way it's still fully autonomously trained, but you can easily look at the code to see what is going on.

* I also wonder whether you can use fully-automated agentic SWE/data science in adversarial use cases where you traditionally have to use ML, such as online moderation. You could set a clear goal to cut down on any undesired content while minimizing false positives, and the agent would be able to create a self-updating implementation that dynamically responds to adversarial changes. I'm most familiar with video game anti-cheat, where I think something like this is very likely possible.

* Perhaps you can use a fully-automated SWE loop, constrained in some way, to develop game enemies and AI opponents, which currently require gruesome amounts of manual work to implement. Those are typically too complex to tackle using traditional ML, and you can't naively use RL because the enemies are supposed to be immersive rather than being the best at playing the game by gaming the mechanics. Maybe with a player controller SDK and enough instructions (and live player feedback?), you can get an agent to make a programmatic game AI for you and automatically refine it to be better.
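The policy/episode/reward framing quoted above can be sketched as a toy loop. This is purely illustrative and not from the article: the "policy" is transparent, editable code (here, a one-parameter spam filter), each "deployment" runs it on traffic, "bug reports" are the misclassifications it produces, and a hypothetical agent step rewrites the code in response. All names and the naive update rule are assumptions for the sketch.

```python
# Toy sketch: code-as-policy, deployment-as-episode, bug-report-as-reward.
# The "model" stays inspectable plain code; learning happens by revising it.
from dataclasses import dataclass

@dataclass
class Policy:
    """A transparent 'model': plain code with one inspectable parameter."""
    threshold: int  # messages with more than this many links are flagged

    def is_spam(self, link_count: int) -> bool:
        return link_count > self.threshold

def deploy_episode(policy, traffic):
    """One deployment: return the 'bug reports' (cases the policy got wrong)."""
    return [(links, label) for links, label in traffic
            if policy.is_spam(links) != label]

def revise(policy, bug_reports):
    """The 'agent' step: rewrite the policy in response to the reward signal.
    Here it just nudges the threshold toward fewer misclassifications."""
    missed_spam = sum(1 for _, label in bug_reports if label)
    false_flags = sum(1 for _, label in bug_reports if not label)
    if missed_spam > false_flags:
        return Policy(policy.threshold - 1)  # too lax: tighten
    if false_flags > missed_spam:
        return Policy(policy.threshold + 1)  # too strict: loosen
    return policy

# traffic: (number of links in a message, is it actually spam?)
traffic = [(0, False), (1, False), (2, True), (3, True), (5, True)]

policy = Policy(threshold=4)
for _ in range(5):  # deploy -> collect bug reports -> revise the code
    bugs = deploy_episode(policy, traffic)
    if not bugs:
        break
    policy = revise(policy, bugs)
```

Unlike an opaque learned model, the "trained" artifact here is a one-line rule anyone can read, which is the interpretability point the first bullet is making.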
1/30/2026, 12:09:40 AM