Hacker News Viewer

Horses: AI progress is steady. Human equivalence is sudden

by pbui on 12/9/2025, 12:26:35 AM

https://andyljones.com/posts/horses.html

Comments

by: twodave

Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. And setting aside the fact that more compute and bigger context windows aren't the right kind of progress, LLMs aren't coming for your job any more than computer vision is, for a lot of reasons, but I'll list two more:

1. Even if LLMs made everyone 10x as productive, most companies would still have more work to do than resources to assign to it. The only reason to reduce headcount is to remove people who already weren't providing much value.

2. Writing code continues to be a very late step of the overall software development process. Even if all my code were written for me, instantly, just the way I would want it written, I would still have a full-time job.

12/9/2025, 3:30:44 AM


by: d4rkn0d3z

It might be better to think about what a horse is to a human: mostly, a horse is an energy slave. The history of humanity is a story about how many energy slaves are available to the average human.

In times past, the only people on earth whose standard of living was raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen. As time passed, we learned to make technologies that provide enough energy slaves to the common man that everyone lives a life a king would have envied in times past.

So the question arises: does AI, or the pursuit of AGI, provide more or fewer energy slaves to the common man?

12/9/2025, 8:17:31 AM


by: ible

People are not simple machines or animals. Unless AI becomes strictly better than humans, and than humans + AI, from the perspective of other humans, at all activities, there will still be lots of things humans can do to provide value for each other.

The question is how our individuals, and more importantly our various social and economic systems, handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.

If the benefits of AI accrue to/are captured by a very small number of people, and the costs are widely dispersed, things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.

12/9/2025, 1:29:15 AM


by: billisonline

An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago, when the rate of change in LLMs slowed down and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel words like "transformative AI" instead.

I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment, AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.

12/9/2025, 2:07:11 AM


by: richardles

I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never-tired mentor available. It gives them more confidence in their first drafted code changes / design docs. But I don't think the horse analogy works.

It's really changing cultural expectations. Don't ping a human when an LLM can answer the question, probably better and faster. Do ping a human for meaningful questions related to product direction / historical context.

What LLMs are killing is:

- noisy Slacks full of junior folks' questions. Those are now your Gemini / ChatGPT sessions.

- tedious implementation sessions.

The vast majority of the work is still human-led, from what I can tell.

12/9/2025, 2:18:35 AM


by: socketcluster

I think my software engineering job will be safe so long as big companies keep using average code as their training set. This is because the average developer creates unnecessary complexity, which creates more work for me.

The way the average dev structures their code requires something like 10x the number of lines I use, and at least 10x the amount of time to maintain... The interest on technical debt compounds like interest on normal debt.

Whenever I join a new project, within 6 months I control/maintain all the core modules of the system, and everything ends up hooked up to my config files, running according to the architecture I designed. This has happened at multiple companies. The code looks for the shortest path to production and creates a moat around engineers who can make their team members' jobs easier.

IMO, it's not so different from how entrepreneurship works, but with code and processes instead of money and people as your moat. I think once AI can replace top software engineers, it will be able to replace top entrepreneurs. Scary combination. We'll probably have different things to worry about then.

12/9/2025, 6:45:32 AM


by: namesbc

Software engineers used to know that measuring lines of code written was a poor metric for productivity...

https://www.folklore.org/Negative_2000_Lines_Of_Code.html

12/9/2025, 2:10:31 AM


by: jsheard

Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?

12/9/2025, 1:43:33 AM


by: 1970-01-01

How about we stop trying analogies on for size and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.

12/9/2025, 1:46:35 AM


by: tills13

Person whose job it is to sell AI selling AI is what I got from this post.

12/9/2025, 6:38:02 AM


by: burnto

The 1220s horse bubble was a wild time. People walked everywhere all slow, and then BAM, guys on horses shooting arrows at you.

AI is like that, but with dudes in slim-fitting vests blogging about alignment.

12/9/2025, 7:47:02 AM


by: s17n

This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.

12/9/2025, 1:26:41 AM


by: palmotea

Aren't you guys looking forward to the day when we get the opportunity to go the way of all those horses? You should! I'm optimistic; I think I'd make a fine pot of glue.

AI, faster please!

12/9/2025, 7:46:27 AM


by: barbazoo

Engine efficiency, chess rating, AI capex. One of these examples is not like the others. Is there steady progress in AI? To me it feels like little progress punctuated by the occasional breakthrough, but I might be totally off here.

12/9/2025, 1:25:00 AM


by: themafia

> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.

> Then in December, Claude finally got good enough to answer some of those questions for us.

This is what getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. They're unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.

On top of that, horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view of the historical scenario.

12/9/2025, 5:39:36 AM


by: COAGULOPATH

> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?

The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")... but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.

12/9/2025, 2:44:35 AM


by: personjerry

I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative's.

And that really is the entire question at this point: which domains will AI win in by a sufficient margin to be worth it?

12/9/2025, 1:25:52 AM


by: YmiYugy

To stay within the engine analogy: we have engines that are more powerful than horses, but

1. we aren't good at building cars yet,

2. they break down so often that using horses often still ends up faster,

3. we have dirt tracks and feed stations for horses, but few paved roads and not enough gasoline production.

12/9/2025, 7:56:47 AM


by: bad_username

AI currently lacks the following to really gain a "G" and reliably be able to replace humans at scale:

- Radical, massive multimodality. We perceive the world through many wide-band, high-def channels of information; computer perception is nowhere near that. The same goes for the ability to "mutate" the physical world, not just "read" it.

- Being able to be fine-tuned constantly (learn things, remember things) without "collapsing". Generally, having a smooth transition between the context window and the weights, rather than a fundamental, irreconcilable difference.

These are very difficult problems. But I agree with the author that the engine is in the works and the horses should stay vigilant.

12/9/2025, 7:45:20 AM


by: burroisolator

"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

And not very long after, 93 per cent of those horses had disappeared.

I very much hope we'll get the two decades that horses did."

I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term, but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something we can recover from, even over the longer term.

12/9/2025, 1:31:42 AM


by: Mawr

What is this *horseshit*.

What exactly does engine efficiency specifically have to do with horse usage? Cars like the Ford Model T entered mass production around 1908. Oh, and would you look at the horse usage graph around that date! *sigh*

The chess rating graph seems to be just a linear relationship?

> This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
>
> Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.

So more == better. *sigh*. Ran any, you know, *studies* to see the *quality* of those answers? I too can consult /dev/random for answers at a rate of gigabytes per second!

> I was one of the first researchers hired at Anthropic.

Yeah, I can tell. Somebody's high on their own supply here.

12/9/2025, 6:50:59 AM


by: mark242

Someone who makes horseshoes then learns how to make carburetors, because the demand is 10x.

https://en.wikipedia.org/wiki/Jevons_paradox

12/9/2025, 3:41:30 AM


by: jameslk

> *Back then, me and other old-timers were answering about 4,000 new-hire questions a month.*

> *Then in December, Claude finally got good enough to answer some of those questions for us.*

> *… Six months later, 80% of the questions I'd been being asked had disappeared.*

Interesting implications for how to train juniors in a remote company, or in general:

> *We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.*

https://pallais.scholars.harvard.edu/sites/g/files/omnuum5926/files/2025-11/Power%20of%20Proximity%20to%20Coworkers%20November%202025.pdf

12/9/2025, 1:46:27 AM


by: sothatsit

This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:

1. The release of Claude Code in February

2. The release of Opus 4.5 two weeks ago

In both cases, it felt like no big new unlocks were made. These releases aren't like OpenAI's o1, which introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.

Instead, these releases just brought a new user interface and improved reliability. And yet these two releases mark the biggest increases in my AI usage. They caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.

12/9/2025, 1:39:42 AM


by: anshulbhide

Yet this applies to only three industries so far: coding, marketing, and customer support.

I don't think it applies to general human intelligence yet.

12/9/2025, 6:46:55 AM


by: florilegiumson

If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical, as it was with nuclear weapons. Otherwise, what does it really mean for AI to "replace people," outside of people needing to retool, or socially awkward people having to learn to talk to people better? AI will surely change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?

12/9/2025, 4:43:19 AM


by: ternus

Regarding horses vs. engines: what changed the game was not engine efficiency but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond the quality of the core technology, what matters is (a) the existence of supporting infrastructure and (b) a watershed level of "good/cheap enough" at which it displaces the previous best option.

12/9/2025, 2:55:21 AM


by: pbw

This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each there will be variation as well. But that doesn't mean it won't still subjectively happen "fast".

12/9/2025, 2:13:03 AM


by: websiteapi

funny how we have all of this progress yet the things that actually matter in the real world (sorry, chess fans) are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.

plenty of charts you can look at: net productivity by virtually any metric vs. real adjusted income. the example I like is kiosks and self-checkout. who has encountered one at a place where it's cheaper than its main rival, with the savings directly attributable (by the company or otherwise) to lower prices? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.

even with year-2020 tech you could automate most work that needs to be done, if our industry would stop endlessly disrupting itself and have a little bit of discipline.

so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.

12/9/2025, 1:44:35 AM


by: torginus

Oh no, it's the lowercase people again.

12/9/2025, 8:51:03 AM


by: cuttothechase

>> This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.

Glad I noticed that footnote.

Article reeks of false equivalences and incorrect transitive dependencies.

12/9/2025, 2:31:53 AM


by: byronic

my favorite part was where the graphs are all unrelated to each other

12/9/2025, 2:17:10 AM


by: kgk9000

I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.

12/9/2025, 2:32:12 AM


by: gaigalas

People back then were primarily improving engines, not writing articles about engines being better than horses. That's why it's different now.

12/9/2025, 7:30:09 AM


by: kazinator

Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations, and then suddenly it flips: the wrong prediction reverses.
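
[Ed.: the sudden flip described above is easy to see numerically. A minimal sketch, with a purely hypothetical "training stimulus" that grows linearly; only the shape of the logistic curve is doing the work here.]

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# The stimulus grows steadily, step by step, but the sigmoid's output
# barely moves for most of the run and then flips from near-0 to
# near-1 within a few steps of crossing the midpoint at x = 0.
for step in range(11):
    x = step - 5  # steady linear progress from -5 to +5
    print(f"step {step:2d}: sigmoid({x:+d}) = {sigmoid(x):.3f}")
```

Steady input, sudden output: the same picture the article draws for horses.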

12/9/2025, 1:52:39 AM


by: leowoo91

We still have chess grandmasters, in case you haven't noticed...

12/9/2025, 2:16:47 AM


by: chairmansteve

4,000 questions a month from new hires. How many of those were repeated many times? A lot. So what if they'd built a wiki?

I am not an AI sceptic; I use it for coding. But this article is not compelling.

12/9/2025, 5:19:26 AM


by: johnsmith1840

I mean, it's hard to argue against the idea that if we invented a human in a box (AGI), human work would become irrelevant. But I don't know how anyone can watch current AI and say we have that.

The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being in the first generation of humans able to see that is a super lucky experience.

Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years; that really just means we don't know.

12/9/2025, 2:08:36 AM


by: narrator

Wait till the robots arrive. That they will know how to do a vast range of human skills, some of which people train their whole lives for, will surprise people the most. The future shock I get from Claude Code, knowing how long stuff takes the hard way, especially for niche, difficult-to-research topics like the alternative applicable designs of deep learning models for a modeling task, is a thing of wonder. Imagine a master marble carver shows up at an exhibition, and some sci-fi author has just had robots carve a perfect, beautiful rendition of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.

12/9/2025, 1:38:47 AM


by: john-radio

I've never visited this blog before, but I really enjoy the synthesis here of programming skill (at least enough skill to render quick graphs and serve them via a blog) and writing skill. It kind of reminds me of the way xkcd likes to drive home its ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay; and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel it adds to the argument without distracting from it.

(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is complete.)

12/9/2025, 2:19:52 AM


by: tomxor

Terrible comparison.

Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with chess: a clearly defined problem in a finite space with a binary, measurable outcome. Funny how chess AI replacing humans in general was never considered a serious possibility by most.

Now LLMs: what is their purpose? What is the purpose of a human?

I'm not denying that some legitimate yet tedious human tasks amount to regurgitating text... and a fuzzy text predictor can do a fairly good job of that at lower cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting; not a coincidence).

They really are _just_ text predictors, trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now; we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is far off the mark.

12/9/2025, 3:04:32 AM


by: wrs

Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?

12/9/2025, 1:42:11 AM


by: pansa2

> *90% of the horses in the US disappeared*

Where did they go?

12/9/2025, 3:01:36 AM


by: globular-toast

And what happened to the human population? It skyrocketed. So humans are going to be replaced by AI and the human population will skyrocket again? This analogy doesn't work.

12/9/2025, 7:19:17 AM


by: glitchc

Conclusion: Soylent..?

12/9/2025, 2:16:35 AM


by: WhyOhWhyQ

Humans design the world to our benefit, horses do not.

12/9/2025, 1:47:25 AM


by: AstroBen

Cool, now let's make a big list of technologies that *didn't* take off like they were expected to.

12/9/2025, 1:48:44 AM


by: mrtesthah

LLMs can *only* hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.

12/9/2025, 5:36:41 AM


by: conartist6

I thought this was going to be about how much more intelligent horses are than AIs, and I was disappointed.

12/9/2025, 2:02:55 AM


by: blondie9x

This post is kind of sad. It feels like he's advocating for human depopulation, since the trajectory would align with the 93% decline in horse populations too.

12/9/2025, 2:26:47 AM


by: fizlebit

yeah, but machines don't produce horseshit... or do they? (said in the style of Vsauce)

12/9/2025, 1:55:15 AM


by: echelon

> And not very long after, 93 per cent of those horses had disappeared.

> I very much hope we'll get the two decades that horses did.

> But looking at how fast Claude is automating my job, I think we're getting a lot less.

This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.

Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open-source models ultimately win.

I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks, even the top leadership.

Meanwhile Google, apart from perhaps Kilpatrick, is just silent.

12/9/2025, 1:51:55 AM


by: adventured

It's astounding how subtly anti-AI HN has become over the past year, even as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

As the potential of AI technical agents has gone from an interesting discussion to an extraordinarily obvious outcome, HN has comically shifted negative in tone on AI. They doth protest too much.

I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs, so those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as with other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening; it's already far too late.

12/9/2025, 1:42:16 AM


by: kangs

hello faster horses

12/9/2025, 1:32:25 AM