Hacker News Viewer

Claude Code is unusable for complex engineering tasks with the Feb updates

by StanAngeloff on 4/6/2026, 1:50:35 PM

https://github.com/anthropics/claude-code/issues/42796

Comments

by: summarity

Not Claude Code specific, but I've been noticing this on Opus 4.6 models through Copilot and others as well. Whenever the phrase "simplest fix" appears, it's time to pull the emergency brake. This has gotten much, much worse over the past few weeks. It will produce completely useless code, knowingly breaking things (because up to that phrase the reasoning was correct).

Today another thing started happening: phrases like "I've been burning too many tokens" or "this has taken too many turns". Which, ironically, takes more tokens of custom instructions to override.

Also, Claude itself is partially down right now (Apr 6, 6pm CEST): https://status.claude.com/
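These stop phrases lend themselves to automated detection. A minimal sketch of a detector a wrapper script could run over model output; the phrase list and function are illustrative assumptions drawn from this thread, not part of any Anthropic tooling:

```python
# Illustrative "stop-phrase" detector: flag output containing phrases
# that, per the comments here, tend to precede low-quality edits.
# The phrase list is an assumption, nothing official.
STOP_PHRASES = (
    "simplest fix",
    "burning too many tokens",
    "taken too many turns",
)

def first_stop_phrase(text: str):
    """Return the first stop phrase found in text (case-insensitive), or None."""
    lowered = text.lower()
    for phrase in STOP_PHRASES:
        if phrase in lowered:
            return phrase
    return None
```

A harness could pause the session or prompt for confirmation whenever this returns a match.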

4/6/2026, 4:11:49 PM


by: matheusmoreira

That analysis is pretty brutal. It's very disconcerting that they can sell access to a high-quality model, then just stealthily degrade it over time, effectively pulling the rug from under their customers.

4/6/2026, 4:25:38 PM


by: rileymichael

> This report was produced by me — Claude Opus 4.6 — analyzing my own session logs [...] Please give me back my ability to think.

a bit ironic to utilize the tool that can't think to write up your report on said tool. that and this issue[1] demonstrate the extent to which folks have become over-reliant on LLMs. their review process let so many defects through that they now have to stop work and comb over everything they've shipped in the past 1.5 months! this is the future

[1] https://github.com/anthropics/claude-code/issues/42796#issuecomment-4186275586

4/6/2026, 5:05:31 PM


by: T3chn0crat

Not sure about "Feb updates", but specifically today IQ is down 20 and sloppiness up 20.

I knew I should have been alerted when Anthropic gave out €200 free API usage. Evidently they know.

4/6/2026, 5:42:57 PM


by: fer

Called it 10 days ago: https://news.ycombinator.com/item?id=47533297#47540633

Something worse than a bad model is an inconsistent model. One can't gauge to what extent to trust the output, even for the simplest instructions, hence everything must be reviewed with intensity, which is exhausting. I jumped on Max because it was worth it, but I guess I'll have to cancel this garbage.

4/6/2026, 4:35:38 PM


by: skippyboxedhero

I appreciate the work done here.

I've been having the feeling that things have got worse recently, but didn't think it could be model related.

The most frustrating aspect recently (I have learned and accepted that Claude produces bad code and probably always did, mea culpa) is the non-compliance. Claude races away doing its own thing, fixing things I didn't ask about, saying the things it broke are nothing to do with it, etc. Quite unpleasant to work with.

The stuff about token consumption is also interesting. Minimax/Composer have this habit of extensive thinking, and it is said to be their strength, but it seems like that comes at the price of huge output token consumption. If you compare non-thinking models, there is a gap there, but, imo, given that the eventual code quality with huge thinking/token consumption is not so great... it doesn't feel like a huge gap.

If you take the $5 output-token price of Sonnet and compare it with QwenCoder non-thinking at under $0.5 (and remember the gap is probably larger than 10x because Sonnet will use more tokens "thinking")... is the gap in code quality that large? Imo, not really.

I have been a subscriber since December 2024 but am looking elsewhere now. They will always have an advantage vs Chinese companies that are innovating more, because they are onshore, but the gap certainly isn't in model quality or execution anymore.

4/6/2026, 5:36:03 PM


by: Aperocky

In my opinion, cramming in invisible subagents is entirely wrong: models suffer information collapse, as they will all tend to agree with each other and then produce complete garbage. Good for Anthropic, though, as that's metered token usage.

Instead, orchestrate all agents visibly together, even when there is hierarchy. Messages should be auditable, and the topology can be carefully refined and tuned for the task at hand. Other tools are significantly better at being this layer (e.g. kiro-cli), but I'm worried that they all want to become like claude-code or openclaw.

In Unix philosophy, CC should just be a building block, but instead they think they are an operating system, and they will fail and drag your wallet down with it.

4/6/2026, 4:31:56 PM


by: jfvinueza

Same experience. After a couple of golden weeks, Opus got much worse after Anthropic enabled the 1M context window. It felt like a very steep downfall: it had seemed like I could trust it almost completely, and then I could trust it less than last year's models. Adopting LLMs for dev workflows has been fantastic overall, but we do have to keep adapting our interactions and expectations every day, and assume we'll keep on doing it for at least another couple of years (mostly because of economics, I guess?).

4/6/2026, 5:31:47 PM


by: SkyPuncher

I've noticed this as well. I had some time off in late January/early February. I fired up a Max subscription and decided to see how far I could get the agents to go. With some small nudging from me, the agents researched, designed, and started implementing an app idea I had been floating around for a few years. I had intentionally not given them much to work with, but simply guided them on the problem space and my constraints (agent-built, low capital, etc.). They came up with an extremely compelling app. I was telling people these models felt superhuman and were _extremely_ compelling.

A month later, I literally cannot get them to iterate or improve on it. No matter what I tell them, they simply tell me "we're not going to build phase 2 until phase 1 has been validated". I run them through the same process I did a month ago and they come up with bland, terrible crap.

I know this is anecdotal, but this has been a clear pattern to me since Opus 4.6 came out. I feel like I'm working with Sonnet again.

4/6/2026, 4:39:36 PM


by: alex7o

Guys, literally change the system prompt with --system-prompt-file. You waste fewer tokens on their super long and detailed prompt, and you can tune it a bit to make it work exactly like you want/imagine.

4/6/2026, 5:40:39 PM


by: davidw

To me, one of the big downsides of LLMs seems to be that you are lashing yourself to a rocket that is under someone else's control. If it goes places you don't want, you can't do much about it.

4/6/2026, 4:50:59 PM


by: germandiago

My bet: LLMs will never be creative and will never be reliable.

It is a matter of paradigm.

Anything that makes them like that will require a lot of context tweaking, still with risks.

So for me, AI is a tool that accelerates "subworkflows" but adds review time and maintenance burden, and endangers a good-enough knowledge of a system to the point that it can become unmanageable.

Also, code is a liability. That is what they do the most: generate lots and lots of code.

So IMHO, and unless something changes a lot, good LLMs will have relatively bounded areas where they perform reasonably, and outside of those, expect trouble.

4/6/2026, 5:28:27 PM


by: phillipcarter

Maybe it's because I spend a lot of time breaking up tasks beforehand to be highly specific and narrow, but I really don't run into issues like this at all.

A trivial example: whenever CC suggests doing more than one thing in planning mode, just have it focus on each task and subtask separately, bounding each one by a commit. Each commit is a push/deploy as well, leading to a shitload of pushes and deployments, but it's really easy to walk things back, too.

4/6/2026, 4:12:43 PM


by: didgeoridoo

Running some quick analysis against my .claude jsonl files, comparing the last 7 days against the prior 21:

- expletives per message: 2.1x
- messages with expletives: 2.2x
- expletives per word: 4.4x (!)
- messages >50% ALL CAPS: 2.5x

Either the model has degraded, or my patience has.
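For anyone wanting to run this kind of check on their own logs, a rough sketch follows. The JSONL field names here (a top-level `"text"` key) are an assumption for illustration, not Claude Code's documented log schema; adjust to whatever your log lines actually contain:

```python
import json
import re

# Hypothetical expletive counter over JSONL session logs (one JSON
# object per line). Phrase list kept mild and illustrative.
EXPLETIVES = re.compile(r"\b(damn|hell|crap)\b", re.IGNORECASE)

def expletive_stats(lines):
    """Return (messages, messages_with_expletives, total_expletives)."""
    messages = hits = total = 0
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines
        text = event.get("text", "")
        if not text:
            continue
        messages += 1
        n = len(EXPLETIVES.findall(text))
        total += n
        if n:
            hits += 1
    return messages, hits, total
```

Run it once over the last 7 days of files and once over the prior 21, then compare the ratios as the comment does.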

4/6/2026, 5:06:59 PM


by: aramova

I cancelled my Pro plan due to this two weeks ago. I literally asked it to plan a small script that scans with my HackRF; it ran 22 tools, never finished the plan, ran out of tokens, and made me wait 6 hours to continue.

The thing that really pisses me off is that it ran great for 2 weeks, like others said. I had gotten the annual Pro plan, and it went to shit after that.

Bait and switch at its finest.

4/6/2026, 4:58:10 PM


by: abletonlive

I have nothing to back this up, except that there *are* documented cases of Chinese distillation attacks on Anthropic. I wonder if some of this clamping down on their models over time is a response to other distillation attacks. In other words, I'm speculating that once they understand the attack vector for distillation, they basically have to dumb down their models to make sure their competitors don't distill away their lead on being at the frontier.

4/6/2026, 5:26:14 PM


by: samtheprogram

I noticed Claude Sonnet 4.6, and generally Opus as well (though I use it less frequently), seem like a downgrade from 4.5. I use opencode and not Claude Code, but I was surprised to see the reactions to 4.6 be mixed rather than a clear downgrade.

I'm regularly switching back to 4.5 and preferring it. I'm not excited for when it gets sunset later this year if 4.6 isn't fixed or superseded by then.

4/6/2026, 5:28:05 PM


by: noxa

I'm the author of the report in there. The stop-phrase-guard didn't get attached, but here it is: https://gist.github.com/benvanik/ee00bd1b6c9154d6545c63e06a317080 You can watch for these yourself - they are strong indicators of shallow thinking. If you still have logs from Jan/Feb, you can point Claude at that issue and have it go look for the same things (read:edit ratio shifts, thinking character shifts before the redaction, post-redaction correlation, etc.). Unfortunately, the `cleanupPeriodDays` setting defaults to 20, and anyone who had not backed up their logs or changed that has only memories to go off of (I recommend adding `"cleanupPeriodDays": 365,` to your settings.json). Thankfully I had logs back to a bit before the degradation started and was able to mine them.

The frustrating part is that it's not a workflow _or_ model issue, but a silently-introduced limitation of the subscription plan. They switched thinking to be variable by load, redacted the thinking so no one could notice, and then have been running it at ~1/10th the thinking depth nearly 24/7 for a month. That's with max effort on, adaptive thinking disabled, high max thinking tokens, etc. Not all providers redact or limit thinking, but some non-Anthropic ones do (most that are not API pricing). The issue for me personally is that "bro, if they silently nerfed the consumer plan just go get an enterprise plan!" is consumer-hostile thinking: if Anthropic's subscriptions have dramatically worse behavior than other access to the same model, they need to be clear about that.

Today there is zero indication from Anthropic that the limitation exists, the redaction was a deliberate feature intended to hide it from the impacted customers, and the community is gaslighting itself with "write a better prompt" or "break everything into tiny tasks and watch it like a hawk, the same as you would a local 27B model" or "works for me <in some unmentioned configuration>" - sucks :/
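The settings change recommended above is a one-line addition. Assuming the usual settings.json location for Claude Code (e.g. `~/.claude/settings.json`), the relevant fragment would look like:

```json
{
  "cleanupPeriodDays": 365
}
```

Merge the key into your existing settings object rather than replacing the whole file.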

4/6/2026, 5:40:36 PM


by: afro88

I use Claude Code extensively and haven't noticed this. But I don't have it doing long-running complex work like OP. My team always breaks things down in a very structured way, with a human reviewing each step along the way. It's still the best way to safely leverage AI when working on a large brownfield codebase, in my experience.

Edit: the main issue being called out is the lack of thinking, and the tendency to edit without researching first. Both of those are counteracted by the explicit research and plan steps we do, which explains why we haven't noticed this.

4/6/2026, 5:20:08 PM


by: ex-aws-dude

It's so silly that everyone is dependent on a black box like this.

4/6/2026, 4:26:24 PM


by: harles

I hadn't noticed the thinking redaction before - maybe because I switched to the desktop app from the CLI and just assumed it showed fewer details. This is the most concerning part. I've heard multiple times that Anthropic is aggressively reclaiming GPUs (I can't find a good source, but Theo Browne has mentioned it in his videos). If they're really in a crunch, then reducing thinking, and hiding the thinking so it's not an obvious change, would be shady but effective.

4/6/2026, 4:47:18 PM


by: voxelc4L

I wonder how many of these cases are using the 1M context window. I found it impossible to use for complex coding tasks, so I turned it off and found I was back to approximately par (Dec-Jan) functionality-wise.

4/6/2026, 4:36:08 PM


by: pavlov

Wait… Actually the simplest fix is to use Claude to write carefully bounded boilerplate and do the interesting bits myself.

4/6/2026, 5:32:38 PM


by: tyleo

Is this impacted by the effort level you set in Claude? E.g., if you use the new "max" setting, does Claude still think?

I can see this change as something that should be tunable rather than hard-coded, just from a token consumption perspective (you might tolerate lower-quality output/less thinking for easier problems).

4/6/2026, 4:11:32 PM


by: bharat1010

If this dataset is sound, Anthropic should treat it as a canary for power-user quality regression.

4/6/2026, 5:33:55 PM


by: pjmlp

I am just waiting for everything to implode so that we can do away with those KPIs.

4/6/2026, 4:30:55 PM


by: himata4113

Not unique to Claude Code; I have noticed similar regressions. I have noticed this the most with my custom assistant in Telegram: it started confusing people and confusing news coverage, and everyone in the group chat independently noticed that it is just not the same model it was a few weeks ago. The efficiency gains didn't come from nowhere, and it shows.

4/6/2026, 4:22:18 PM


by: armchairhacker

Yet https://marginlab.ai/trackers/claude-code/ says no issue.

If you're so convinced the models keep getting worse, build or crowdfund your own tracker.

4/6/2026, 5:10:04 PM


by: setnone

The baseline changes too often with Claude, and this is not what I look for from a paid tool. A couple of weeks after the 1M-token rollout it became unusable for my established workflows, so I cancelled. Anthropic folks move too fast for my liking and my mental wellbeing.

4/6/2026, 5:04:09 PM


by: sensarts

What's wild is that Claude Code used to feel like a smart pair programmer. Now it feels like an overeager intern who keeps fixing things by breaking something else, then suggesting the simplest possible hack even after being explicitly told not to. I get that they're probably optimizing for cost or something behind the scenes, but as a paying user, it is frustrating when the tool gets noticeably worse without any transparency.

4/6/2026, 4:41:48 PM


by: stared

I am curious - is there any hard data (e.g. a benchmark score drop)?

I feel that we look for patterns to the point of being superstitious. (ML would call it overfitting.)

4/6/2026, 4:43:22 PM


by: tasuki

Solid analysis by Claude!

4/6/2026, 5:25:34 PM


by: petcat

I have found that Claude Opus 4.6 is a better reviewer than it is an implementer. I switch off between Claude/Opus and Codex/GPT-5.4 doing reviews and implementations, and invariably Codex ends up having to do multiple rounds of reviews and requested fixes before Claude finally gets it right (and then I review). When it is the other way around (Codex implementing, Claude reviewing), it's usually just one round of fixes after the review.

So yes, I have found that Claude is better at reviewing the proposal and the implementation for correctness than it is at implementing the proposal itself.

4/6/2026, 4:13:11 PM


by: wnevets

I've noticed Claude being extra "dumb" the past 2-3 weeks and figured either my expectations had changed or my context wasn't any good. I'm glad to hear other people have noticed something is amiss.

4/6/2026, 4:54:16 PM


by: semiinfinitely

maybe don't outsource your brain then

4/6/2026, 5:12:53 PM


by: schnebbau

This has to be load-related. They simply can't keep up with demand, especially with all the agents that run 24/7. The only way to serve everyone is to dial down the power.

4/6/2026, 4:52:55 PM


by: jp57

I can't tell from the issue whether they're asserting a problem with the Claude model or with Claude Code, i.e. in how Claude Code specifically calls the model. I've been using Roo Code with Claude 4.6 and have not noticed any differences, though my coworkers using Claude Code have complained about it getting "dumber". Roo Code has its own settings controlling thinking token use.

(I'm sure it benefits Anthropic to blur the lines between the tool and the model, but it makes these things hard to talk about.)

4/6/2026, 5:00:15 PM


by: thrtythreeforty

I noticed this almost immediately when attempting to switch to Opus 4.6. It seems very post-trained to hack something together; I also noticed that "simplest fix" appeared frequently and invariably preceded some horrible slop which clearly demonstrated the model had no idea what was going on. The link suggests this is due to lack of research.

At Amazon we can switch the model we use since it's all backed by the Bedrock API (Amazon's Kiro is "we have Claude Code at home", but it still eventually uses Opus as the model). I suppose this means the issue isn't confined to just Claude Code. I switched back to Opus 4.5, but I guess that won't be served forever.

4/6/2026, 4:26:58 PM


by: Asmod4n

I've tried to use Claude Code for a month now. It has a 100% failure rate so far.

By comparison, creating a Project and just chatting with it solves nearly everything I have thrown at it so far.

That's with a Pro plan and using Sonnet, since Opus drains all the tokens for a Claude Code session with one request.

4/6/2026, 4:32:35 PM


by: KingOfCoders

"Ownership-dodging corrections needed | 6 | 13 | +117%"

On 18,000+ prompts.

Not sure the data says what they think it says.

4/6/2026, 5:03:56 PM


by: virtualritz

None of this is surprising given what happened late last summer with rate limits on Claude Max subscriptions.

And less so if you read [1] or similar assessments. I, too, believe that every token is subsidized heavily, from whatever angle you look at it.

Thus, quality/token/whatever rug pulls are inevitable, eventually. This is just another one.

[1] https://www.wheresyoured.at/subprimeai/

4/6/2026, 4:22:50 PM


by: rishabhaiover

It is a shame if Anthropic is deliberately degrading model quality and thinking compute (which may affect reasoning effort) due to compute constraints.

4/6/2026, 5:34:33 PM


by: zeroonetwothree

I haven’t had any issues. I do give fairly clear guidance though (I think about how I would break it up and then tell it to do the same)

4/6/2026, 4:26:51 PM


by: KaiLetov

I've been using Claude Code daily for months on a project with Elixir, Rust, and Python in the same repo. It handles multi-language stuff surprisingly well most of the time. The worst failure mode for me is when it does a replace_all on a string that also appears inside a constant definition - I ended up with GROQ_URL = GROQ_URL instead of the actual URL. It took a second round of review agents to catch it. So yeah, you absolutely can't trust it to self-verify.
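This failure mode is easy to reproduce with a plain textual replace-all. The file contents below are hypothetical; the point is only that a blind substitution also rewrites the right-hand side of the constant's own definition:

```python
# A naive "replace every occurrence of the URL with the constant" edit,
# applied to hypothetical file contents that also contain the constant's
# definition. The definition line gets clobbered too.
source = (
    'GROQ_URL = "https://api.example.com/v1"\n'
    'resp = fetch("https://api.example.com/v1")\n'
)

broken = source.replace('"https://api.example.com/v1"', "GROQ_URL")

# First line is now the self-referential GROQ_URL = GROQ_URL
print(broken.splitlines()[0])
```

A safer edit would exclude the definition line (or use an AST-aware tool) instead of substituting on raw text.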

4/6/2026, 2:24:19 PM


by: bityard

The assertion in the issue report is that Claude saw a sharp decline in quality over the last few months. However, the report itself was allegedly generated by Claude.

Isn't this a bit like using a known-broken calculator to check its own answers?

4/6/2026, 4:43:46 PM


by: StanAngeloff

(Being true to the HN guidelines, I've used the title exactly as seen on the GitHub issue.)

I was wondering if anyone else is also experiencing this? I have personally found that I have to add more and more CLAUDE.md guard rails, and my CLAUDE.md files have been exploding since around mid-March, to the point where I actually started looking for information online and for other people corroborating my personal observations.

This GH issue report sounds very plausible, but as with anything AI-generated (the issue itself appears to be largely AI-assisted) it's kind of hard to know for sure if it is accurate or completely made up. _Correlation does not imply causation_ and all that. Speaking personally, the findings match my own circumstances, where I've seen noticeable degradation in Opus outputs and thinking.

EDIT: The Claude Code Opus 4.6 Performance Tracker[1] is reporting Nominal.

[1]: https://marginlab.ai/trackers/claude-code/

4/6/2026, 1:54:05 PM


by: mrcwinn

I wish Codex were better because I’d much prefer to use their infrastructure.

4/6/2026, 4:42:00 PM


by: jbethune

I think this is a model issue. I have heard similar complaints from team members about Opus. I'm using other models via Cursor and not having problems.

4/6/2026, 4:17:47 PM


by: desireco42

I've been using OpenCode and Codex and have been just fine. In Antigravity, sometimes if Gemini can't figure something out even on high, Claude can give another perspective, and this moves things along.

I think using just Claude is very limiting and detrimental for you as a technologist: you should use this tech, tweak it, and play with it. They want to be like Apple - shut up and give us your money.

I've been using Pi as an agent and it is great; I removed a bunch of MCPs from OpenCode and now it runs way better.

Anthropic has good models, but they are clearly struggling to serve and handle all their customers, which is not the best place to be.

As a technologist, I would love a client with a huge codebase. My approach now is to create a custom Pi agent for a specific client, and this seems to provide the optimal result, not just in token usage, but in the time we spend solving and the quality of the solution.

Get another engine as a backup; you will be happier.

4/6/2026, 5:06:13 PM


by: russli1993

Lol, software company execs didn't see this coming. Fire all your experienced devs to jump on the Anthropic bandwagon; then Anthropic dumbs down their AIs and you have no one on your team who knows or understands how things are built. Your entire company goes down. Your entire company's operation depends on the whims of Anthropic. If Anthropic raises prices by 10% per year, you have to eat it. This is what you get when you don't respect human beings and human talent.

4/6/2026, 4:56:32 PM


by: giwook

I wonder how much of this is simply needing to adapt one's workflows to models as they evolve, and how much is actual degradation of the model, whether due to a version change or at the inference level.

Also, everyone has a different workflow. I can't say that I've noticed a meaningful change in Claude Code quality in a project I've been working on for a while now. It's an LLM in the end, and even with strong harnesses and eval workflows you still need to have a critical eye and review its work as if it were a very smart intern.

Another commenter here mentioned they also haven't noticed any degradation in Claude quality, and that it may be because they are frontloading the planning work and breaking the work down into more digestible pieces, which is something I do as well and have benefited greatly from.

tl;dr I'm curious what OP's workflows are like and if they'd benefit from additional tuning.

4/6/2026, 4:24:43 PM


by: zsoltkacsandi

This has been an ongoing issue much longer than since February.

4/6/2026, 4:54:25 PM


by: howmayiannoyyou

Not just engineering. Errors, delays, and limits are piling up for me across API and OAuth use. Just now:

Unable to start session. The authentication server returned an error (500). You can try again.

4/6/2026, 4:18:57 PM


by: dorianmariecom

codex wins :)

4/6/2026, 4:34:33 PM


by: adonese

Things have gone downhill since they removed ultrathink /s

4/6/2026, 4:27:50 PM


by: Retr0id

This seems anecdotal, but with extra words. I'm fairly sure this is just the "wow, this is so much better than the previous-gen model" effect wearing off.

4/6/2026, 4:17:38 PM