Opus 4.7 to 4.6 Inflation is ~45%
by anabranch on 4/18/2026, 4:05:43 PM
https://tokens.billchambers.me/leaderboard
Comments
by: dakiol
We dropped Claude. It's pretty clear this is a race to the bottom, and we don't want a hard dependency on another multi-billion dollar company just to write software.

We'll be keeping an eye on open models (of which we already make good use). I think that's the way forward. Actually, it would be great if everybody put more focus on open models; perhaps we could come up with something like the "linux/postgres/git/http/etc" of LLMs: something we can all benefit from without it being monopolized by a single billionaire's company. Wouldn't it be nice if we didn't need to pay for tokens? Paying for infra (servers, electricity) is already expensive enough.
4/18/2026, 5:17:13 PM
by: hgoel
The bump from 4.6 to 4.7 is not very noticeable to me in improved capabilities so far, but the faster consumption of limits is very noticeable.

I hit my 5-hour limit within 2 hours yesterday. Initially I was trying the batched mode for a refactor, but cancelled after seeing it take 30% of the limit within 5 minutes. A serial approach consumed less (took ~50 minutes, xhigh effort, ~60% of the remaining allocation IIRC), but still very clearly burned through the limit much faster than 4.6 did.

It feels like every exchange takes ~5% of the 5-hour limit now, when it used to be maybe ~1-2%. For reference, I'm on the Max 5x plan.

For now I can tolerate it since I still have plenty of headroom in my limits (used ~5% of my weekly; I don't use Claude heavily every day, so this is OK), but I hope they either offer more clarity on this or improve the situation. The effort setting is still a bit too opaque to really help.
4/18/2026, 5:31:08 PM
by: andai
For a fair comparison you need to look at total cost, because 4.7 produces significantly fewer output tokens than 4.6, and seems to cost significantly less on the reasoning side as well.

Here is a comparison for 4.5, 4.6 and 4.7 (Output Tokens section):

https://artificialanalysis.ai/?models=claude-opus-4-7%2Cclaude-opus-4-6-adaptive%2Cclaude-opus-4-5-thinking

4.7 comes out slightly *cheaper* than 4.6. But 4.5 is about half the cost:

https://artificialanalysis.ai/?models=claude-opus-4-7%2Cclaude-opus-4-6-adaptive%2Cclaude-opus-4-5-thinking#cost

Notably, the cost of *reasoning* has been cut almost in half from 4.6 to 4.7.

I'm not sure what that looks like for most people's workloads, i.e. what the cost breakdown looks like for Claude Code. I expect it's heavy on both input and reasoning, so I don't know how that balances out now that input is more expensive and reasoning is cheaper.

On reasoning-heavy tasks, it might be cheaper. On tasks which don't require much reasoning, it's probably more expensive. (But for those, I would use Codex anyway ;)
4/18/2026, 6:39:19 PM
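The blended-cost question raised above comes down to simple arithmetic. A minimal sketch, assuming entirely hypothetical per-million-token prices and token counts (none of these numbers are Anthropic's actual rates):

```python
def request_cost(input_tokens: int, reasoning_tokens: int, output_tokens: int,
                 price_in: float, price_reason: float, price_out: float) -> float:
    """Blended cost of one request, with prices in $ per million tokens."""
    return (input_tokens * price_in
            + reasoning_tokens * price_reason
            + output_tokens * price_out) / 1_000_000

# Hypothetical workload, heavy on input. Placeholder prices: the old model
# at $5/M input and $25/M reasoning/output; the new model seeing ~45% more
# input tokens but reasoning priced at roughly half.
cost_old = request_cost(100_000, 20_000, 5_000, 5.0, 25.0, 25.0)
cost_new = request_cost(145_000, 10_000, 4_000, 5.0, 12.5, 25.0)
print(f"old: ${cost_old:.3f}  new: ${cost_new:.3f}")
```

Under these made-up numbers the reasoning savings outweigh the input inflation; a workload with less reasoning would tip the other way, which is the ambiguity the comment points at.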
by: kalkin
AFAICT this uses a token-counting API to count the same prompt's tokens under both models' tokenizers, so it's measuring the tokenizer change in isolation. Smarter models also sometimes produce shorter outputs and therefore fewer output tokens. That doesn't mean Opus 4.7 necessarily nets out cheaper (it might still be more expensive), but this comparison on its own isn't very useful.
4/18/2026, 5:04:20 PM
by: fathermarz
I have been seeing this messaging everywhere and I have not noticed it; I have had the inverse experience with 4.7 over 4.6.

I think people aren't reading the system cards when they come out. They explicitly explain that your workflow needs to change. They added more levels of effort, and I see no mention of that in this post.

Did y'all forget Opus 4? That was not long ago, and Claude was essentially unusable then. We are at peak wizardry right now and no one is talking positively. It's all doom and gloom around here these days.
4/18/2026, 6:38:25 PM
by: someuser54541
Should the title here be 4.6 to 4.7 instead of the other way around?
4/18/2026, 4:51:02 PM
by: rectang
For now, I'm planning to stick with Opus 4.5 as a driver in VSCode Copilot.

My workflow is to give the agent pretty fine-grained instructions, and I'm always fighting agents that insist on doing too much. Opus 4.5 is the best of all the agents I've tried at following the guidance to do only-what-is-needed-and-no-more.

Opus 4.6 takes longer, overthinks things and changes too much; the high-powered GPTs are similarly flawed. Other models such as Sonnet aren't nearly as good as Opus at discerning my intentions from less-than-perfectly-crafted prompts.

Eventually, I quit experimenting and just started using Opus 4.5 exclusively, knowing this would all be different in a few months anyway. Opus cost more, but the value was there.

But now I see that 4.7 is going to replace both 4.5 and 4.6 in VSCode Copilot, and with a 7.5x modifier. Based on the description, this is going to be a price hike for slower performance, and, if the 4.5 to 4.6 change is any guide, more overthinking targeted at long-running tasks rather than fine-grained ones. For me, that seems like a step backwards.
4/18/2026, 5:27:10 PM
by: gsleblanc
It's increasingly looking naive to assume scaling LLMs is all you need to get to full white-collar worker replacement. The attention mechanism / hopfield network is fundamentally modeling only a small subset of the full human brain, and all the increasing sustained hype around bolted-on solutions for "agentic memory" is, in my opinion, glaring evidence that these SOTA transformers alone aren't sufficient even when you just limit the space to text. Maybe I'm just parroting Yann LeCun.
4/18/2026, 5:54:50 PM
by: glerk
I'd be OK with paying more if the results were good, but it seems like Anthropic is going for the Tinder/casino intermittent-reinforcement strategy: optimized to keep you spending tokens instead of achieving results.

And yes, Claude models are generally more fun to use than GPT/Codex. They have a personality. They have an intuition for design/aesthetics. Vibe-coding with them feels like playing a video game. But the result is almost always some version of cutting corners: tests removed to make the suite pass, duplicate code everywhere, wrong abstractions, type safety disabled, hard requirements ignored, etc.

These issues are not resolved in 4.7, no matter what the benchmarks say, and I don't think there is any interest in resolving them.
4/18/2026, 5:58:01 PM
by: tiffanyh
I was using Opus 4.7 just yesterday to help implement best practices on a single-page website.

After just ~4 prompts I blew past my daily limit. Another ~7 prompts and I blew past my weekly limit.

The entire HTML/CSS/JS was less than 300 lines of code.

I was shocked how fast it exhausted my usage limits.
4/18/2026, 5:18:42 PM
by: couchdb_ouchdb
Comments here overall do not reflect my experience -- I'm puzzled how the vast majority are using this technology day to day. 4.7 is absolute fire and an upgrade over 4.6.
4/18/2026, 6:10:00 PM
by: autoconfig
My initial experience with Opus 4.7 has been pretty bad and I'm sticking with Codex. But these results are meaningless without comparing outcomes. Whether the extra token burn is bad or not depends on whether it improves some quality / task-completion metric. Am I missing something?
4/18/2026, 5:40:14 PM
by: templar_snow
Brutal. I've been noticing that 4.7 eats my Max subscription like crazy even when I do my best to juggle tasks across Sonnet 4.6 Medium and Haiku (or tell 4.7 to use them as subagents). Would love to know if anybody's found good token-saving approaches.
4/18/2026, 5:24:58 PM
by: QuadrupleA
One thing I don't see mentioned often: the OpenAI API's automatic token-caching approach results in MASSIVE cost savings on agent stuff. Anthropic's deliberate caching is a pain in comparison. I wish they'd just keep the KV cache hot for 60 seconds or so, so we don't have to pay the input costs over and over again for every growing conversation turn.
4/18/2026, 6:23:38 PM
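The "deliberate caching" mentioned above refers to tagging a stable prompt prefix with a `cache_control` block in the Messages API request. A minimal sketch of such a request body; the model id and context text are placeholders:

```python
# Sketch of an Anthropic Messages API request body using explicit prompt
# caching: the long, stable prefix is tagged so the server can reuse its
# KV cache across turns instead of re-billing the full input each time.
LONG_STABLE_CONTEXT = "...system instructions, project files, etc..."

payload = {
    "model": "claude-opus-4-7",  # placeholder model id
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": LONG_STABLE_CONTEXT,
            # Marks a cache breakpoint: the prefix up to and including this
            # block is eligible for reuse on subsequent requests.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Next turn of the conversation"}],
}
print(payload["system"][0]["cache_control"])
```

By contrast, OpenAI's API applies comparable prefix caching automatically, which is the convenience gap the comment is pointing at.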
by: bobjordan
I've spent the past 4+ months building an internal multi-agent orchestrator for coding teams. Agents communicate through a coordination protocol we built, and all inter-agent messages plus runtime metrics are logged to a database.

Our default topology is a two-agent pair: one implementer and one reviewer. In practice, that usually means Opus writing code and Codex reviewing it.

I just finished a 10-hour run with 5 of these teams in parallel, plus a Codex run manager. Total swarm: 5 Opus 4.7 agents and 6 Codex/GPT-5.4 agents.

Opus was launched with:

`CLAUDE_AUTOCOMPACT_PCT_OVERRIDE=35 claude --dangerously-skip-permissions --model 'claude-opus-4-7[1M]' --effort high --thinking-display summarized`

Codex was launched with:

`codex --dangerously-bypass-approvals-and-sandbox --profile gpt-5-4-high`

What surprised me was usage: after 10 hours, both my Claude Code account and my Codex account had consumed 28% of their weekly capacity from that single run.

I expected Claude Code usage to be much higher. Instead, on these settings and for this workload, both platforms burned the same share of the weekly budget.

So from this datapoint alone, I do not see an obvious usage-efficiency advantage in switching from Opus 4.7 to Codex/GPT-5.4.
4/18/2026, 5:47:27 PM
by: tailscaler2026
Subsidies don't last forever.
4/18/2026, 5:06:11 PM
by: anabranch
I wanted to better understand the potential impact of the tokenizer change from 4.6 to 4.7.

I'm surprised that it's 45%. It might go down (?) with longer-context answers, but it's still surprising. It can be more than 2x for small prompts.
4/18/2026, 4:05:43 PM
by: KellyCriterion
Yesterday, I killed my weekly limit with just three prompts and went ~$18 into extra usage on top.
4/18/2026, 5:37:51 PM
by: monkpit
Does this have anything to do with the default xhigh effort?
4/18/2026, 5:58:44 PM
by: razodactyl
If anyone's had 4.7 update any documents so far: notice how concise it is at getting straight to the point. It rewrote some of my existing documentation (using Windsurf as the harness); I'm not sure I liked the decrease in verbosity (it removed columns and combined/compressed concepts), but it makes sense with respect to the model outputting less to save cost.

To me this seems more like it's trained to be concise by default, which I guess can be countered with preference instructions if required.

What's interesting to me is that they're using a new tokeniser. Does that mean they trained a new model from scratch? Or took an existing model and further trained it with a swapped-out tokeniser?

The looped-model research / speculation is also quite interesting: if done right, there are significant speed-ups / resource savings.
4/18/2026, 5:33:07 PM
by: jimkleiber
I wonder if this is like when a restaurant introduces a new menu to increase prices.

Is Opus 4.7 so significantly different in quality that it should use that much more in tokens?

I like Claude and Anthropic a lot, and I hope it's just some weird quirk in their tokenizer or whatnot. It just seems like something changed in the last few weeks and may be going in a less-value-for-money direction, with not much being said about it. But again, it could just be some technical glitch.
4/18/2026, 5:53:55 PM
by: alphabettsy
I'm trying to understand how this is useful information on its own.

Maybe I missed it, but it doesn't tell you if it's more successful for less overall cost.

I can easily make Sonnet 4.6 cost way more than any Opus model, because while it's cheaper per prompt, it might take 10x more rounds to solve a problem (or never solve it at all).
4/18/2026, 6:07:12 PM
by: ivanfioravanti
Probably due to the new tokenizer: https://www.claudecodecamp.com/p/i-measured-claude-4-7-s-new-tokenizer-here-s-what-it-costs-you
4/18/2026, 6:05:23 PM
by: aray07
Came to a similar conclusion after running a bunch of tests on the new tokenizer.

It was on the higher end of Anthropic's range: closer to 30-40% more tokens.

https://www.claudecodecamp.com/p/i-measured-claude-4-7-s-new-tokenizer-here-s-what-it-costs-you
4/18/2026, 5:51:55 PM
by: ausbah
Is it really unthinkable that another OSS/local model will be released by DeepSeek, Alibaba, or even Meta that once again gives these companies a run for their money?
4/18/2026, 5:02:57 PM
by: napolux
Token consumption is huge compared to 4.6, even for smaller tasks. Just by "reasoning" after my first prompt this morning, I burned over 50% of the 5-hour quota.
4/18/2026, 5:44:20 PM
by: ben8bit
Makes me think the model might not actually be smarter, just more token-dependent.
4/18/2026, 5:17:26 PM
by: l5870uoo9y
My impression is that the reverse is true when upgrading from GPT-5 to GPT-5.4; it uses fewer tokens(?).
4/18/2026, 5:17:46 PM
by: matt3210
Did anyone expect the price to go down? The point of new models is to raise prices
4/18/2026, 5:10:22 PM
by: coldtea
This, the push towards per-token API charging, and the rest are just a sign of things to come once they finally establish a moat and a full monopoly/duopoly, which is also what all the specialized tools like Designer and the integrations are about.

It's going to be a very expensive game, and the masses will be left with subpar local versions. It would be like reversing the democratization of compilers and coding tooling that happened in the 90s and 00s, so that the polished, more capable tools are again all proprietary.
4/18/2026, 4:56:42 PM
by: axeldunkel
The better the tokenizer maps text to its internal representation, the better the model understands what you are saying -- or coding! But 4.7 is much more verbose in my experience, and this probably drives cost/limits a lot.
4/18/2026, 5:38:24 PM
by: alekseyrozh
Is it just me? I don't feel a difference between 4.6 and 4.7.
4/18/2026, 6:33:24 PM
by: QuadrupleA
Definitely seems like AI money got tight in the last month or two: the free beer is running out and the enshittification has begun.
4/18/2026, 6:25:48 PM
by: dackdel
They'll release 4.8 and delete everything else, and then 4.8 will cost 500% more than 4.7. I wonder what it would take for people to start using Kimi or Qwen or the like.
4/18/2026, 5:26:36 PM
by: varispeed
I spent one day with Opus 4.7 trying to fix a bug. It just ran in circles despite having the problem "in front of its eyes," with all supporting data, a thorough description of the system, a test harness that reproduces the bug, etc. While I still believe 4.7 is much "smarter" than GPT-5.4, I decided to give it a go. It was giving me dumb answers and going off the rails. After I accused it many times of being a fraud and doing it on purpose so that I'd spend more money, it fixed the bug in one shot.

Having had a taste of unnerfed Opus 4.6, I think they have a conflict of interest: if they let models give the right answer the first time, people spend less time with them and less money; but if they make the model artificially dumber (progressive reasoning, if you will), people get frustrated but spend more money.

It is likely happening because the economics don't work. Running a comparable model at comparable speed for an individual is prohibitively expensive. Now scale that to millions of users: something's gotta give.
4/18/2026, 6:21:29 PM
by: Shailendra_S
45% is brutal if you're building on top of these models as a bootstrapped founder. The unit economics just don't work anymore at that price point for most indie products.

What I've been doing is running a dual-model setup: use the cheaper/faster model for the heavy lifting where quality variance doesn't matter much, and only route to the expensive one when the output is customer-facing and quality is non-negotiable. Cuts costs significantly without users noticing any difference.

The real risk is that pricing like this pushes smaller builders toward open models or Chinese labs like Qwen, which I suspect isn't what Anthropic wants long term.
4/18/2026, 5:14:34 PM
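The dual-model setup described above boils down to a routing policy. A minimal sketch; the model identifiers and the customer-facing criterion are illustrative assumptions, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    customer_facing: bool  # quality is non-negotiable when True

# Placeholder identifiers for the cheap and expensive tiers.
CHEAP_MODEL = "small-fast-model"
EXPENSIVE_MODEL = "frontier-model"

def route(task: Task) -> str:
    """Send only customer-facing work to the expensive model."""
    return EXPENSIVE_MODEL if task.customer_facing else CHEAP_MODEL

print(route(Task("summarize internal logs", customer_facing=False)))
print(route(Task("draft the reply shown to the user", customer_facing=True)))
```

Real routers often add a quality check on the cheap model's output and escalate on failure, but even this static split captures most of the cost savings the comment describes.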
by: silverwind
Still worth it IMHO for important code, but it shows that they are hitting a ceiling in trying to improve the model, and they're trying to work around it by making the model more token-inefficient.
4/18/2026, 5:31:12 PM
by: blahblaher
Conspiracy time: they released a new version just so they could increase the price without people complaining so much ("see, this is a new version of the model, so we NEED to increase the price"), similar to how SaaS companies tack some shit onto the product so they can increase prices.
4/18/2026, 5:23:44 PM
by: mvkel
The cope is real with this model. Needing an instruction manual to learn how to prompt it "properly" is a glaring regression.

The whole magic of (pre-nerfed) 4.6 was how it magically seemed to understand what I wanted, regardless of how perfectly I articulated it.

Now Anthropic says that needing to explicitly define instructions is a "feature"?!
4/18/2026, 5:21:42 PM
by: DeathArrow
We (my wallet and I) are pretty happy with GLM 5.1 and MiniMax 2.7.
4/18/2026, 6:18:01 PM
by: bparsons
Had a pretty heavy workload yesterday, and never hit the limit in Claude Code. Perhaps they allowed for more tokens for the launch?

Claude Design, on the other hand, seemed to eat through (its own separate) usage limit very fast. Hit the limit this morning in about 45 minutes on a Max plan. I assume they are going to end up spinning that product off as a separate service.
4/18/2026, 5:51:25 PM
by: ai_slop_hater
Does anyone know what changed in the tokenizer? Does it output multiple tokens for things that were previously one token?
4/18/2026, 4:56:43 PM
by: therobots927
Wow, this is pretty spectacular. And with the losses Anthropic and OpenAI are running, don't expect this trend to change. You will get incremental output improvements for a dramatically more expensive subscription plan.
4/18/2026, 4:53:00 PM
by: justindotdev
I think it is quite clear that staying with Opus 4.6 is the way to go. On top of the inflation, 4.7 is quite... dumb. I think they lobotomized this model while prioritizing cybersecurity and blocking people from performing potentially harmful security-related tasks.
4/18/2026, 4:56:27 PM
by: micromacrofoot
The latest Qwen actually performs a little better for some tasks, in my experience.

The latest Claude still fails the car wash test.
4/18/2026, 5:18:36 PM
by: fny
I'm going to suggest what's going on here is Hanlon's razor for models: "Never attribute to malice that which is adequately explained by *a model's* stupidity."

In my opinion, we've reached some ceiling where more tokens lead only to incremental improvements. A conspiracy seems unlikely, given that all providers are still competing for customers and a 50% token increase drives their infra costs up dramatically too.
4/18/2026, 5:04:33 PM
by: monkeydust
'sixxxx, seeeeven'... sorry, I have little kids and couldn't resist, but perhaps that explains what's going on!
4/18/2026, 5:25:52 PM