Claude Opus 4.7 costs 20–30% more per session
by aray07 on 4/17/2026, 3:29:36 PM
https://www.claudecodecamp.com/p/i-measured-claude-4-7-s-new-tokenizer-here-s-what-it-costs-you
Comments
by: louiereederson
LLMs exist on a logarithmic performance/cost frontier. It's not really clear whether Opus 4.5+ represents a level shift of that frontier or merely occupies the part of the curve that delivers higher performance at rapidly diminishing returns to inference cost.<p>To me, it is hard to reject the latter hypothesis today. The fact that Anthropic is rapidly trying to raise prices may betray that its recent lead comes at the cost of dramatically higher operating costs. Its gross margins for this past quarter will be an important data point.<p>I think the tendency for model-assessment graphs to put the log of cost per token on the x-axis (e.g. Artificial Analysis' site) has obscured this dynamic.
4/17/2026, 4:13:01 PM
by: speedgoose
The "multiplier" on GitHub Copilot went from 3 to 7.5. Nice to see that it is actually only 20-30%, and that Microsoft just wants to lose money slightly slower.<p><a href="https://docs.github.com/fr/copilot/reference/ai-models/supported-models#model-multipliers" rel="nofollow">https://docs.github.com/fr/copilot/reference/ai-models/suppo...</a>
4/17/2026, 5:27:45 PM
by: _pdp_
IMHO there is a point where incremental model quality hits diminishing returns.<p>It is like comparing an 8K display to a 16K display: at normal viewing distance the difference is imperceptible, but 16K comes at a significant premium.<p>The same applies to intelligence. Sure, some users might register a meaningful bump, but if 99% can't tell the difference in their day-to-day work, does it matter?<p>A 20-30% cost increase needs to deliver a proportional leap in perceivable value.
4/17/2026, 4:34:31 PM
by: namnnumbr
The title is a misdirection. The token counts may be higher, but the cost-per-task may not be for a given intelligence level. We'll need to wait for Artificial Analysis' Intelligence Index run on this, or some other independent per-task cost analysis.<p>The final calculation assumes that Opus 4.7 uses the exact same trajectory + reasoning output as Opus 4.6. I have not verified it, but I <i>assume</i> that is not the case, given that Opus 4.7 on low thinking is strictly better than Opus 4.6 on medium, etc.
4/17/2026, 4:55:01 PM
by: _fat_santa
A question I've been asking a lot lately (really since the release of GPT-5.3) is: do I really need the more powerful model?<p>I think a big issue with the industry right now is that it's constantly chasing higher-performing models, and that comes at the cost of everything else. What I would love to see in the next few years is all these frontier AI labs going from trying to create the most powerful model at any cost to actually making the whole thing sustainable and focusing on efficiency.<p>The GPT-3 era was a taste of what the future could hold, but those models were toys compared to what we have today. We saw real gains during the GPT-4 / Claude 3 era, where they could start being used as tools but required quite a bit of oversight. Now, in the GPT-5 / Claude 4 era, I don't really think we need to go much further; we should start focusing on efficiency and sustainability.<p>What I would love the industry to focus on in the next few years is not the high end but the low end. Focus on making the 0.5B-1B parameter models better for specific tasks. I'm currently experimenting with fine-tuning 0.5B models for very specific tasks, and long term I think that's the future of AI.
4/17/2026, 5:01:32 PM
by: synergy20
That's what I feel; going to use Codex more.
4/17/2026, 6:32:58 PM
by: montjoy
It appears that they are testing using max effort. For 4.7, Anthropic acknowledges max's high token usage and recommends the new xhigh mode for most cases. So I think the real question is whether 4.7 xhigh is “better” than 4.6 max.<p>> max: Max effort can deliver performance gains in some use cases, but may show diminishing returns from increased token usage. This setting can also sometimes be prone to overthinking. We recommend testing max effort for intelligence-demanding tasks.<p>> xhigh (new): Extra high effort is the best setting for most coding and agentic use cases<p>Ref: <a href="https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-prompting-best-practices" rel="nofollow">https://platform.claude.com/docs/en/build-with-claude/prompt...</a>
4/17/2026, 5:32:50 PM
by: atonse
Just yesterday I was happy to have gotten my weekly limit reset [1]. And although I've been doing a lot of mockup work (so a lot of HTML getting written), I think the 1M token stuff is absolutely eating up tokens like CRAZY.<p>I'm already at 27% of my weekly limit in ONE DAY.<p><a href="https://news.ycombinator.com/item?id=47799256">https://news.ycombinator.com/item?id=47799256</a>
4/17/2026, 4:20:38 PM
by: sipsi
I tried to do my usual test (similar to the pelican one but a bit more complex), but it burned through the 5-hour limit in 5 minutes. Then after 5 hours I said "go on" and the results were the worst I've ever seen.
4/17/2026, 4:41:26 PM
by: uberman
On actual code, I see what you see: a 30% increase in tokens, which is in line with what they claim as well. I personally don't tend to feed technical documentation or random prose into LLMs.<p>Given that Opus 4.6 and even Sonnet 4.6 are still valid options, for me the question is not "Does 4.7 cost more than claimed?" but "What capabilities does 4.7 give me that 4.6 did not?"<p>Yesterday 4.6 was a great option, and it is too soon for me to tell whether 4.7 is a meaningful lift. If it is, then I can evaluate whether the increased cost is justified.
4/17/2026, 3:44:56 PM
by: jmward01
Claude Code seems to be getting worse on several fronts and better on others. I suspect product is shifting from 'make it great' to 'make it make as much money for us as possible, and that includes gathering data'.<p>Recently it started prompting me for feedback even though I am on API access and have disabled this. When I did a deep dive into their feedback mechanism in the past (months ago, so it has probably changed a lot since), the feedback prompt was pushing message IDs even if you didn't respond. If you are on API usage and have told them not to train on your data, then anything pushing a message ID implies it is leaking information about your session. It is hard to keep auditing them when they push so many changes, so my default is now 'they are stealing my info' instead of believing their privacy/data-use policy claims. Basically, my trust in their commitment to not training on me is eroding fast, and I am paying a premium for exactly that.
4/17/2026, 5:23:51 PM
by: JohnMakin
30% more token use, but even by their own benchmarks there don't appear to be any real wins, and there are some regressions. What's the point? It does no better on the suite of obedience/compliance tests I wrote for 4.6, and on some tests it got worse, despite their claim that it is better. Anecdotally, it was gobbling so many tokens on even the simplest queries that I immediately shut it off and went back to 4.5.<p>Why release this?
4/17/2026, 6:30:33 PM
by: margorczynski
It doesn't look good for Anthropic, especially considering they are burning billions in investor money.<p>Looks like they've lost the mandate of heaven; if OpenAI plays it right, this might be their end. Add to that the open-source models from China.
4/17/2026, 6:00:29 PM
by: taosx
Claude has become so frustrating lately that I avoid and completely ignore it. I can't identify a single cause, but I believe it's mostly the self-righteousness, and the leadership driving all the decisions, that make me distrust and disengage.
4/17/2026, 5:04:10 PM
by: JimmaDaRustla
Am I dumb, or are they not explaining what thinking level they're using? We all read the Anthropic blog post yesterday: 4.7 max consumes/produces an incredible number of tokens and is not equivalent to 4.6 max; xhigh is the new "max".
4/17/2026, 6:31:35 PM
by: Yukonv
Some broad assumptions are being made that plans map precisely to API cost. That is not the case: reverse-engineering plan usage shows cached input is free [0]. If you re-run the math with cached input removed, the usage cost is ~5-34% more. Was the token plan budget increase [1] proportional enough to account for this? Can't say with certainty. For those paying API prices, though, the hike is real.<p>[0] <a href="https://she-llac.com/claude-limits" rel="nofollow">https://she-llac.com/claude-limits</a><p>[1] <a href="https://xcancel.com/bcherny/status/2044839936235553167" rel="nofollow">https://xcancel.com/bcherny/status/2044839936235553167</a>
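The cached-input effect described above is easy to sketch. A minimal back-of-the-envelope in Python; all prices and token counts here are hypothetical placeholders, not Anthropic's actual rates:

```python
def session_cost(input_toks, cached_toks, output_toks,
                 in_price, cache_price, out_price):
    """Dollar cost of one session at per-million-token prices."""
    uncached = input_toks - cached_toks
    return (uncached * in_price
            + cached_toks * cache_price
            + output_toks * out_price) / 1_000_000

# Hypothetical session: 500k input tokens (400k of them cache hits), 50k output.
toks = dict(input_toks=500_000, cached_toks=400_000, output_toks=50_000)

# If the plan accounting treats cached input as free (cache_price = 0):
plan_view = session_cost(**toks, in_price=15, cache_price=0, out_price=75)
# If the API bills cached reads at, say, 10% of the input price:
api_view = session_cost(**toks, in_price=15, cache_price=1.5, out_price=75)

print(f"plan-accounted cost: ${plan_view:.2f}")
print(f"API-billed cost:     ${api_view:.2f}")
print(f"difference: {100 * (api_view / plan_view - 1):.0f}% more")
```

With a cache-heavy session like this, the gap between the two accounting views is exactly the kind of "~5-34% more" spread the comment describes, depending on the cache-hit ratio.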
4/17/2026, 4:51:23 PM
by: jmward01
Yeah. I just did a day with 4.7, and I won't be going back for a while. It is just too expensive. On top of the tokenization, the thinking seems to be eating a lot more too.
4/17/2026, 4:24:40 PM
by: redml
It does cost more, but I found the quality of output much higher. I prefer it over the dumbing-down of effort/models they were doing for the last two months. They have to get users used to picking the appropriate model for their task (or offer an automatic mode, but still let me force a particular model).
4/17/2026, 6:01:40 PM
by: sysmax
Well, LLMs are priced per token, and most of the tokens are just echoing back the old code with minimal changes. So, a lot of the cost is actually paying for the LLM to echo back the same code.<p>Except, it's not that trivial to solve. I tried experimenting with asking the model to first give a list of symbols it will modify, and then just write the modified symbols. The results were OK, but less refined than when it echoes back the entire file.<p>The way I see it is that when you echo back the entire file, the process of thinking "should I do an edit here" is distributed over a longer span, so it has more room to make a good decision. Like instead of asking "which 2 of the 10 functions should you change" you're asking it "should you change method1? what about method2? what about method3?", etc., and that puts less pressure on the LLM.<p>Except, currently we are effectively paying for the LLM to make that decision for *every token*, which is terribly inefficient. So, there has to be some middle ground between expensively echoing back thousands of unchanged tokens and giving an error-ridden high-level summary. We just haven't found that middle ground yet.
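The echo overhead described above can be illustrated with a toy calculation; the symbol names, token counts, and per-symbol framing cost below are all made up:

```python
# A "file" as a map of symbol name -> rough token count (hypothetical sizes).
file_symbols = {
    "parse_config": 120, "load_plugins": 340, "run_pipeline": 510,
    "render_report": 280, "main": 90, "helpers": 660,
}

changed = {"run_pipeline", "main"}  # symbols the edit actually touches

full_echo = sum(file_symbols.values())               # echo the whole file back
symbol_edit = sum(file_symbols[s] for s in changed)  # emit only changed symbols
overhead = 40 * len(changed)                         # assumed per-symbol framing cost

print(f"full-file echo:    {full_echo} output tokens")
print(f"symbol-level edit: {symbol_edit + overhead} output tokens")
print(f"savings: {100 * (1 - (symbol_edit + overhead) / full_echo):.0f}%")
```

The savings look dramatic on paper, which is exactly why the comment's caveat matters: the cheaper format only wins if the model can still make good edit decisions without re-reading the unchanged context it would otherwise have echoed.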
4/17/2026, 5:06:05 PM
by: iknowstuff
Interesting, because I already felt like current models spit out too much verbose garbage code that a human would write in a far more terse, beautiful, and grokkable way.
4/17/2026, 4:12:51 PM
by: DiscourseFan
Yeah, I noticed today: I had it work up a spreadsheet for me, and I only got 3 or 4 turns into the conversation before it used up all my (Pro) credits. It wasn't even super complicated, only moderately so.
4/17/2026, 6:02:31 PM
by: khalic
Just hit my quota with 20x for the first time today…
4/17/2026, 5:46:40 PM
by: technotony
Not only that, but they seem to have cut my plan's ability to use Sonnet too. I have a routine that used to use about 40% of my 5-hour Max plan tokens; since yesterday it gets stopped because it uses the whole 100%. Has anyone else experienced this?
4/17/2026, 5:23:37 PM
by: yuanzhi1203
We noticed this two weeks ago, when we found some of our requests unexpectedly took more tokens than measured by the count_tokens call. It turned out to be Anthropic A/B testing, routing some Opus 4.6 calls to Opus 4.7.<p><a href="https://matrix.dev/blog-2026-04-16.html" rel="nofollow">https://matrix.dev/blog-2026-04-16.html</a> (We were talking to Opus 4.7 twelve days ago)
4/17/2026, 5:44:41 PM
by: qq66
This is the backdoor way of raising prices... just inflate the token pricing. It's like ice cream companies shrinking the box instead of raising the price
4/17/2026, 4:48:27 PM
by: beej71
News like this always makes me wonder about running my own model, something I've never done. A couple thousand bucks can get you some decent hardware, it looks like, but is it good for coding? What's everyone's experience?<p>And if it's not good enough for coding, what kind of money, if any, would make it good enough?
4/17/2026, 5:03:11 PM
by: thibran
For me there is no point in using Claude Opus 4.7; it's too expensive given that it doesn't do 100% of the job. Since AI can only do perhaps 70-90% of most tasks anyway, I can use a cheaper model and do the remaining 10-30% myself.
4/17/2026, 5:26:32 PM
by: bugsense
I would use a service like Straion.com to avoid the back and forth. It increases token consumption, but I can get things right the first time.
4/17/2026, 6:29:48 PM
by: adaptive_loop
Every time a new model comes out, I'm left guessing what it means for my token budget in order to sustain the quality of output I'm getting. And it varies unpredictably each time. Beyond token efficiency, we need benchmarks to measure model output quality per token consumed for a diverse set of multi-turn conversation scenarios. Measuring single exchanges is not just synthetic, it's unrealistic. Without good cost/quality trade-off measures, every model upgrade feels like a gamble.
4/17/2026, 5:03:29 PM
by: rambojohnson
So intelligence has turned into a utility, per Sam Altman et al., and now the same companies get to hike the price of accessing it by 20–30%, right as it's becoming the backbone of how teams actually ship work. People are pushing out so much, so fast, that last week's output is already a blur. I've got colleagues who refuse to go back to writing any of this stuff by hand.<p>And now maintaining that pace means absorbing arbitrary price increases, shrugged off with "we were operating at a loss anyway."<p>It stops being "pay to play" and starts looking more like paying just to stay in the ring, while enterprise players barely feel the hit and everyone else gets squeezed out.<p>Market maturing my butthole... it's obviously a dependency being priced in real time. Tech is an utter shit show right now, compounded by an unemployment market still reeling from the overhiring of 2020.<p>save up now and career pivot. pick up gardening.
4/17/2026, 6:09:45 PM
by: saltyoldman
I was sort of hoping that the peak would be something like $15 per hour of vibe help (yes, I know some of you burn $15 in 12 milliseconds), and that you could have last year's best, or the current "nano/small" model, at $1 per hour.<p>But it looks like it's just creeping up. Probably because we're paying for construction, not just inference, right now.
4/17/2026, 6:26:02 PM
by: kburman
Anthropic must be loving it. It's free money.
4/17/2026, 6:01:10 PM
by: aliljet
This is the reality I'm seeing too. Does this mean that the subscriptions (5x, 10x, 20x) are essentially reduced in token-count by 20-30%?
4/17/2026, 5:09:14 PM
by: lacoolj
This is probably an adjacent result of this (from the Anthropic launch post):<p>> In Claude Code, we've raised the default effort level to xhigh for all plans.<p>Try changing your effort level and see what results you get.
4/17/2026, 4:52:44 PM
by: ndom91
`/model claude-opus-4-6`
4/17/2026, 5:37:08 PM
by: curioussquirrel
Claude's tokenizers have actually been getting less efficient over the years (I think we're at least at the third iteration since Sonnet 3.5). And if you prompt the LLM in a language other than English, or if your users prompt it or generate content in other languages, the costs climb even higher. I mean hundreds of percent more for languages with complex scripts like Tamil or Japanese. If you're interested in the research we did comparing the tokenizers of several SOTA models across multiple languages, just hit me up.
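A stdlib-only proxy for this effect (not the actual tokenizer comparison the commenter ran): byte-level BPE tokenizers start from UTF-8 bytes, and scripts like Japanese or Tamil need around 3 bytes per character, so token counts inflate before any merges even apply. The sample phrases are my own choices, all meaning roughly "good morning":

```python
# Compare how many UTF-8 bytes the same short greeting needs per script.
samples = {
    "English":  "good morning",
    "Japanese": "おはようございます",
    "Tamil":    "காலை வணக்கம்",
}

for lang, text in samples.items():
    chars = len(text)                      # codepoints
    utf8 = len(text.encode("utf-8"))       # raw bytes a byte-level BPE starts from
    print(f"{lang:9s} {chars:3d} chars -> {utf8:3d} UTF-8 bytes "
          f"({utf8 / chars:.1f} bytes/char)")
```

This is only a lower-bound intuition; actual per-language token inflation also depends on how much of the tokenizer's merge vocabulary was trained on that script.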
4/17/2026, 4:52:46 PM
by: rbren
Good reminder to choose model-agnostic tooling!
4/17/2026, 5:41:58 PM
by: rafram
Pretty funny that this article was clearly written by Claude.
4/17/2026, 4:26:53 PM
by: markrogersjr
4.7 one-shot rate is at least 20-30% higher for me
4/17/2026, 4:27:05 PM
by: omega3
Contrary to people here who feel the price increases, reduced subscription limits, etc. show that Anthropic's models cost more to run than the API and subscription revenue they generate, my theory is that Anthropic has been in the enshittification and rent-seeking phase for a while, in which it attempts to extract as much money from existing users as possible.<p>Commercial inference providers serve Chinese models of comparable quality at 0.1x-0.25x the price. I think Anthropic realised the game is up and it will not be able to hold the quality lead forever, so it's best to switch to value extraction while that lead is still somewhat there.
4/17/2026, 5:34:54 PM
by: varispeed
Don't forget that the model doesn't have an incentive to give the right solution the first time. At least with Opus 4.6, after it got nerfed, it would go round in circles until you told it to stop defrauding you and get to the correct solution. Even that didn't always work. I found myself restarting the session again and again until a less-nerfed model was put on the request. It all points to artificially making the customer pay more.
4/17/2026, 4:54:35 PM
by: dallen33
I'm still using Sonnet 4.6 with no issues.
4/17/2026, 4:05:49 PM
by: Bingolotto
Talked to Claude earlier today, and Opus 4.7 costs up to 35% more.
4/17/2026, 5:20:32 PM
by: encoderer
In my “repo OS” we have an adversarial agent harness running GPT-5.4 for planning and implementation and Opus 4.6 for review. This was the clear winner in the bake-off when 5.4 came out a couple of months ago.<p>Re-ran the bake-off with 4.7 authoring and… GPT-5.4 is still clearly winning. Same skills, same prompts, same agents.md.
4/17/2026, 4:52:17 PM
by: therobots927
As a regular listener of Ed Zitron, this comes as absolutely no surprise. Once you understand the levels of obfuscation available to Anthropic/OpenAI, you realize they almost certainly hit a model plateau about a year ago. All benchmark improvements since have come at a high compute cost. And the model used when evaluating said benchmarks is not the same model you get with your subscription.<p>This is already becoming apparent as users see quality degrade, which implies that Anthropic is dropping performance across the board to minimize financial losses.
4/17/2026, 5:27:57 PM
by: bcjdjsndon
Because those brainiacs added 20-30% more system prompt.
4/17/2026, 4:27:54 PM
by: ricardobeat
I can’t stand reading this. One article. Many words. Not written by a human.<p>Feels like LLMs are devolving into having a single, instantly recognizable and predictable writing style.
4/17/2026, 5:07:58 PM
by: stefan_
I don't know anything about tokens. Anthropic says Pro has "more usage*", Max has 5x or 20x "more usage*" than Pro. The link to "usage limits" says "determines how many messages you can send". Clearly no one is getting billed for tokens.
4/17/2026, 4:32:42 PM
by: CodingJeebus
The fundamental problem with these frontier model companies is that they're incentivized to create models that burn through more tokens, full stop. It's a tale as old as capitalism: you wake up every day and choose to deliver more value to your customers or your shareholders, you cannot do both simultaneously forever.<p>People love to throw around "this is the dumbest AI will ever be", but the corollary to that is "this is the most aligned the incentives between model providers and customers will ever be" because we're all just burning VC money for now.
4/17/2026, 4:29:37 PM
by: mikert89
The compute is expensive, what is with this outrage? People just want free tools forever?
4/17/2026, 4:41:11 PM
by: xd1936
And what about with Caveman[1]?<p>1. <a href="https://github.com/juliusbrussee/caveman" rel="nofollow">https://github.com/juliusbrussee/caveman</a>
4/17/2026, 4:14:50 PM