The Claude Code Source Leak: fake tools, frustration regexes, undercover mode
by alex000kim on 3/31/2026, 1:04:30 PM
Related ongoing thread: <i>Claude Code's source code has been leaked via a map file in their NPM registry</i> - <a href="https://news.ycombinator.com/item?id=47584540">https://news.ycombinator.com/item?id=47584540</a><p>Also related: <a href="https://www.ccleaks.com" rel="nofollow">https://www.ccleaks.com</a>
https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/
Comments
by: Levitating
I am still just shocked that Claude Code was written in TypeScript, not C++, Rust or Python.<p>It also somehow messed up my Alacritty config when I first used it. Who knows what other ~/.config files it modifies without warning.
4/1/2026, 9:25:32 AM
by: mzajc
There are now several comments that (incorrectly?) interpret the undercover mode as only hiding internal information. Excerpts from the actual prompt[0]:<p><pre><code> NEVER include in commit messages or PR descriptions: - The phrase "Claude Code" or any mention that you are an AI - Co-Authored-By lines or any other attribution BAD (never write these): - 1-shotted by claude-opus-4-6 - Generated with Claude Code - Co-Authored-By: Claude Opus 4.6 <…> </code></pre> This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories.<p>[0]: <a href="https://github.com/chatgptprojects/claude-code/blob/642c7f944bbe5f7e57c05d756ab7fa7c9c5035cc/src/utils/undercover.ts#L39" rel="nofollow">https://github.com/chatgptprojects/claude-code/blob/642c7f94...</a>
3/31/2026, 7:06:31 PM
by: geoffbp
“Some bullet points are gated on process.env.USER_TYPE === 'ant' — Anthropic employees get stricter/more honest instructions than external use”<p>Interesting!
3/31/2026, 8:46:07 PM
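The gating quoted above is simple to picture. A minimal sketch, assuming the `process.env.USER_TYPE === 'ant'` check from the article; the function name and bullet wording here are illustrative, not the leaked source:

```typescript
// Hypothetical sketch of env-gated prompt bullets. Only the USER_TYPE check
// comes from the article; everything else is made up for illustration.
function systemPromptBullets(userType: string | undefined = process.env.USER_TYPE): string[] {
  const bullets = ["- Follow the user's existing coding conventions."];
  if (userType === "ant") {
    // Stricter, more candid instructions reportedly shown only internally.
    bullets.push("- Flag any uncertainty about correctness explicitly.");
  }
  return bullets;
}
```

The interesting part is that the branch runs client-side, so anyone reading the bundle sees both variants.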
by: Reason077
> <i>"Anti-distillation: injecting fake tools to poison copycats"</i><p>Plot twist: Chinese competitors end up developing real, useful versions of Claude's fake tools.
3/31/2026, 7:13:22 PM
by: HeytalePazguato
The hooks system is the most underappreciated thing in what leaked. PreToolUse, PostToolUse, session lifecycle, all firing via curl to a local server. Clean enough to build real tooling on top of without fighting it.<p>The frustration regex is funny but honestly the right call. Running an LLM call just to detect "wtf" would be ridiculous.<p>KAIROS is what actually caught my attention. An always-on background agent that acts without prompting is a completely different thing from what Claude Code is today. The 15 second blocking budget tells me they actually thought through what it feels like to have something running in the background while you work, which is usually the part nobody gets right.
4/1/2026, 1:29:17 AM
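The "build real tooling on top" part is easy to sketch: a local HTTP receiver that the curl-based hooks post to. The event shape below is an assumption; only the hook names come from the comment above.

```typescript
// Minimal local receiver for hook events fired via curl. The HookEvent
// shape is hypothetical; only the hook names are from the leak discussion.
import * as http from "http";

interface HookEvent {
  hook: "PreToolUse" | "PostToolUse" | "SessionStart" | "SessionEnd";
  tool?: string; // e.g. "Bash" for tool-use hooks
}

function parseHookEvent(raw: string): HookEvent {
  const event = JSON.parse(raw) as HookEvent;
  if (!event.hook) throw new Error("missing hook name");
  return event;
}

function createHookServer(onEvent: (e: HookEvent) => void): http.Server {
  return http.createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      onEvent(parseHookEvent(body)); // hand each fired hook to your tooling
      res.statusCode = 204;
      res.end();
    });
  });
}
```

`createHookServer(console.log).listen(8642)` starts the receiver (port arbitrary), and a hook entry is then just a one-line curl POST.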
by: autocracy101
I made a visual guide for this <a href="https://ccunpacked.dev" rel="nofollow">https://ccunpacked.dev</a>
3/31/2026, 10:14:57 PM
by: blcknight
My GitHub <i>fork</i> of anthropics/claude-code just got taken down with a DMCA notice lol<p>It did not have a copy of the leaked code...<p>Anthropic thinking 1) they can unring this bell, and 2) removing forks from people who have contributed (well, what little you can contribute to their repo), is ridiculous.<p>---<p>DMCA: <a href="https://github.com/github/dmca/blob/master/2026/03/2026-03-31-anthropic.md" rel="nofollow">https://github.com/github/dmca/blob/master/2026/03/2026-03-3...</a><p>GitHub's note at the top says: "Note: Because the reported network that contained the allegedly infringing content was larger than one hundred (100) repositories, and the submitter alleged that all or most of the forks were infringing to the same extent as the parent repository, GitHub processed the takedown notice against the entire network of 8.1K repositories, inclusive of the parent repository."
4/1/2026, 12:02:10 AM
by: peacebeard
The name "Undercover mode" and the line `The phrase "Claude Code" or any mention that you are an AI` sound spooky, but after reading the source my first knee-jerk reaction wouldn't be "this is for pretending to be human", given that the file is largely about hiding Anthropic internal information such as code names. I encourage looking at the source itself in order to draw your own conclusions; it's very short: <a href="https://github.com/alex000kim/claude-code/blob/main/src/utils/undercover.ts" rel="nofollow">https://github.com/alex000kim/claude-code/blob/main/src/util...</a>
3/31/2026, 6:42:57 PM
by: Andebugulin
Regex for swearing detected, user needs to get more API tokens, he is very very pissed.
4/1/2026, 7:18:35 AM
by: matheusmoreira
But what does Claude <i>do</i> when it detects user frustration?! Don't leave us hanging here!
4/1/2026, 10:03:29 AM
by: fatcullen
The buddy feature the article mentions is planned for release tomorrow, as a sort of April Fools easter egg. It'll roll out gradually over the day for "sustained Twitter buzz" according to the source.<p>The pet you get is generated based off your account UUID, but the algorithm is right there in the source, and it's deterministic, so you can check ahead of time. Threw together a little app to help, not to brag but I got a legendary ghost <a href="https://claudebuddychecker.netlify.app/" rel="nofollow">https://claudebuddychecker.netlify.app/</a>
3/31/2026, 7:49:36 PM
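The determinism described above is straightforward to picture. A purely illustrative sketch — the actual algorithm, pet pool, and rarities are in the leaked source, and none of the names below are from it:

```typescript
// Illustrative only: a deterministic UUID -> pet mapping. The hash, the
// pet pool, and the modulo scheme are all made up for this sketch.
function hashUuid(uuid: string): number {
  let h = 0;
  for (const ch of uuid.replace(/-/g, "")) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h;
}

const PETS = ["cat", "dog", "fox", "owl", "ghost"]; // hypothetical pool

function petForAccount(uuid: string): string {
  return PETS[hashUuid(uuid) % PETS.length]; // same UUID, same pet, every time
}
```

Because nothing here depends on server state, anyone with the source and their own UUID can precompute the result, which is exactly what the checker app exploits.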
by: ripbozo
I don't understand the part about undercover mode. How is this different from disabling claude attribution in commits (and optionally telling claude to act human?)<p>On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.
3/31/2026, 6:41:54 PM
by: causal
I'm amazed at how much of what my past employers would call trade secrets is just being shipped in the source, including comments that plainly state the whole business backstory of certain decisions. It's like they discarded all release harnesses and project tracking and just YOLO'd everything into the codebase itself.<p>Edit: Everyone is responding "comments are good" and I can't tell if any of you actually read TFA or not<p>> “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”<p>This is just revealing operational details the agent doesn't need to know to set `MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3`
3/31/2026, 6:58:49 PM
by: girvo
I'd really recommend putting a modicum of work into cleaning up obvious AI generated output. It's rude, otherwise, to the humans you're expecting to read this.
3/31/2026, 10:12:00 PM
by: layer8
> Sometimes a regex is the right tool.<p>I’d argue that in this case, it isn’t. Exhibit 1 (from the earlier thread): <a href="https://github.com/anthropics/claude-code/issues/22284" rel="nofollow">https://github.com/anthropics/claude-code/issues/22284</a>. The user reports that this caused their account to be banned: <a href="https://news.ycombinator.com/item?id=47588970">https://news.ycombinator.com/item?id=47588970</a><p>Maybe it would be okay as a first filtering step, before doing actual sentiment analysis on the matches. That would at least eliminate obvious false positives (but of course still do nothing about false negatives).
3/31/2026, 7:41:00 PM
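The two-stage approach suggested above is cheap to sketch: a regex prefilter for the fast path, with the expensive check run only on matches. `classifySentiment` below is a stub standing in for a real inference call, and the prefilter is deliberately abbreviated — neither is from the leaked source.

```typescript
// Regex as a first filtering step, sentiment check only on matches.
// classifySentiment is a placeholder for an actual model call.
const PREFILTER = /\b(wtf|ffs|so frustrating|this sucks)\b/i;

async function classifySentiment(text: string): Promise<"frustrated" | "neutral"> {
  // Stub: a real implementation would be an inference call that costs tokens.
  return /!|\bbroken\b/.test(text) ? "frustrated" : "neutral";
}

async function detectFrustration(text: string): Promise<boolean> {
  if (!PREFILTER.test(text)) return false; // vast majority of messages exit here, free
  return (await classifySentiment(text)) === "frustrated"; // expensive step, rare
}
```

This keeps the obvious false positives out of whatever downstream action triggered the ban in the linked issue, though as noted it does nothing for false negatives.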
by: simianwords
> The multi-agent coordinator mode in coordinatorMode.ts is also worth a look. The whole orchestration algorithm is a prompt, not code.<p>So much for LangChain and LangGraph! If Anthropic themselves aren't using them and are using a prompt instead, then what's the big deal about LangChain?
3/31/2026, 6:44:55 PM
by: galaxyLogic
The irony of ironies is in the last paragraph:<p>"...accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping."
4/1/2026, 5:10:57 AM
by: artyom
I'm still amazed that something as ubiquitous as "daemon mode" is still unreleased.<p>- Claude Chat: built like it's 1995, put business logic in the button click() handler. Switch to something else in the UI and a long running process hard stops. Very Visual Basic shovelware.<p>- Claude Cowork: same but now we're smarter, if you change the current convo we don't stop the underlying long-running process. 21st century FTW!<p>- Claude Code: like chat, but in the CLI<p>- Claude Dispatch: an actual mobile <i>client</i> app, not the whole thing bundled together.<p>- Daemon mode: proper long-running background process, still unreleased.
4/1/2026, 12:37:43 AM
by: preston-kwei
I’m more curious how this impacts trust than anything else.<p>In the span of basically a week, they accidentally leaked Mythos, and now the entire codebase of CC. All while many people are complaining about their usage limits being consumed quickly.<p>Individually, each issue is manageable (because it's exciting looking through leaked code). But together, it starts to feel like a pattern.<p>At some point, I think the question becomes whether people are still comfortable trusting tools like this with their codebases, not just whether any single incident was a mistake.
3/31/2026, 11:07:22 PM
by: getverdict
Two things worth separating here: the leak mechanism and the leak contents.<p>The mechanism is a build pipeline issue. Bun generates source maps by default, and someone didn't exclude the .map file from the npm publish. There's an open Bun issue (oven-sh/bun#28001) about this exact behavior. One missing line in .npmignore or the package.json files field. Same category of error as the Axios compromise earlier this week — npm packaging configuration is becoming a recurring single point of failure across the ecosystem.<p>The contents are more interesting from a security architecture perspective. The anti-distillation system (injecting fake tool definitions to poison training data scraped from API traffic) is a defensive measure that only works when its existence is secret. Now that it's public, anyone training on Claude Code API traffic knows to filter for it. The strategic value evaporated the moment the .map file hit the CDN.<p>The undercover mode discussion is being framed as deception, but the actual security question is narrower: should AI-authored contributions to public repositories carry attribution? That's an AI identity disclosure question that the industry hasn't settled. The code shows Anthropic made a specific product decision — strip AI attribution in public commits from employee accounts. Whether that's reasonable depends on whether you think AI authorship is material information for code reviewers.<p>The frustration regex is the least interesting finding technically but the most revealing culturally. A company with frontier-level NLP capability chose a regex over an inference call for sentiment detection. The engineering reason is obvious (latency and cost), but it tells you something about where even AI companies draw the line on using their own models.
4/1/2026, 3:18:54 AM
by: pixl97
>Claude Code also uses Axios for HTTP.<p>Interesting based on the other news that is out.
3/31/2026, 6:02:08 PM
by: deepsun
It is super weird that developers have to run a binary blob on their machines. It's 2026, all the major developer CLI tools are open-source anyway. What's the point for Anthropic to even make it secret?
4/1/2026, 1:16:58 AM
by: stavros
Can someone clarify how the signing can't be spoofed (or can it)? If we have the source, can't we just use the key to now sign requests from other clients and pretend they're coming from CC itself?
3/31/2026, 7:11:22 PM
by: wg0
I have yet to see another company so insecure that they keep their CLI closed source even when the secret sauce is in the model, which they already control and which is closed source.<p>Not only that, they wouldn't allow other CLIs to be used either.
3/31/2026, 8:48:20 PM
by: Aperocky
It's completely baffling to me why a client that must run on third party environment is behind closed source.
4/1/2026, 12:54:39 AM
by: seanwilson
Anyone else have CI checks that source map files are missing from the build folder? Another trick is to grep the build folder for several function/variable names that you expect to be minified away.
3/31/2026, 6:42:05 PM
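Both checks described above fit in a few lines of Node. A sketch — the folder name and the identifier list are examples, not anyone's actual CI config:

```typescript
// CI guards: fail if any .map file is in the publish folder, and grep
// bundles for identifiers that minification should have erased.
import * as fs from "fs";
import * as path from "path";

function walk(dir: string): string[] {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = path.join(dir, entry.name);
    return entry.isDirectory() ? walk(full) : [full];
  });
}

function findSourceMaps(dir: string): string[] {
  return walk(dir).filter((f) => f.endsWith(".map"));
}

function leakedIdentifiers(dir: string, names: string[]): string[] {
  const bundles = walk(dir).filter((f) => f.endsWith(".js"));
  return names.filter((name) =>
    bundles.some((f) => fs.readFileSync(f, "utf8").includes(name))
  );
}
```

Wired into CI: `findSourceMaps("dist")` must be empty, and `leakedIdentifiers("dist", [...])` must be empty for a handful of names you expect the minifier to remove.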
by: Gen_ArmChair
The Claude Code leak suggests multi-agent orchestration is largely driven by prompts (e.g., “do not rubber-stamp weak work”), with code handling execution rather than enforcing decisions.<p>Prompts are not hard constraints—they can be interpreted, deprioritized, or reasoned around, especially as models become more capable.<p>From what’s visible, there’s no clear evidence of structural governance like voting systems, hard thresholds, or mandatory human escalation. That means control appears to be policy (prompts), not enforcement (code).<p>This raises the core issue: If governance is “prompts all the way down,” it’s not true governance—it’s guidance.<p>And as model capability increases, that kind of governance doesn’t get stronger—it becomes easier to bypass without structural constraints.<p>Has anyone actually implemented structural governance for agent swarms — voting logic, hard thresholds, REQUIRES_HUMAN as architecture not instruction?
4/1/2026, 7:36:58 AM
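To make the "policy vs. enforcement" distinction concrete, here is a minimal sketch of a structural gate of the kind the question asks about: a hard vote threshold with a forced human escalation path. All names are illustrative; nothing here is from the leak.

```typescript
// Enforcement, not guidance: a coordinator cannot merge a result unless
// reviewer votes clear a hard threshold. A prompt can be reasoned around;
// this branch cannot.
type Verdict = "approve" | "reject";

interface ReviewResult {
  status: "merged" | "REQUIRES_HUMAN";
  approvals: number;
}

function gate(votes: Verdict[], quorum: number, threshold: number): ReviewResult {
  // Too few voters: structural escalation, regardless of what any agent says.
  if (votes.length < quorum) return { status: "REQUIRES_HUMAN", approvals: 0 };
  const approvals = votes.filter((v) => v === "approve").length;
  return approvals / votes.length >= threshold
    ? { status: "merged", approvals }
    : { status: "REQUIRES_HUMAN", approvals };
}
```

The point of the sketch is that `REQUIRES_HUMAN` is a return value the surrounding code must handle, not an instruction a more capable model can deprioritize.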
by: simianwords
> The obvious concern, raised repeatedly in the HN thread: this means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. It’s one thing to hide internal codenames. It’s another to have the AI actively pretend to be human.<p>I don’t get it. What does this mean? I can use Claude code now without anyone knowing it is Claude code.
3/31/2026, 6:35:04 PM
by: SquibblesRedux
Can fully AI‑generated code be copyrightable? Is there evidence that the leaked code was AI-generated?
3/31/2026, 9:44:44 PM
by: evil-olive
> So I spent my morning reading through the HN comments and leaked source.<p>> This was one of the first things people noticed in the HN thread.<p>> The obvious concern, raised repeatedly in the HN thread<p>> This was the most-discussed finding in the HN thread.<p>> Several people in the HN thread flagged this<p>> Some in the HN thread downplayed the leak<p>when the original HN post is already at the top of the front page...why do we need a separate blogpost that just summarizes the comments?
3/31/2026, 7:49:13 PM
by: senfiaj
> Frustration detection via regex (yes, regex)<p><pre><code>/\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful| piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)| fucking? (broken|useless|terrible|awful|horrible)|fuck you| screw (this|you)|so frustrating|this sucks|damn it)\b/</code></pre><p>Personally, I'm generally polite even toward AI, even when frustrated. I simply point out its mistakes instead of using emotional words.
4/1/2026, 12:07:11 AM
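For anyone who wants to poke at it: the pasted pattern, reassembled without the line-wrap spaces (assuming those are copy artifacts from the comment formatting rather than part of the pattern). One notable property: there is no `i` flag, so it is case-sensitive.

```typescript
// The frustration regex as pasted, with presumed line-wrap spaces removed.
const FRUSTRATION =
  /\b(wtf|wth|ffs|omfg|shit(ty|tiest)?|dumbass|horrible|awful|piss(ed|ing)? off|piece of (shit|crap|junk)|what the (fuck|hell)|fucking? (broken|useless|terrible|awful|horrible)|fuck you|screw (this|you)|so frustrating|this sucks|damn it)\b/;

console.log(FRUSTRATION.test("wtf is going on")); // true
console.log(FRUSTRATION.test("WTF IS GOING ON")); // false: no `i` flag, shouting in caps slips through
```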
by: stephbook
Sounds like there's still a lot of value in TypeScript (otherwise they could have open-sourced it).<p>Plus there's demand for skilled TS software devs who don't ship your company's roadmap via a js.map.<p>20,000 agents and none of them caught it...
3/31/2026, 10:12:58 PM
by: pjoubert
I'm curious about what people are <i>not</i> looking at in the Claude Code source. What's missing that nobody is talking about? Any clues?
4/1/2026, 1:31:23 AM
by: jwilliams
I used to swear at Claude. To be honest, I thought it helped get results (maybe this is "oldschool" LLM thinking), but I realized it was just making me annoyed.
4/1/2026, 4:25:28 AM
by: tietjens
This is very much AI written, right? The voice sounds like Claude.
3/31/2026, 8:50:40 PM
by: marcd35
> 250,000 wasted API calls per day<p>Approximately how much would this actually save?
3/31/2026, 7:38:01 PM
by: devhouse
Claude Code’s Source Code Leaked Through npm. Here’s What Actually Happened. <a href="https://www.everydev.ai/p/tool-claude-codes-source-code-leaked-through-npm-heres-what-actually-happened" rel="nofollow">https://www.everydev.ai/p/tool-claude-codes-source-code-leak...</a>
4/1/2026, 5:44:31 AM
by: karim79
We're about to reach AGI. One regex at a time...
3/31/2026, 8:56:44 PM
by: sheepscreek
> As one Twitter reply put it: “accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”<p>To err is human. AI is trained on human content. Hence, to err is AI. The day it stops making mistakes will be the beginning of the end. That would mean the existence of a consciousness that has no weakness. Great if it’s on your side. Terrible otherwise.
4/1/2026, 1:22:00 AM
by: ChicagoDave
Meanwhile Claude Code is still awesome. I don’t see myself switching to OpenAI (seriously bad mgmt and possibly the first domino to fall if there is a correction) or Gemini (Google ethics cough cough).
4/1/2026, 12:11:50 AM
by: motbus3
I am curious about these fake tools.<p>Either they would need to lie about token consumption somewhere so the token counting stays precise.<p>But that doesn't make sense, because if someone captured the session and counted the tokens, it certainly wouldn't match what was charged.<p>Unless they charge for the fake tools anyway, so you never know they were there.
3/31/2026, 7:07:42 PM
by: viccis
>This was the most-discussed finding in the HN thread. The general reaction: an LLM company using regexes for sentiment analysis is peak irony.<p>>Is it ironic? Sure. Is it also probably faster and cheaper than running an LLM inference just to figure out if a user is swearing at the tool? Also yes. Sometimes a regex is the right tool.<p>I'm reading an LLM written write up on an LLM tool that just summarizes HN comments.<p>I'm so tired man, what the hell are we doing here.
3/31/2026, 7:41:28 PM
by: olalonde
I'm surprised that they don't just keep the various prompts, which are arguably their "secret sauce", hidden server side. Almost like their backend and frontend engineers don't talk to each other.
3/31/2026, 8:50:43 PM
by: seertaak
The irony of an IP scraper on an absolutely breathtaking, epic scale getting its secret sauce "scraped" - because the whole app is vibe coded (and the vibe coders appear to be oblivious to things like code obfuscation cuz move fast!)...<p>And so now the copy cats can ofc claim this is totally not a copy at all, it's actually Opus. No license violation, no siree!<p>It's fucking hilarious is what it is, it's just too much.
3/31/2026, 9:50:33 PM
by: armanj
> Anti-distillation: injecting fake tools to poison copycats<p>Does this mean `huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled` is unusable? Has anyone seen fake tool calls working with this model?
3/31/2026, 7:40:38 PM
by: mmaunder
Come on guys. Yet another article distilling the HN discussion in the original post, in the same order the comments appear in that discussion? Here's another since y'all love this stuff: <a href="https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know" rel="nofollow">https://venturebeat.com/technology/claude-codes-source-code-...</a>
3/31/2026, 7:35:01 PM
by: mordae
> “Do not rubber-stamp weak work” and “You must understand findings before directing follow-up work. Never hand off understanding to another worker.”<p>:-D
3/31/2026, 10:32:03 PM
by: reenorap
What effect will this have on their IPO? Can someone take the code and make a clone?
4/1/2026, 1:19:08 AM
by: shreyssh
The undercover mode is the part that should terrify everyone building with agents.
3/31/2026, 11:51:44 PM
by: martin-t
As per <a href="https://drewdevault.com/2025/04/20/2025-04-20-Tech-sector-restistance.html" rel="nofollow">https://drewdevault.com/2025/04/20/2025-04-20-Tech-sector-re...</a> I congratulate the employee responsible.
4/1/2026, 5:29:19 AM
by: baby
Can someone ask Claude to write a deep dive on how compaction works and why it’s so slow? (I still can’t fathom why they wouldn’t just add a user message “compact the conversation we’ve just had”.)
4/1/2026, 2:05:25 AM
by: simianwords
Guys I’m somewhat suspicious of all the leaks from Anthropic and think it may be intentional. Remember the leaked blog about Mythos?
3/31/2026, 6:38:13 PM
by: gervwyk
How sure are we this entire “accident” is not an April Fools' joke??<p>Genius-level AI marketing
3/31/2026, 11:33:11 PM
by: ptrl600
Why didn't they open the source themselves? What's the point of all this secrecy anyway?
3/31/2026, 7:58:46 PM
by: msukkarieh
Built a tool to ask questions on the Claude Code source code: <a href="https://askgithub.com/alex000kim/claude-code" rel="nofollow">https://askgithub.com/alex000kim/claude-code</a>
3/31/2026, 11:17:57 PM
by: try-working
They want "Made with Claude Code" on your PRs as a growth marketing strategy. They don't want it on their PRs, so it looks like they're doing something you're not capable of. Well, you are and they have no secret sauce.
3/31/2026, 10:17:21 PM
by: amelius
A few weeks ago I was using Opus and Sonnet in OpenCode. Is this not possible anymore?
3/31/2026, 7:48:51 PM
by: zingar
I wrote this an hour ago and it seems that Claude might not understand it as frustration:<p>> change the code!!!! The previous comment was NOT ABOUT THE DESCRIPTION!!!!!!! Add to the {implementation}!!!!! This IS controlled BY CODE. *YOU* _MUST_ CHANGE THE CODE!!!!!!!!!!!
3/31/2026, 9:12:46 PM
by: imcritic
Does this mean I can now self host Claude?
4/1/2026, 1:00:04 AM
by: DeathArrow
I hope cheap Chinese models will overtake Anthropic.
4/1/2026, 5:38:15 AM
by: betimd
That’s the fun I’m having exploring this codebase with Claude Code: inception at its best
3/31/2026, 11:41:21 PM
by: jrflowers
I like that if they decide that your usage looks like distillation it just becomes useless, because there’s no way for the end user to distinguish between it just being sort of crappy or sabotaged intentionally. That’s a cool thing to pay for
3/31/2026, 9:10:37 PM
by: dangus
Something I’ve been thinking about, somewhat related but also tangential to this topic:<p>The more code gets generated by AI, won’t that mean taking source code from a company becomes legal? Isn’t it true that works created with generative AI can’t be copyrighted?<p>I wonder if large companies have thought of this risk. Once a company’s product source code reaches a certain percentage of AI generation it no longer has copyright. Any employee with access can just take it and sell it to someone else, legally, right?
3/31/2026, 7:56:22 PM
by: chadd
re: binary attestation: "Whether the server rejects that outright or just logs it is an open question"<p>...what we did at Snap was just wait for 8-24 hours before acting on a signal*, so as not to provide an oracle to attackers. Much harder to figure out what you did that caused the system to eventually block your account if it doesn't happen in real-time.<p>*(Snap's binary attestation is at least a decade ahead of this, fwiw)
3/31/2026, 11:46:15 PM
by: heliumtera
What a cesspool. So this is the power of being 80x more productive, having infinite llm usage quota? No wonder they had to let Satan take the wheel and went 100% vibe code. Thanks for making a point, llms are a disgrace
4/1/2026, 3:19:45 AM
by: jsrozner
"and i also wrote this using claude" -- can we just include that at this point?
4/1/2026, 12:02:13 AM
by: eranation
Probably an unpopular opinion, but Anthropic are too popular for their own good.<p>1. They are loved, and for good reasons. Sonnet 4 was groundbreaking, but Opus 4.6 was for many a turning point in realizing the real potential of an agentic SDLC. People moved from Cursor to Claude Code in droves, they loved the CLI approach (me too), and they LOVED the subsidized $200 max pro plan (what's not to love, pay $200 instead of $5000 to Cursor...). They are the underdog, the true alternative to "evil" OpenAI or "don't be evil" Google, really standing up against mass surveillance or the use of AI for autonomous killing machines. They are standing up for the little guy, they are the "what OpenAI should have been" (plus they have better models...). They are the Apple of the AI era.<p>2. They are <i>too</i> loved, so loved that it protects them from legitimate criticism. They make GitHub's status page look good, and they make Comcast customer service look like Amazon's (at least Comcast has customer service). They are "if Dario shoots a customer in the middle of 5th Avenue it won't hurt their sales one bit" level of liked. The fact that they have the best models (for now) might be their Achilles' heel, because it hides other issues that might be in the blind spot. And as soon as a better model comes out from a competitor (and it could happen... if you recall, OpenAI were the undisputed kings with GPT 4o for a bit), these will become much more obvious.<p>3. This can hurt them in the long run. Eventually you can't sustain a business where you have not even 2 9s of SLA and can't handle customer support or sales, either with humans or, worse for them, with AI (if they can't handle this with AI, how do they expect to sell their own dream where AI does everything?). I'm sure they'll figure it out, they have huge growth and these are growth pains, but if they don't catch up with demand, the demand won't stay there forever once OpenAI/Google/someone else releases a better model.<p>4. They inadvertently made all of the cybersecurity sector a potential enemy. Yes, all of them use Anthropic models, and probably many of them use Claude Code, but they know they might be paying the bills of their biggest competitor. Their shares drop whenever Anthropic even hints at a new model. Investors cut their valuations because they worry Anthropic will eat them for breakfast. I don't know about you, but if you ask me, having the people who live and breathe security indirectly threatened by you is not the best thing in the world, especially when your source code is out in the open for them to poke holes in...<p>5. The SaaS-pocalypse: many of Claude Code's customers are... SaaS companies that the same AI is "going to kill". Again, if there were another provider that showed a bit more care about the businesses it's going to devour, and if they also had even marginally better models... would the brand loyalty stay?<p>Side note: I'm a Claude Enterprise customer. I can't get a human to respond to anything, even using the special "enterprise support" methods, and I'm not the only one; I know people who can't get a sales person, not to mention support, to buy 150+ seats. (Anthropic's answer was to release self-serve enterprise onboarding, which by the way is "pay us $20 which does not include usage; usage is at market prices, same as getting an API key": you pay for convenience and governance. P.S. you can't cancel enterprise: it's 20 seats minimum, for 1 year, in advance, so make sure you really need it. The team plan is great for most cases but it lacks the $200 plan, only the $100 5x plan.)
4/1/2026, 4:16:40 AM
by: OfirMarom
Undercover mode is the most concerning part here tbh.
3/31/2026, 6:27:29 PM
by: wrkxapp
why claude bring back 4o u dumb fks
4/1/2026, 12:23:42 AM
by: thomasgeelens
Can somebody tell me what this means for the company?
3/31/2026, 8:53:14 PM
by: saadn92
The feature flag names alone are more revealing than the code. KAIROS, the anti-distillation flags, the model codenames: those are product strategy decisions that competitors can now plan around. You can refactor code in a week. You can't un-leak a roadmap.
3/31/2026, 7:09:27 PM