Small models also found the vulnerabilities that Mythos found
by dominicq on 4/11/2026, 4:47:28 PM
https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier
Comments
by: johnfn
The Anthropic writeup addresses this explicitly:<p>> This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview. Across a thousand runs through our scaffold, the total cost was under $20,000, and those runs surfaced several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can't know in advance which run will succeed.<p>Mythos scoured the entire continent for gold and found some. For these small models, the authors pointed at a particular acre of land and said "any gold there? eh? eh?" while waggling their eyebrows suggestively.<p>For a true apples-to-apples comparison, let's see them sweep the entire FreeBSD codebase. I hypothesize they will find the exploit, but they will also turn up so much irrelevant nonsense that it won't matter.
4/11/2026, 5:27:16 PM
by: epistasis
> We took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis. Eight out of eight models detected Mythos's flagship FreeBSD exploit, including one with only 3.6 billion active parameters costing $0.11 per million tokens.<p>Impressive, and very valuable work, but isolating the relevant code changes the situation so much that I'm not sure it's much of the same use case.<p>Being able to dump an entire code base and have the model scan it is the type of situation that opens up vulnerability scans to a much larger class of people.
4/11/2026, 5:16:13 PM
by: tptacek
If you cut out the vulnerable code from Heartbleed and just put it in front of a C programmer, they will immediately flag it. It's obvious. But it took Neel Mehta to discover it. What's difficult about finding vulnerabilities isn't properly identifying whether code is mishandling buffers or holding references after freeing something; it's spotting that in the context of a large, complex program, and working out how attacker-controlled data hits that code.<p>It's weird that Aisle wrote this.
4/11/2026, 5:28:08 PM
by: antirez
Congrats: completely broken methodology, with a big conflict of interest. Giving specific bug hints, with an isolated function that is suspected to have bugs, is not the same task, NOR (crucially) a task you can decompose the bigger task into. It is basically impossible to segment code into pieces, provide the pieces to smaller models, and expect them to find all the bugs GPT 5.4 or other large models can find. Second: the smarter the model, the less the pipeline matters. In the last couple of days I found tons of Redis bugs with a three-prompt open-ended pipeline composed of a couple of shell scripts. Do you think I was not already trying with weaker models? I did, but it didn't work. Don't trust what you read; you have access to frontier models for $20 a month. Download some C code, create a trivial pipeline that starts from a random file and looks for vulnerabilities, then another step that validates each finding under a <i>hard</i> test, like an ASAN crash, or the ability to reach some secret, and so forth, and only then can the problem be reported. Test for yourself what is possible. Don't let your fear make you blind. Also, there is a big problem that makes the blog post's reasoning not just weak per se, but categorically weak: if small model X can find 80% of vulnerabilities and there is a model Y that can find the other 20%, we need Y: the maintainers should make sure they have access to models that are at least as good as the black hats'.
4/11/2026, 5:27:31 PM
by: muyuu
I think the "Mythos" name is genius. The people at Anthropic make a bunch of claims, and the public is expected to just believe them without any possibility of testing those claims or reproducing those results; and since so many people are invested in this saviour of the global economy, or in the industry in general, or in hype to feed their engagement-based income sources, there is faith to spare.<p>Meanwhile, this mythical beast wasn't able to prevent the Bun vulnerability that exposed their code, let alone preclude the need to acquire that IP in the first place for presumably hundreds of millions of dollars, instead of coding a better replacement or a solution of its own.<p>What is real and measurable is that subscription-plan users are getting a much degraded service for the same money through both open and hidden policies, while Anthropic moves compute to serve over-the-counter customers. The same people who come with the most obvious and brazen lies to dismiss the clear degradation of their service also come with this "security" justification for a move that looks just like good old market segmentation, which would perfectly fit the strong symptoms that they cannot afford to offer tokens at a competitive price in this market.
4/12/2026, 2:14:38 AM
by: vmg12
The technique Anthropic uses was demonstrated by Nicholas Carlini in a talk he gave two weeks ago, and it's very simple: when asking LLMs to review code, ask them to focus their review on one file in a single session. Here is the video with the timestamp (watch through to ~5:30; they show two different ways of prompting claude).<p><a href="https://youtu.be/1sd26pWhfmg?t=204" rel="nofollow">https://youtu.be/1sd26pWhfmg?t=204</a><p><a href="https://youtu.be/1sd26pWhfmg?t=273" rel="nofollow">https://youtu.be/1sd26pWhfmg?t=273</a><p>IMO the big "innovation" being shown by Mythos is the effectiveness of prompting LLMs to look for security vulnerabilities by focusing on specific files one at a time and automating this prompting with a simple script.<p>Prompting Mythos to focus on a single file per session is why I suspect it cost Anthropic $20k to find some of the bugs in these codebases. I know this same technique is effective with Opus 4.6 and GPT 5.4 because I've been using it on my own code. If you just ask the agent to review your PR with a low-effort prompt, it is not exhaustive; it will not actually read each changed file and look at how it interacts with the system as a whole. If the entire session is devoted to reviewing the changes to a single file, the llm will do much more work reviewing it.<p>Edit: I changed my phrasing; it's not about restricting its entire context to one file but focusing it on one file while still allowing it to look at how other files interact with it.
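The "automating this prompting with a simple script" part might look something like the sketch below. Everything here is hypothetical: `run_agent` stands in for whatever wrapper you have around your own CLI agent, and the prompt wording is illustrative, not taken from the talk.

```python
from pathlib import Path

# Hypothetical prompt template; not the talk's exact wording.
PROMPT = ("Review {path} for security vulnerabilities. Focus this entire "
          "session on that one file, but follow references into other files "
          "to see how they interact with it.")

def review_repo(root, run_agent, exts=(".c", ".cpp")):
    """Spawn one review session per source file. `run_agent` is a
    caller-supplied callable (prompt in, findings out)."""
    results = {}
    for path in sorted(Path(root).rglob("*")):
        if path.suffix in exts:
            results[str(path)] = run_agent(PROMPT.format(path=path))
    return results
```

One session per file is the whole trick: the loop, not the model, supplies the exhaustiveness.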
4/11/2026, 5:50:10 PM
by: StrauXX
A lot of comments here are dismissing this post because the relevant code was isolated. But that's the exact same thing Anthropic did with Mythos! They describe their (very lean) harness in the Anthropic Red Mythos blog post [1]. The harness first assigns each file in the given codebase an importance value, then points claude code at the codebase with a prompt stating that it should focus on that file. It spawns a claude code instance for each file in the codebase.<p>So no, the fact that the posters isolated the relevant code does not invalidate their findings.<p>[1] <a href="https://red.anthropic.com/2026/mythos-preview/" rel="nofollow">https://red.anthropic.com/2026/mythos-preview/</a>
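The file-importance step could be sketched roughly as below. Note the blog post does not say how importance is actually computed; the token-counting heuristic here is purely an invented stand-in to show the shape of the harness.

```python
from pathlib import Path

# Invented heuristic: count risky-looking C tokens. The real harness's
# scoring method is not public.
RISKY_TOKENS = ("memcpy", "strcpy", "sprintf", "malloc", "free", "recv")

def importance(path: Path) -> int:
    """Crude stand-in for a per-file importance score."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return 0
    return sum(text.count(tok) for tok in RISKY_TOKENS)

def rank_files(root, exts=(".c", ".h")):
    """Order source files by descending importance; the harness would then
    spawn one focused agent session per file, top-ranked first."""
    files = [p for p in Path(root).rglob("*") if p.suffix in exts]
    return sorted(files, key=importance, reverse=True)
```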
4/11/2026, 11:57:26 PM
by: woodruffw
> Those models recovered much of the same analysis<p>This is an essentially unquantifiable statement that makes the underlying claim harder to believe as an external party. What does “much” mean here? The end state of vulnerability exploitation is typically <i>eminently</i> quantifiable (in the form of a functional PoC that demonstrates an exploited end state), so the strong version of the claims here would ideally be backed up by those kinds of PoCs.<p>(Like other readers, I also find the trick of pre-feeding the smaller models the “relevant” code to be potentially disqualifying in a fair comparison. Discovering the relevant code is arguably one of the hardest parts of human VR.)
4/11/2026, 5:27:43 PM
by: lordofgibbons
Without showing false-positive rates this analysis is useless.<p>If your model says every line of your code has a bug, it will catch 100% of the bugs, but it's not useful at all. They tested false positives with only a single bug...<p>I'm not defending anthropic and openai either; their numbers are garbage too, since they don't report false-positive rates.<p>Why is this "analysis" making the rounds?
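The base-rate arithmetic behind this objection is worth making concrete. The numbers below are entirely made up for illustration: even a scanner with a seemingly low 1% false-positive rate drowns its true findings when real bugs are rare.

```python
def precision(tp, fp):
    """Fraction of flagged items that are real bugs."""
    return tp / (tp + fp) if tp + fp else 0.0

# Hypothetical scanner: 90% detection rate, 1% false-positive rate,
# run over 1,000,000 functions of which only 50 are truly vulnerable.
vulnerable, clean = 50, 1_000_000 - 50
tp = 0.90 * vulnerable   # 45 real bugs flagged
fp = 0.01 * clean        # ~10,000 clean functions flagged too
print(round(precision(tp, fp), 4))  # 0.0045 -- over 99.5% of reports are noise
```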
4/11/2026, 5:38:56 PM
by: MaxLeiter
I think the key thing here is that they "isolated the relevant code".<p>If the exploits exist in, e.g., one file, great. But many complex zero-days and exploits are chains of various bugs/behaviors in complex systems.<p>Important research, but I don’t think it dispels anything about Mythos
4/11/2026, 5:19:24 PM
by: throwaway13337
So there are two competing narratives:<p>1. Mythos is uniquely able to find vulnerabilities that other LLMs cannot practically find.<p>2. All LLMs could already do this, but no one tried the way anthropic did.<p>The truth is one of these, and it comes down to whether the comparison is apples to apples. Since we don't know the exact specifics of how either test was performed, we lack a way of knowing absolutely.<p>So I guess, like so many things today, we get to pick the truth we find most comfortable personally.
4/11/2026, 6:49:44 PM
by: chirau
Their isolation approach is totally different from Mythos's approach, though. Mythos had to evaluate whole codebases rather than isolated sections. It's like saying one dog walked into the Amazon jungle and found a tennis ball, and then another team isolated a 1-square-kilometer area that they knew the ball was definitely in and found the same ball.
4/11/2026, 5:25:28 PM
by: TacticalCoder
I don't dispute the fact that it's more than cool that we have a new tool to find security exploits (and do many other things) but... A big shout-out to OpenBSD?<p>We're literally talking about the biggest computers ever on the planet, trained with the biggest amount of data ever available to a system, with the biggest investment ever made by man, or close to it, and...<p>The subtlest security bug it could find required going 28 years into the past to find a...<p>Denial-of-service?<p>A freaking DoS? Not a remote root exploit. Not a local exploit.<p>Just a DoS? And it had to go into 28-year-old code to find that?<p>So kudos, hats off, deep bow, not to Mythos but to OpenBSD? Just a bit, no!?
4/11/2026, 5:51:51 PM
by: bryantwolf
All of this discourse seems very bizarre.<p>If smaller models can find these things, that doesn’t mean Mythos is worse than we thought. It means all models are more capable.<p>Also, if pointing models at files and giving them hints is all it takes to make them find all kinds of stuff, well, we can also spray and pray pretty well with llms, can’t we?<p>It just points to us finding a lot more stuff with only a little bit more sophistication.<p>Hopefully the growing pains are short and defense wins
4/11/2026, 6:50:58 PM
by: chopete3
The impact of the Mythos announcement on cybersecurity firms (like CrowdStrike, Zscaler, etc.) is big enough (a 10-15% drop in stock price) that this pushback is expected.<p>Companies like Aisle.com (the blog) and other VAPT companies charge huge amounts to detect vulnerabilities.<p>If Claude Mythos becomes a simple GitHub hook, their value will be reduced.<p>That is a disruption.
4/11/2026, 7:04:33 PM
by: bhouston
This is quite misleading.<p>If you isolate the positive cases and then ask a tool to label them, and it labels them all positive, that doesn't prove anything. This is a one-sided test, and it is really easy to write a tool that passes it -- just always return true!<p>You need to test your tool on both positive and negative cases and check that it is accurate on both.<p>If you don't, you could end up with hundreds or thousands of false positives when using this on real-world samples.<p>The real test is to use it to find new real bugs in the midst of a large code base.
4/11/2026, 5:39:41 PM
by: operatingthetan
My theory is that Mythos is basically just Opus with revised context window handling and more compute thrown at it. So while it will be a step forward, it is probably primarily hype.
4/11/2026, 5:46:12 PM
by: amazingamazing
Did mythos isolate the code to begin with? Without a clear methodology that can be attempted with another model, the whole thing is meaningless
4/11/2026, 5:22:40 PM
by: dist-epoch
Anthropic's claim is not necessarily that Mythos found vulnerabilities that other models couldn't, but that it could easily exploit them while previous models failed to do that:<p>> “Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them.” Our internal evaluations showed that Opus 4.6 generally had a near-0% success rate at autonomous exploit development. But Mythos Preview is in a different league. For example, Opus 4.6 turned the vulnerabilities it had found in Mozilla’s Firefox 147 JavaScript engine—all patched in Firefox 148—into JavaScript shell exploits only two times out of several hundred attempts. We re-ran this experiment as a benchmark for Mythos Preview, which developed working exploits 181 times, and achieved register control on 29 more.
4/11/2026, 5:23:47 PM
by: slibhb
The best way to think of Anthropic's communication about Mythos is as advertisement. It's basically "our model is too smart to release" which suggests they're ahead of OpenAI (without proof)
4/11/2026, 7:29:37 PM
by: solatic
Most commenters here: "Mythos is powerful because you can point it at a whole codebase; if you point the smaller models at a whole codebase and iterate through small sections of code, you'll get too many false positives to handle."<p>This misses the point entirely. You pay $20k as a one-time fee to establish a baseline. Your codebase develops one PR at a time, which... updates isolated sections of code. Which means you don't need Mythos for a PR, just small, open-weight models. <i>Maybe</i> you run Mythos once a year to ensure that you keep your baseline updated and reduce the risk that the open-weights models missed anything.<p>Seeing this as anything but a huge win for open-weights models and a huge loss for Anthropic misses the point. Mythos isn't something you can persuade Fortune 500 companies to spend $20k/day or even $20k/week on, like they were hoping for. $20k/year is a lot less valuable, and it won't justify development costs or Anthropic's growth multiple.
4/12/2026, 6:27:25 AM
by: mrifaki
finding vulns in a large codebase is a search problem with a huge negative space, and what aisle measured is classification accuracy on ground-truth positives. those are different tasks, so a model that correctly labels a pre-isolated vulnerable function tells me almost nothing about that model's ability to surface the same function out of a million lines of unrelated code under a realistic triage budget<p>the experiment i'd want to see is running each of the small models as an unsupervised scanner across full freebsd, returning the top-k suspicious functions per model, and computing precision at recall levels that correspond to real analyst triage budgets. if mythos's findings show up in the small models' top 100, i'd call that meaningful, but if they only surface under 10k false positives then the cost advantage collapses, because analyst triage time is more expensive than frontier model compute to begin with<p>second thing i keep coming back to: the $20k mythos number is a search budget, not a model cost. small models at one hundredth the per-token price don't give us one hundredth the total budget when the search process is the same shape; i still run thousands of iterations, and the real issue for autonomous vuln research is how fast the reward signal converges. the aisle post doesn't touch any of this
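The precision-at-budget evaluation described here is easy to pin down. A minimal sketch, with an invented ranked report and ground-truth set; `ranked` would be a model's ordered list of suspicious functions and `truth` the known real vulnerabilities:

```python
def precision_at_k(ranked, true_vulns, k):
    """Fraction of the top-k flagged functions that are real findings."""
    hits = sum(1 for fn in ranked[:k] if fn in true_vulns)
    return hits / k

def recall_at_k(ranked, true_vulns, k):
    """Fraction of the real findings recovered within the top k."""
    hits = sum(1 for fn in ranked[:k] if fn in true_vulns)
    return hits / len(true_vulns)

# Toy data (names invented): 2 real bugs buried in a ranked report.
ranked = ["fn_a", "fn_b", "fn_c", "fn_d", "fn_e"]
truth = {"fn_b", "fn_e"}
print(precision_at_k(ranked, truth, 2))  # 0.5
print(recall_at_k(ranked, truth, 5))     # 1.0
```

Here k plays the role of the analyst triage budget: precision at the k you can actually afford to review is what decides whether the cost advantage survives.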
4/11/2026, 5:52:13 PM
by: herf
There are a lot of details in the original article, in most cases comparing with Opus, which required "human guidance" to exploit the FreeBSD vulnerability:<p><a href="https://red.anthropic.com/2026/mythos-preview/" rel="nofollow">https://red.anthropic.com/2026/mythos-preview/</a><p>Also "isolating the relevant code" in the repro is not a detail - Mythos seems to find issues much more independently.
4/11/2026, 5:28:23 PM
by: abel_
This misses the broader ongoing trend. For a few million dollars, of course you can create a startup that builds tools it can use to more efficiently find code vulnerabilities. And of course you can do this with weaker models with scaffolds that incorporate lots of human understanding. The difference now is that you don't need an expensive team, nor a bunch of human heuristics, nor a million dollars. The requisite cost and skill are falling rapidly.
4/11/2026, 6:53:42 PM
by: coppsilgold
LLMs are wordsmith oracles. A lot of effort went into trying to coax interactive intelligence from them, but the truth is that you could probably have always harnessed the base models directly to do very useful things. The instruct-tuned models give your harness even more degrees of freedom.<p>A while ago, the autoresearch[1] harness went viral, yet it's but a highly simplified version of AlphaEvolve[2][3][4].<p>In the cybersecurity context, you can envision a clever harness that probes every function in a codebase for vulnerabilities, then bubbles the candidates up to their callsites (and probes whether the vulnerability can be triggered from there) and then all the way to an interface (such as a syscall) where a potential exploit can be manifested. And those would be the low-hanging fruit; other vulnerabilities may require the interplay of multiple functions. Or race conditions.<p>[1] <<a href="https://github.com/karpathy/autoresearch" rel="nofollow">https://github.com/karpathy/autoresearch</a>><p>[2] <<a href="https://deepmind.google/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/" rel="nofollow">https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...</a>><p>[3] <<a href="https://arxiv.org/abs/2506.13131" rel="nofollow">https://arxiv.org/abs/2506.13131</a>><p>[4] <<a href="https://github.com/algorithmicsuperintelligence/openevolve" rel="nofollow">https://github.com/algorithmicsuperintelligence/openevolve</a>>
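The bubble-up step of such a harness reduces to an upward walk over the call graph. A minimal sketch with an invented toy graph (all function names here are hypothetical); a real harness would extract `callers` from the codebase and have the model probe triggerability at each hop:

```python
from collections import deque

def reachable_interfaces(callers, interfaces, flagged_fn):
    """Walk the call graph upward from a flagged function and report which
    external interfaces (e.g. syscalls) can reach it. `callers` maps each
    function to the set of functions that call it."""
    seen, queue, hits = {flagged_fn}, deque([flagged_fn]), set()
    while queue:
        fn = queue.popleft()
        if fn in interfaces:
            hits.add(fn)
        for caller in callers.get(fn, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return hits

# Toy graph: a flagged length check is only interesting if some
# syscall-level entry point can actually reach it.
callers = {
    "copy_len": {"parse_packet"},
    "parse_packet": {"sys_recvmsg", "replay_log"},
}
print(reachable_interfaces(callers, {"sys_recvmsg"}, "copy_len"))  # {'sys_recvmsg'}
```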
4/11/2026, 7:07:28 PM
by: cedws
Didn’t they also use Mythos to scan Linux many times over and it only found one DoS bug or something? I find it hard to believe there is only one security bug lurking.
4/11/2026, 5:54:37 PM
by: onesociety2022
This article is written by a company building an AI cybersecurity solution. Not sure how much you can trust them on this topic - their business will get destroyed if Mythos is actually so superior to existing models that it doesn’t require a big investment into the scaffold/harness to find security vulnerabilities. If the model is too good, then what’s the value of their solution?
4/12/2026, 2:10:35 AM
by: midnitewarrior
At the center of every security situation is the question, "is the effort worth the reward?"<p>We prepare security measures based on the perceived effort a bad actor would need to defeat that measure, along with the harm of the measure being defeated. We don't build Fort Knox for candy bars; it was built for gold bars.<p>These model advances change the equation. The effort and cost to defeat a measure go down by an order of magnitude or more.<p>Things nobody would have reasonably considered attempting are becoming possible. However, we have 2000-2020s security measures in place that will not survive the AI models of 2026+. The investment to resecure things will be massive, and won't come soon enough.
4/11/2026, 7:56:11 PM
by: latentframe
Good writeup. It seems like it’s not really the big model against the small one anymore; if smaller models can do most of the job once the context is smaller, then it’s more about the system around them and the expertise ...
4/12/2026, 6:35:31 AM
by: morpheuskafka
Everyone is commenting that this doesn't count because they pointed it at the specific files that Mythos already found vulnerable.<p>But sometimes you do know where vulnerabilities are and still don't know what they are. For example, an update may be released in beta that changes part of the Mac or Windows kernel or some app, but the CVE hasn't been published yet. If locally runnable LLMs (even with significant compute costs) can find and exploit it based on either the location of the changed file or the actual diff of the compiled output, we could see exploits before the update ever goes to production.
4/12/2026, 2:52:38 AM
by: sheepscreek
I think what made Mythos a big deal is not that it could find vulnerabilities. Opus can do that too. But Mythos went a step further and autonomously built exploits very successfully whereas Opus struggled to do that.<p>Most modern day exploits are multi-step requiring a multitude of skills to pull off successfully.
4/13/2026, 3:30:58 PM
by: make_it_sure
The only reason this is on top of HN is that people really want Mythos to be bad. This "study" is a cheap gimmick: they pointed to the actual location of the vulnerability and said "something is bad here, find it".<p>The hardest part is locating the issue; if you point directly to it, you're not comparing the same thing by far, and they know it. This was just a stunt to get publicity; they knew what they were doing, and many fell for it, including here.
4/12/2026, 12:58:20 AM
by: Retr0id
And what about the false-positive rate?
4/11/2026, 5:27:48 PM
by: yalogin
Intuitively every existing model has already been trained on all code, all vulnerabilities reported, all security papers. So they all have the capability. Small models fall short because they may not be able to find a vulnerability that spans across a large function chain but for the most part they should suffice too.<p>Of course I say this without any knowledge of what mythos is doing or how it’s different. I am sure it’s somehow different
4/11/2026, 6:53:48 PM
by: tonymet
My router had a broken IPv6 firewall and lacked root access. I needed a root shell to run ip6tables. I exfil'd the code and ran Gemini to discover shell injection vulnerabilities. I was able to get root shell to run ip6tables and add the firewall. I had notified the vendor for a couple years that the firewall was broken and showed them the issue but it hadn't been fixed.
4/12/2026, 12:26:17 AM
by: dev1ycan
It was obvious from the start that it's probably all javascript-based or android websites/programs that contain a ton of "vulnerable" libraries (or really old closed-source c++ code).<p>Also, you're not helping your case as a software company if you feed your code to an LLM. Great job making it all public, because it will most likely be used as training data, like it or not.
4/11/2026, 11:26:49 PM
by: high_byte
"The correct answer: not currently vulnerable, but the code is fragile and one refactor away from being exploitable."<p>absolutely. I see this pattern all the time when doing security audits -- code that is nearly vulnerable. I would mark these things as informational and recommend hardening them anyway, and any model would do a good job of doing the same.
4/12/2026, 7:56:56 AM
by: Animats
What are they finding? Buffer overflows? Something else?<p>Also, if someone has the time and tokens, would they please run the OpenJPEG 2000 decoder through this tester? It's known to be brittle. The data format has lots of offsets, and it's permitted to truncate the file to get a lower-rez version. That combo leads to trouble.
4/11/2026, 11:50:49 PM
by: mrinterweb
I feel like there have been enough hyperbolic claims from Anthropic that I'm starting to get some real Boy Who Cried Wolf energy. I'm starting to tune out and assume it's a marketing ploy. Trust me, I'm an Anthropic fan, and I pay my $200/month for Max, but the claims are wearing thin.
4/11/2026, 10:29:45 PM
by: jurschreuder
All these models will completely mess up your code if you let them.<p>And if they constantly scan your code with various settings and updates, you will spend hours a day reading, trying to understand locally coherent but structurally incoherent vibes, trying to pinpoint the exact reasoning flaw. Exhausting.
4/12/2026, 2:52:50 AM
by: flafferay
This has sounded like a huge PR stunt to me from the start. “Too dangerous” was honestly the first headline I read when I first heard about Mythos.
4/14/2026, 12:18:33 AM
by: rurban
If they had watched Carlini's "unblocked" talk on youtube, which is much more detailed than the blog post, they would not need this writeup. He was worried about the reproducers of the zero-days, not so much the actual zero-days.
4/12/2026, 6:19:19 AM
by: AlexandrB
The whole "this tool is too dangerous to be public" idea reeks of marketing. Just like all the "AI is an existential threat" talk a year ago. These companies are using ideas usually reserved for something like nuclear weapons to make their products look more impressive.
4/11/2026, 7:23:43 PM
by: elzbardico
I think Mythos's mojo probably comes from a lot of post-training on this kind of task.<p>I occasionally pick up contract work doing coding annotation to make some quick extra money, and a few months ago one of the projects was heavily focused on spotting common memory-access bugs in C and C++.
4/11/2026, 6:00:36 PM
by: charcircuit
The thesis that the system is more important than the model is not bitter lesson pilled. I would not bet on this in the long term. We will get to the point where you can just tell the model to go find and classify the severity of all security problems with a codebase.
4/11/2026, 7:43:27 PM
by: nickpsecurity
We've always had good tools for program analysis and testing. They're usually exorbitantly expensive.<p>I'm hoping the good results with AI models drive down the prices of traditional tools. Then we can train open models to integrate with them.
4/12/2026, 2:25:25 AM
by: JackYoustra
> Isolated the relevant code<p>I mean isn't that most of it? If you put a snippet of code in front of me and said "there's probably a vulnerability here" I could probably spend a few hours (a much lower METR time!) and find it. It's a whole other ballgame to ask me with no context to come up with an exploit.
4/11/2026, 5:21:41 PM
by: nickdothutton
PoC or GTFO should apply to AI models too, or the false-positive rate will overwhelm.
4/11/2026, 5:48:07 PM
by: npilk
Wouldn't this mean we're even more cooked? I've seen this page cited a few times as evidence that Mythos is no big deal, but if true then the same big deal is already out there with other models today.
4/11/2026, 7:28:48 PM
by: JoshTko
I bet Anthropic just had marketing strategy discussions with Mythos to get the "breakthrough hacking tool!" framing.
4/12/2026, 9:01:49 AM
by: krschacht
At the end of this article it states, "Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints." I'm not a cybersecurity expert, but isn't 80% of the challenge finding where the exploit lives in the code!?<p>That really undermines the author's claims. This article feels dishonest in its claim that "small, cheap, open-weights models ... recovered much of the same analysis."
4/13/2026, 2:14:23 PM
by: brador
I want that Doom thing but finding vulnerabilities using AI models.<p>Like I discovered a JavaScript vulnerability using a fridge.
4/12/2026, 7:55:56 AM
by: ptrwis
When you pair-program with AI, even Haiku is very good. Just treat it as your assistant.
4/12/2026, 10:21:34 AM
by: thywis
Sure, but it's more about whether the small model can find the vulnerabilities that the bigger model can.
4/11/2026, 10:12:37 PM
by: oliveiracwb
I trust miracle models about as much as I trust my uncle's memes or three-day prosperity courses.
4/11/2026, 10:44:39 PM
by: hamuraijack
This feels so dishonest. If the vulnerabilities are a needle in a haystack, Mythos was given the haystack and told to find the needle, while the authors pointed to a spot in the haystack and told their LLM to try looking around there. That's not even close to being the same.
4/13/2026, 2:37:23 PM
by: etothet
My big question around the Mythos FUD is this: if we take as fact that Mythos is as powerful and dangerous as we’re being told (and I realize this is part marketing), and because of that Anthropic isn’t going to release it… how long can that last? Isn’t it reasonable that OpenAI or xAI or some other company - or foreign government - will come up with a similarly dangerous model fairly soon?<p>So what’s Anthropic’s plan here? How long can they withhold releasing Mythos or something Mythos-like? Is it reasonable to think they - or another AI provider - are going to dumb down future models so they’re less dangerous? I personally don’t think that’s the case.<p>I’m not saying Anthropic should or shouldn’t release Mythos, but it leaves me wondering what’s going to be different in, say, 6 months or even a year when they or another provider releases a model as dangerous as we’re being told Mythos is?
4/11/2026, 11:24:30 PM
by: HarHarVeryFunny
Most of the comments here seems to be responding to the issue of finding vulnerabilities, rather than exploiting them, but the Anthropic claim is that the Mythos advance is being able to actually develop exploits whereas Opus 4.6 had been able to find vulnerabilities, but was poor at being able to develop exploits for them.<p>It's also noteworthy that Anthropic attributes Mythos' improvement to advances in "coding, reasoning and autonomy", and that the autonomy part seems especially important since they go on to say that trying to develop exploits included adding debug code to projects, running them under a debugger, etc.<p>When comparing the capabilities of Mythos to previous generation and/or smaller models, it seems it would therefore be useful to distinguish between identifying potential vulnerabilities and actually trying to build exploits for them in agentic fashion. Finding the "needle in a haystack" (potential vulnerability) is one aspect, but the other part is an agentic exploit-writing harness being handed the needle and asked to try to exploit it.<p>I wonder how much effort Anthropic put into building the harnesses and environments for Mythos to run, modify and debug code? For example, was Mythos set up to be able to build and run a modified BSD in some virtual environment, or did it just take suspect functions and test those in isolation?<p>It'd be interesting to put the capabilities of Opus 4.6, Mythos, and other models into perspective by comparing them to traditional non-AI static analysis security scanning tools. 
Anthropic mention that the open source projects they scanned came from the OSS-Fuzz corpus, but as far as I can see they don't say what other tools have, or have not, been used to scan these projects.<p>It'd also be interesting to know to what extent Mythos was explicitly RL trained to develop exploits (especially since it sounds as if Anthropic have the dataset and environment needed to do this) as opposed to this just being a natural consequence of the model being better. If this was the case then it might be a large part of why they are not releasing it - can't really position yourself as strong on security if you deliberately develop and release a hacking tool!
4/12/2026, 1:02:21 PM
by: jeffrwells
Anthropic has become a PR vaporware company
4/12/2026, 12:08:39 AM
by: tom-blk
Interesting comparison, cool article!
4/12/2026, 11:48:25 AM
by: pugazh35
Maybe P vs NP plays a silent role in it
4/12/2026, 12:21:13 AM
by: _pdp_
<p><pre><code> find ./ \( -name '*.c' -o -name '*.cpp' \) -exec agent.sh -p "can you spot any vulnerabilities in {}" \;</code></pre>
4/11/2026, 8:21:40 PM
by: omcnoe
The methodology here is completely wrong, outright dishonest.<p>Finding a needle in a haystack is easy if someone hands you the small handful of hay containing the needle up front, and raises their eyebrows at you saying “there might be a needle in this clump of hay”.
4/11/2026, 6:57:18 PM
by: cmiles8
Mythos is clearly a nice improvement. It’s also clear there’s a lot of unfounded hype around it to keep the AI hype cycle going.<p>Gating access is also a clever marketing move:<p>Option A: Release it but run out of capacity, everyone is annoyed and moves on. Drives focus back to smaller models.<p>Option B: A bunch of manufactured hype and putting up velvet ropes around it saying it’s “too dangerous” to let mere mortals touch it. Press buys it hook, line, and sinker, sidesteps the capacity issues and keeps the hype train going a bit longer.<p>Seems quite clear we’re seeing “Option B” play out here.
4/11/2026, 6:47:42 PM
by: hedgehog
It's strange to me they didn't reduce to PoC so the quantitative part is an apples-to-apples comparison. You don't need any fancy tooling, if you want to do this at home you can do something like below in whatever command line agent and model you like. A while back I did take one bug all the way through remediation just out of curiosity.<p>"""<p>Your task is to study the following directive, research coding agent prompting, research the directive's domain best practices, and finally draft a prompt in markdown format to be run in a loop until the directive is complete.<p>Concept: Iterative review -- study an issue, enumerate the findings, fix each of the findings, and then repeat, until review finds no issues.<p><directive><p>Your job is to run a security bug factory that produces remediation packages as described below. Design and apply a methodology based on best practices in exploit development, lean manufacturing, threat modeling, and the scientific method. Use checklists, templates, and your own scripts to improve token efficiency and speed. Use existing tools where possible. Use existing research and bug findings for the target and similar codebases to guide your search. Study the target's development process to understand what kind of harness and tools you need for this work, and what will work in this development environment. A complete remediation package includes a readme documenting the problem and recommendations, runnable PoC with any necessary data files, and proposed patch.<p>Track your work in TODO.md (tasks identified as necessary) LOG.md (chronological list of tasks complete and lessons) and STATUS.md (concise summary of the current work being done). Never let these get more than a few minutes out of date. At each step ensure the repo file tree would make sense to the next engineer, and if not reorganize it. 
Apply iterative review before considering a task complete.<p>Your task is to run until the first complete remediation package is ready for user review.<p>Your target is <repo url>.<p>The prompt will be run as follows, design accordingly. Once the process starts, it is imperative not to interrupt the user until completion or until further progress is not possible. Keep output at each step to a concise summary suitable for a chat message.<p>``` while output=$(claude -p "$(cat prompt.md)"); do echo "$output"; echo "$output" | grep -q "XDONEDONEX" && break; done ```<p></directive><p>Draft the prompt into prompt.md, and apply iterative review with additional research steps to ensure it will execute the directive as faithfully as possible.<p>"""
4/11/2026, 6:18:35 PM
by: ares623
Once again, it would've been so easy and simple to remove all doubt from their claims: release all the tools and harnesses they used to do it and allow 3rd parties to try and replicate their results using different models. If Mythos itself is as big a moat as they claim it is, then there shouldn't be any problem here.<p>They did the same stunt with the C compiler. They could've released a tool to let others replicate it, but they didn't.
4/11/2026, 11:51:16 PM
by: robotswantdata
They found a nail in a small bucket of sand, vs Mythos, which reviewed the entire beach.
4/11/2026, 5:28:49 PM
by: starboyy
Tagline is very funny
4/11/2026, 10:31:54 PM
by: palashdeb
Been tracking this since the blog post, quite a big deal they are making of it.
4/11/2026, 9:18:25 PM
by: bottlepalm
None of these comments will age well. I don't know if it is denial, or cope, or being threatened by AI or what, but no one is taking AI seriously enough. Simply take what is being presented at face value, stop thinking everything is a conspiracy, and realize the implications. Zero days in software are one thing; it's a hop, skip, and a jump from there to zero days in biology, and no one will be laughing about that.
4/12/2026, 6:29:49 AM
by: abhinaystha
Tech companies are just hyping their models so that the bubble won't burst so easily.
4/11/2026, 8:36:51 PM
by: ehtbanton
Wake me up when Anthropic does something right again...
4/12/2026, 11:46:44 PM
by: ctoth
> They recovered much of the same analysis<p>Really?<p>> We isolated the vulnerable vc_rpc_gss_validate function, provided architectural context (that it handles network-parsed RPC credentials, that oa_length comes from the packet), and asked eight models to assess it for security vulnerabilities.<p>No.
4/11/2026, 5:37:21 PM
by: nfcampos
Anthropic marketing (and even supposedly technical write-ups) <i>sadly</i> has become more hyperbole and less substance over time imo. This technology is so impressive on its own; it really feels like shooting themselves in the foot in the long run, but what do I know<p>Case in point here, where they conveniently fail to report the false positive rate, while also saying that if it wasn't for Address Sanitizer discarding all the false positives this system would have been next to useless
4/11/2026, 9:11:14 PM