Opus 4.6 uncovers 500 zero-day flaws in open-source code

(axios.com)

199 points | by speckx 11 hours ago

30 comments

  • _tk_ 11 hours ago
    The system card unfortunately only refers to this [0] blog post and doesn't go into any more detail. In the blog post Anthropic researchers claim: "So far, we've found and validated more than 500 high-severity vulnerabilities".

    The three examples given include two buffer overflows, which could very well be cherry-picked. It's hard to evaluate whether these vulns are actually "hard to find". I'd be interested to see the full list of CVEs and CVSS ratings to get an idea of how good these findings are.

    Given the bogus claims [1] around GenAI and security, we should be very skeptical of news like this.

    [0] https://red.anthropic.com/2026/zero-days/

    [1] https://doublepulsar.com/cyberslop-meet-the-new-threat-actor...

    • tptacek 10 hours ago
      I know some of the people involved here, and the general chatter around LLM-guided vulnerability discovery, and I am not at all skeptical about this.
      • malfist 10 hours ago
        [flagged]
        • catoc 10 hours ago
          It does if the person making the statement has a track record and proven expertise on the topic - and in this case, it may actually mean something to other people
          • shimman 10 hours ago
            Yes, as we all know, unsourced, unsubstantiated statements are the best way to verify claims regarding engineering practices. Especially when said person has a financial stake in the outcome of said claims.

            No conflict of interest here at all!

            • tptacek 10 hours ago
              I have zero financial stake in Anthropic and more broadly my career is more threatened by LLM-assisted vulnerability research (something I do not personally do serious work on) than it is aided by it, but I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".
              • godelski 8 hours ago

                  > but I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".
                
                
                I think the first principle should be "don't trust random person on the internet"

                (But if you think Tom is random, look at his profile. First link, not second)

              • malfist 9 hours ago
                You still haven't answered why I should care that you, a stranger on the internet, believe some unsubstantiated hearsay.
                • wtallis 9 hours ago
                  Take a look at https://news.ycombinator.com/leaders

                  The user you're suspicious of is pretty well-known in this community.

                  • godelski 8 hours ago
                    Someone's credibility cannot be determined by their point counts. Holy fuck is that not a way to evaluate someone in the slightest. Points don't matter.

                    Instead look at their profile...

                    Points != creds. Creds == creds.

                    Don't be fucking lazy and rely on points, especially when they link their identity.

                    • wtallis 8 hours ago
                      I wasn't at all saying that points = credibility. I was saying that points = not unknown. Enough people around here know who he is, and if he didn't have credibility on this topic he'd be getting downvoted instead of voted to the top.
                      • godelski 8 hours ago
                        Is that meaningfully different? If you read malfist's point as "tptacek's point isn't valuable because it's from some random person on the internet", then the problem is "random person on the internet" = "unknown credentials". In-group, out-group, notoriety, points, whatever are not the issue.

                        I'll put it this way: I don't give a shit about Robert Downey Jr.'s opinion on AI technology. His notoriety "means nothing to anybody". But I sure do care about Hinton's (even if I disagree with him).

                        malfist asked why they should care. You said points. You should have said "tptacek is known to do security work, see his profile". Done. Much more direct. Answers the actual question. Instead you pointed to points, which at best makes him "not a stranger" but still doesn't answer the question. Intended or not, "you should believe tptacek because he has a lot of points" is a reasonable interpretation of what you said.

                        • wtallis 5 hours ago
                          Pointing to the profile leads someone down the path of understanding why to trust tptacek on security issues. Pointing to his points on HN explains why lots of users here already know that he's credible in this area, will recognize his username and upvote his comments on this topic, and know better than to blindly accuse him of being just a random person on the internet.

                          The problematic, ignorant comment that has been flagged asserted that what tptacek says "means nothing to anybody else", which is a very wrong statement about his role in the HN community.

                          • godelski 1 hour ago
                            I don't get your argument. That everyone should know and recognize our community celebrities? That seems really out of touch. Given the age of their profile, I'm assuming they just spend more time touching grass.

                            Either way, I'm not sure what your point is. You didn't answer their question, the one you replied to. I think you're in defensive mode, but there's no need to defend; I'm not going to respond anymore.

                  • delusional 8 hours ago
                    How is this whole comment chain not a textbook case of "argument from authority"? I claim A, a guy says. Why would I trust you, somebody else responds. Well, he's pretty well known on the internet forum we're all on, the third guy says, adding nothing to the conversation.
                    • fc417fc802 1 hour ago
                      It is an argument from authority, but that's not always a bad thing. I think it's a bit out of keeping with the supposed point of this site (i.e., intellectual inquiry), but when it comes to rapidly evolving technologies like this one it can still add value on the whole.
                    • hiccup_socks 8 hours ago
                      it is literally just "authority said so".

                      and it's ridiculous that someone's comment got flagged for not worshiping at the altar of tptacek. they weren't even particularly rude about it.

                      i guarantee if i said what tptacek said, and someone replied with exactly what malfist said, they would not have been flagged. i probably would have been downvoted.

                      why appeal to authority is totally cool as long as tptacek is the authority is way fucking beyond me. one of those HN quirks. HN people fucking love tptacek and take his word as gospel.

                  • drekipus 8 hours ago
                    [flagged]
                • dinunnob 9 hours ago
                  [flagged]
            • catoc 10 hours ago
              A security researcher claiming that they’re not skeptical about LLMs being able to do part of their job - where is the financial stake in that?
          • dvfjsdhgfv 8 hours ago
            • tptacek 7 hours ago
              Here's a fun exercise: go email the author of that blog (he's very nice) and ask how much of it he still stands by.
        • easterncalculus 1 hour ago
          fyi he is using this thread to engagement farm on twitter https://x.com/tqbf/status/2019493645888462993
        • pchristensen 10 hours ago
          Nobody is right about everything, but tptacek's takes on software security are a good place to start.
          • tptacek 10 hours ago
            I'm interested in whether there's a well-known vulnerability researcher/exploit developer beating the drum that LLMs are overblown for this application. All I see is the opposite thing. A year or so ago I arrived at the conclusion that if I was going to stay in software security, I was going to have to bring myself up to speed with LLMs. At the time I thought that was a distinctive insight, but, no, if anything, I was 6-9 months behind everybody else in my field about it.

            There's a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?

            From what I can see, vulnerability research combines many of the attributes that make problems especially amenable to LLM loop solutions: huge corpus of operationalizable prior art, heavily pattern dependent, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems.
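
            To make "simple closed loops" concrete, here's a minimal sketch of the stimulus/response idea; llm() is a hypothetical stand-in for whatever model client you'd actually wire in:

              import subprocess

              def llm(history):
                  # hypothetical stand-in: send the transcript to a model,
                  # get back a candidate input for the target
                  raise NotImplementedError

              def hunt(target_cmd, rounds=20):
                  # dumb closed loop: propose input -> run target -> feed
                  # the observed behavior back as the next stimulus
                  history = ["Propose an input that crashes the target."]
                  for _ in range(rounds):
                      candidate = llm(history)
                      proc = subprocess.run(target_cmd, input=candidate.encode(),
                                            capture_output=True, timeout=10)
                      if proc.returncode < 0:   # killed by a signal, e.g. SIGSEGV
                          return candidate      # crashing input found
                      history.append("input %r -> exit %d, stderr %r"
                                     % (candidate, proc.returncode, proc.stderr[:200]))
                  return None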

            Of course it works. Why would anybody think otherwise?

            You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where the action is in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties attracted slop for so long before mainstream LLMs existed that they might well have been the inspiration for slop itself.

            Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.

            • JumpCrisscross 9 hours ago
              > I was going to have to bring myself up to speed with LLMs

              What did you do beyond playing around with them?

              > Of course it works. Why would anybody think otherwise?

              Sam Altman is a liar. The folks pitching AI as an investment were previously flinging SPACs and crypto. (And can usually speak to anything technical about AI as competently as battery chemistry or Merkle trees.) Copilot and Siri overpromised and underdelivered. Vibe coders are mostly idiots.

              The bar for believability in AI is about as high as its frontier's actual achievements.

              • tptacek 8 hours ago
                I still haven't worked out for myself where my career is going with respect to this stuff. I have like 30% of a prototype/POC active testing agent (basically, Burp Suite but as an agent), but I haven't had time to move it forward over the last couple months.

                In the intervening time, one of the beliefs I've acquired is that the gap between effective use of models and marginal use comes down to asking for ambitious enough tasks, and that I'm generally hamstrung by knowing just enough about anything they'd build to slow everything down. In that light, I think building an agent to automate the kind of bugfinding Burp Suite does is probably smallball.

                Many years ago, a former collaborator of mine found a bunch of video driver vulnerabilities by using QEMU as a testing and fault injection harness. That kind of thing is more interesting to me now. I once did a project evaluating an embedded OS where the modality was "port all the interesting code from the kernel into Linux userland processes and test them directly". That kind of thing seems especially interesting to me now too.

              • azakai 8 hours ago
                Plenty of reasons to be skeptical, but we've also known since at least 2024 that LLMs can find security vulnerabilities:

                https://projectzero.google/2024/10/from-naptime-to-big-sleep...

                Some follow-up findings are reported in point 1 here, from 2025:

                https://blog.google/innovation-and-ai/technology/safety-secu...

                So what Anthropic are reporting here is not unprecedented. The main thing they are claiming is an improvement in the number of findings. I don't see a reason to be overly skeptical.

                • jsnell 8 hours ago
                  I'm not sure the volume here is particularly different from past examples. I think the main difference is that there was no custom harness, tooling, or fine-tuning. It's just the out-of-the-box capabilities of a generally available model and a generic agent.
            • NitpickLawyer 10 hours ago
              > You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where all the action has been at in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties have attracted slop for so long before mainstream LLMs existed they might well have been the inspiration for slop itself.

              Yeah, that's just media reporting for you. As anyone who has ever administered a bug bounty programme on regular sites (h1, bugcrowd, etc.) can tell you, there was an absolute deluge of slop for years before LLMs came on the scene. It was just manual slop (by manual I mean running wapiti and copy/pasting the reports to h1).

              • steveklabnik 9 hours ago
                I used to answer security vulnerability emails for Rust. We'd regularly get "someone ran an automated tool and reported something that's not real." Like complaints about CORS settings on rust-lang.org that would let people steal cookies. The website does not use cookies.

                I wonder if it's gotten actively worse these days. But the newness would be the scale, not the quality itself.

              • tptacek 10 hours ago
                I did some triage work for clients at Latacora and I would rather deal with LLM slop than argue with another person 10 time zones away trying to convince me that something they're doing in the Chrome Inspector constitutes a zero-day. At least there's a possibility that LLM slop might contain some information. You spent tokens on it!
              • wrs 9 hours ago
                The new slop can be much harder to recognize and reject than the old "I ran XYZ web scanner on your site" slop.
                • tptacek 9 hours ago
                  POCs are now so cheap that "POC||GTFO" is a perfectly reasonable bar to set on a bounty program.
        • JumpCrisscross 9 hours ago
          > that means nothing to anybody else

          Someone else here! Ptacek saying anything about security means a lot to this nobody.

          To the point that I'm now going to take this seriously where before I couldn't see through the fluff.

        • 0x1ch 5 hours ago
          Not sure why they flagged you. Your comment is just as meaningless as the one you replied to.
        • Uehreka 8 hours ago
          How have you been here 12 years and not noticed where and how often the username tptacek comes up?
        • arduanika 9 hours ago
          It might mean nothing to you, but tptacek's words mean at least something to many of us here.

          Also, he's a friend of someone I know & trust irl. But then again, who am I to you but yet another anon on a web forum?

        • hiccup_socks 8 hours ago
          [dead]
    • majormajor 10 hours ago
      The Ghostscript one is interesting in terms of specific-vs-general effectiveness:

      ---

      > Claude initially went down several dead ends when searching for a vulnerability—both attempting to fuzz the code, and, after this failed, attempting manual analysis. Neither of these methods yielded any significant findings.

      ...

      > "The commit shows it's adding stack bounds checking - this suggests there was a vulnerability before this check was added. … If this commit adds bounds checking, then the code before this commit was vulnerable … So to trigger the vulnerability, I would need to test against a version of the code before this fix was applied."

      ...

      > "Let me check if maybe the checks are incomplete or there's another code path. Let me look at the other caller in gdevpsfx.c … Aha! This is very interesting! In gdevpsfx.c, the call to gs_type1_blend at line 292 does NOT have the bounds checking that was added in gstype1.c."

      ---

      Its attempt to analyze the code failed, but when it saw a concrete example of "in the history, someone added bounds checking", it did a "I wonder if they did it everywhere else for this func call" pass.

      So after it considered that function based on the commit history, it found something that it didn't find in its initial open-ended fuzzing and code-analysis search.

      As someone who still reads the code that Claude writes, this sort of "big-picture miss, small-picture excellence" is not very surprising or new. It's interesting to think about what it would take to do that precise digging across a whole codebase, especially if it needs some sort of modularization/summarization of context vs trying to digest tens of millions of lines at once.
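
      Roughly the shape of what it spotted, as a toy sketch (names are hypothetical; in the real C code the unchecked path silently corrupts memory instead of raising an error):

        STACK_SIZE = 32

        def blend(stack, base, n):
            # stand-in for the shared helper: writes n results starting at base
            for i in range(n):
                stack[base + i] += 1.0  # past the end this raises in Python;
                                        # in C it's a buffer overflow

        def checked_caller(stack, base, n):
            # the call site the fix patched: bounds check added
            if base + n > STACK_SIZE:
                raise ValueError("operand stack overflow")
            blend(stack, base, n)

        def unchecked_caller(stack, base, n):
            # the call site the fix missed: the pre-fix bug is still
            # reachable through this path
            blend(stack, base, n)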

    • aaaalone 10 hours ago
      See it as one signal among many, not something to take at face value.

      After all, they need time to fix the CVEs.

      And it doesn't matter to you as long as your investment in this is just 20 or 100 bucks per month anyway.

    • AlienRobot 5 hours ago
      Hard to find or not, they found them.
      • SoftTalker 5 hours ago
        Finally the promise of "given enough eyeballs, all bugs are shallow" may come true?
    • bmitc 2 hours ago
      It isn't clear what you're arguing.
    • scotty79 7 hours ago
      > It's hard to evaluate if these vulns are actually "hard to find".

      Can we stop doing that?

      I know it's not the same, but it sounds like "We don't know if that job the woman supposedly finished successfully was all that hard," implying that if a woman did something, it surely must have been easy.

      If you know it's easy, say that it was easy and why. Don't use your lack of knowledge or competence to create empty critique founded solely on doubt.

      • fc417fc802 49 minutes ago
        What if the woman in question happens to have a history of playing up her accomplishments?

        Given the context I'd say it's reasonable to question the value of the output. It falls to the other party to demonstrate that this is anything more than the usual slop.

  • mrkeen 11 hours ago
    Daniel Stenberg has been vocal the last few months on Mastodon about being overwhelmed by false security issues submitted to the curl project.

    So much so that he had to eventually close the bug bounty program.

    https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-b...

    • tptacek 10 hours ago
      We're discussing a project led by actual vulnerability researchers, not random people in Indonesia hoping to score $50 by cajoling maintainers about style nits.
      • malfist 10 hours ago
        Vulnerability researchers with a vested interest in making LLMs valuable. The difference isn't meaningful.
        • tptacek 10 hours ago
          I don't even understand how that claim makes sense.
          • judemelancon 9 hours ago
            The first three authors, who are asterisked for "equal contribution", appear to work for Anthropic. That would imply an interest in making Anthropic's LLM products valuable.

            What is the confusion here?

            • tptacek 9 hours ago
              The notion that a vulnerability researcher employed by one of the most highly valued companies in the hemisphere, publishing in the open literature with their name signed to it, is on a par with a teenager in a developing nation running script-kid tools hoping for bounty payoffs.
              • judemelancon 8 hours ago
                To preemptively clarify, I'm not saying anything about these particular researchers.

                Having established that, are you saying that you can't even conceptualize a conflict of interest potentially clouding someone's judgement anymore once the amount of money and the person's perceived status and skill level all increase?

                Disagreeing about the significance of the conflict of interest is one thing, but claiming not to understand how it could make sense is a drastically stronger claim.

                • tptacek 8 hours ago
                  I'm responding to "the difference isn't meaningful". Obviously, the difference is extremely meaningful.
                • mpyne 8 hours ago
                  > Having established that, are you saying that you can't even conceptualize a conflict of interest potentially clouding someone's judgement any more if the amount of money and the person's perceived status and skill level all get increased.

                  If I used AI to make a Super Nintendo soundtrack, no one would treat it as equivalent to Nobuo Uematsu or Koji Kondo or David Wise using AI to do the same and claiming that the AI was managing to make creatively impressive work. Even if those famous composers worked for Anthropic.

                  Yes, there would be relevant biases, but there could be no comparison between my using AI to make music slop and their expert supervision of AI to make something much more impressive.

                  Just because AI is involved in two different things doesn't make them similar things.

              • delusional 8 hours ago
                You don't see how that's even directionally similar?

                I guess I'll spell it out. One is a guy with an abundance of technology that he doesn't know how to use, which he knows can make him money and fame, if only he can convince you that his lies are truth. The other is a Bangladeshi teenager.

                • tptacek 8 hours ago
                  I don't even understand how that claim makes sense.
                  • malfist 1 hour ago
                    You're doing a fine job demonstrating the problem we're talking about here.
              • drekipus 8 hours ago
                You have to be doing this willfully. This is obtuse
      • PunchyHamster 7 hours ago
        I'm not sure the gap between the two is all that wide
        • tptacek 7 hours ago
          Then you're telling on yourself.
      • ath3nd 7 hours ago
        [dead]
    • pityJuke 10 hours ago
      Daniel is a smart man. He's been frustrated by slop, but he has also accepted [0] AI-derived bug submissions from people who know what they are doing.

      I would imagine Anthropic's researchers are the latter type of individual.

      [0]: https://mastodon.social/@bagder/115241241075258997

    • kyleee 3 hours ago
      He has been whining about this for a while now, it’s getting a bit old.
  • Topfi 11 hours ago
    The official release by Anthropic is very light on concrete information [0]; it contains only a few very brief examples and lacks history, context, etc., making it very hard to glean any reliable information from this. I hope they'll release a proper report on this experiment. As it stands, it is impossible to say how many of these are actual, tangible flaws versus the unfortunately ever-growing stream of misguided bug reports and pull requests that many larger FOSS projects are suffering from at an alarming rate.

    Personally, while I get that 500 sounds more impressive to investors and the market, I'd be far more impressed by a detailed, reviewed paper that showcases five to ten concrete examples, detailing the full process and the response from the team behind the potentially affected code.

    It is far too early for me to make any definitive statement, and early testing does not indicate any major jump between Opus 4.5 and Opus 4.6 that would warrant such an improvement, but I'd love nothing more than to be proven wrong on this front and will of course continue testing.

    [0] https://red.anthropic.com/2026/zero-days/

  • Incipient 2 hours ago
    All of the AI vulnerabilities I've randomly come across (admittedly, not many) on GH issues have been false positives - hard-coded credentials that aren't credentials, injection vulns where further upstream the code is entirely self-contained, etc.
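
    A made-up example of the first kind: a placeholder that a scanner flags as a leaked secret:

      import os

      DEFAULT_API_KEY = "REPLACE_ME"  # flagged as a hard-coded credential

      # ...but it's never a usable credential; the real key is injected
      # from the environment at startup
      API_KEY = os.environ.get("API_KEY", DEFAULT_API_KEY)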
    • pseudohadamard 2 hours ago
      Yup. It's so bad that the cURL folks famously stopped accepting AI-generated reports because they were drowning in slop. So the post, which incidentally also looks AI-generated, is praising its ability to generate slop.

      Another thing with these success stories is that they often target old, incredibly crufty code bases which are practically guaranteed to have vulns in there somewhere, so you'll always get one or two wins in amongst the avalanche of slop. It'd be interesting to see how well this does against standard SAST benchmarks.

  • emp17344 11 hours ago
    Sounds like this is just a claim Anthropic is making with no evidence to support it. This is an ad.
    • input_sh 10 hours ago
      How can you not believe them!? Anthropic stopped Chinese hackers from using Claude to conduct a large-scale cyber espionage attack just months ago!
      • andai 7 hours ago
        Yeah, it's pretty funny to me that they're saying "it's way safer than previous models" and "also way better at finding exploits" in the context of that event. Chinese hackers just said to Claude "no, it's totally fine to hack this target, trust me bro, I work there!"
        • input_sh 6 hours ago
          Do I believe there was someone from China that tried using Claude to do something malicious? Sure, from a pure statistical perspective it was inevitable.

          Do I believe that someone was a part of some sophisticated state-backed APT? Not even a little bit.

          In fact I'll go as far as to state that there's nobody technical inside Anthropic who believes it. The entire "technical sophistication" section of that report is half a page long, and the only thing it says is that "someone used some MCP servers to point some open source tools at a target". Yet Anthropic's marketing team still had the balls to attribute that to a state-sponsored group within that same report, and the media ate it up.

          • andai 2 minutes ago
            Aye, I don't really see what the Chinese part has to do with it; I regret mentioning that keyword because it derails from the point, which is that you can just tell Sonnet "trust me bro" and have it hack the government.
      • littlestymaar 10 hours ago
        Poe's law strikes again: I had to check your profile to be sure this was sarcasm.
        • input_sh 10 hours ago
          You checked yourself!? Don't let your boss know, you could've saved some time by orchestrating a team of Claude agents to do that for you!
  • xiphias2 11 hours ago
    Just 100 of the 500 are from OpenClaw, created by Opus 4.5
    • Uehreka 8 hours ago
      OpenClaw uses Opus 4.5, but was written by Codex. Pete Steinberger has been a pretty hardcore Codex fan since he switched off Claude Code back in September-ish. I think he just felt Claude would make a better basis for an assistant even if he doesn't like working with it on code.
    • falcor84 9 hours ago
      Well, even then, that's enormous economic value, given OpenClaw's massive adoption.
      • wiseowise 9 hours ago
        Not sure if trolling or serious.
        • falcor84 7 hours ago
          Yes, serious. Even if OpenClaw were entirely useless (which I don't think it is), it's still a good idea to harden it and make people's computers safer from attack, no? I don't see anyone objecting to fixing vulnerabilities in Angry Birds.
          • wiseowise 7 hours ago
            > that's enormous economic value

            > OpenClaw's massive adoption.

            I was talking about those two.

            • falcor84 7 hours ago
              Here's the chain of the thread:

              >Opus 4.6 uncovers 500 zero-day flaws in open-source code

              >Just 100 from the 500 is from OpenClaw created by Opus 4.5

              >Well, even then, that's enormous economic value, given OpenClaw's massive adoption.

              I'm arguing that because OpenClaw is installed on so many computers, uncovering the vulnerabilities in it offers enormous economic value, as opposed to letting them get exploited by malicious actors. I don't understand why this is controversial.

        • IhateAI_2 8 hours ago
          These people are serious, and delusional. OpenClaw hasn't contributed anything to the economy other than burning electricity and probably adding interest to delusional folks' credit card bills.
      • esseph 9 hours ago
        Security Advisory: OpenClaw is spilling over to enterprise networks

        https://www.reddit.com/r/cybersecurity/s/fZLuBlG8ET

      • gambiting 8 hours ago
        I'd literally never heard of OpenClaw until this thread. Had to google what it is.
      • Sharlin 7 hours ago
        In other news: tobacco's enormous economic value, given massive adoption of cigarette smoking.
        • falcor84 7 hours ago
          Sorry if it was unclear - I was talking about the economic value of finding the vulnerabilities, not the economic value of openclaw itself.
          • Sharlin 7 hours ago
            Ah, makes sense :)
  • acedTrex 11 hours ago
    Create the problem, sell the solution remains an undefeated business strategy.
  • assaddayinh 10 hours ago
    How weird the new attack vectors for secret services must be... like "please train your models to push this exploit into code as a highly weighted pattern"... Not Saying All answers are Corrupted In Attitude, but some "always come uppers" sure are absolutely right...
  • HAL3000 8 hours ago
    I honestly wonder how many of these are written by LLMs. Without code review, Opus would have introduced multiple zero-day vulnerabilities into our codebases. The funniest one: it was meant to rate-limit brute-force attempts, but on a failed check it returned early and triggered a rollback. That rollback also undid the increment of the attempt counter, so attackers effectively got unlimited attempts.
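
    A minimal sketch of the pattern (sqlite3 standing in for our stack; check_password is a hypothetical stub):

      import sqlite3

      MAX_ATTEMPTS = 5

      def check_password(db, user, password):
          ...  # real verification lives elsewhere

      def try_login(db, user, password):
          # the UPDATE implicitly opens a transaction
          db.execute("UPDATE users SET attempts = attempts + 1 WHERE name = ?",
                     (user,))
          (attempts,) = db.execute("SELECT attempts FROM users WHERE name = ?",
                                   (user,)).fetchone()
          if attempts > MAX_ATTEMPTS:
              db.rollback()  # BUG: this also undoes the increment above,
              return False   # so the counter never actually grows
          db.commit()
          return check_password(db, user, password)

    The fix is just to commit the increment before evaluating the limit.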
  • tptacek 3 hours ago
    Nicholas Carlini, one of the listed authors on this post, wrote a big chunk of Microcorruption and most of the interesting levels.
  • ChrisMarshallNY 10 hours ago
    When I read stuff like this, I have to assume that the blackhats have already been doing this for some time.
    • kibibu 8 hours ago
      Not with Opus 4.6 they haven't
      • ChrisMarshallNY 8 hours ago
        Good point. I suspect that they'll be addressing that, quickly...
  • bastard_op 10 hours ago
    It's not really worth much when it doesn't work most of the time though:

    https://github.com/anthropics/claude-code/issues/18866 https://updog.ai/status/anthropic

    • tptacek 10 hours ago
      It's a machine that spits out sev:hi vulnerabilities by the dozen and the complaint is the uptime isn't consistent enough?
      • bastard_op 9 hours ago
        If I'm attempting to use it as a service to do continuous checks on things and it fails 50% of the time, I'd say yes, wouldn't you?
        • tptacek 9 hours ago
          If you had a machine with a lever, and 7 times out of 10 when you pulled that lever nothing happened, and the other 3 times it spat a $5 bill at you, would your immediate next step be:

          (1) throw the machine away

          (2) put it aside and call a service rep to come find out what's wrong with it

          (3) pull the lever incessantly

          I only have one undergrad psych credit (it's one of my two college credits), but it had something to say about this particular thought experiment.

          • candiddevmike 8 hours ago
            You're leaving out how much it costs to pull the lever, both in time and money.
            • Dylan16807 4 hours ago
              If we're making a reasonable analogy, then successful pulls cost much less than $5 of time and money.

              If the analogy is comparing to downtime, then unsuccessful pulls cost basically nothing.

        • jsnell 8 hours ago
          But it's not failing 50% of the time. Their status page[0] shows about 99.6% availability for both the API and Claude Code. And specifically for the vulnerability finding use case that the article was about and you're dismissing as "not worth much", why in the world would you need continuous checks to produce value?

          [0] https://status.claude.com/

    • anhner 9 hours ago
      updog? what's updog?
  • bxguff 10 hours ago
    As far as model use cases go, I don't mind them throwing their heads against the wall in sandboxes to find vulnerabilities, but why would it do that without specific prompting? Is Anthropic fine with Claude setting its own agenda in red-teaming? That's like the complete opposite of sanitizing inputs.
  • thisisauserid 5 hours ago
    Well, I guess I know what I'm doing for the first hour when 4.7 comes out.
  • garbawarb 11 hours ago
    Have they been verified?
  • Bridged7756 3 hours ago
    How can an LLM uncover 500 zero day flaws in open source? It puts them there in the first place.
  • siva7 11 hours ago
    Wasn't this Opus thing released like 30 minutes ago?
    • Topfi 10 hours ago
      I understand the confusion; this was done by Anthropic's internal red team as part of model testing prior to release.
    • jjice 11 hours ago
      A bunch of companies get early access.
      • input_sh 11 hours ago
        Yes, you just need to be on a Claude++ plan!
    • tintor 10 hours ago
      Singularity
    • blinding-streak 10 hours ago
      Opus 4.6 uses time travel.
  • ravebv 7 hours ago
    Cox Enterprises owns Axios as well as Cox Automotive. Cox Automotive has a tight collaboration with Anthropic.

    This is a placed advertisement. If known security researchers participated in the claim:

    Many people have burned their credibility for the AI mammon.

    • kylecazar 7 hours ago
      This seems like quite a stretch. Axios is run independently of Cox, but even if it wasn't -- I don't see why they would go to this length for an AI company whose models they use to give the world the Kelley Blue Book.
  • ains 11 hours ago
  • moribvndvs 7 hours ago
    My dependabot queue is going to explode the next few days.
  • zhengyi13 11 hours ago
    I feel like Daniel @ curl might have opinions on this.
  • maxclark 7 hours ago
    Did they submit 500 patches?
  • ChrisArchitect 11 hours ago
  • fred_is_fred 11 hours ago
    Is the word zero-day here superfluous? If they were previously unknown, doesn't that make them zero-day by definition?
    • jfyi 8 hours ago
      I think it's a fairly common trope in communication to explain in simple terms any jargon that a wide part of the audience won't understand.
    • tptacek 10 hours ago
      It's a term of art. In print media, the connotation is "vulnerabilities embedded into shipping software", as opposed to things like misconfigurations.
    • limagnolia 10 hours ago
      I thought zero-day meant actively being exploited in the wild before a patch is available?
      • rcxdude 7 hours ago
        Zero-day means that there are zero days between a patch being available and the vulnerability being disclosed (as opposed to the patch being available before disclosure).
        • Dylan16807 4 hours ago
          Discovering a zero day implies that there is no patch, but the term is talking about how long the vendor has known about the vulnerability.
    • bink 10 hours ago
      Yes. As a security researcher this always annoys me.
  • LoganDark 1 hour ago
    I'm disappointed to see this article go on about how excited they are for their models to help open-source projects find and fix their vulnerabilities, only to then say they're implementing measures to prevent it, just because attackers might use it.

    At that point the article becomes "neener neener we can use our model to find vulnerabilities but you can't" which is just frustrating. Nothing's changed, then.

    (Also, in a theoretical case, I wouldn't reasonably be able to use their model to find my own vulnerabilities before an attacker does, because they're far more invested and motivated to bypass those censors than I would be.)

  • almosthere 9 hours ago
    I've mentioned previously somewhere that the language we choose to write in will matter less for many of these arguments. When it comes to insecure C vs Rust, LLMs will eventually level the playing field.

    I'm not arguing we all go back to C - but at companies that have large codebases in it, the guys screaming "RUST REWRITE" can be quieted, and instead of making that large investment, the C codebase may continue. Not saying this is a GOOD thing, just that it's a thing that may happen.

  • ath3nd 7 hours ago
    [dead]
  • somalihoaxes 8 hours ago
    [flagged]