Believe the Checkbook

(robertgreiner.com)

90 points | by rg81 5 hours ago

10 comments

  • RandallBrown 4 hours ago
    > The bottleneck isn’t code production, it is judgment.

    It always surprises me that this isn't obvious to everyone. If AI wrote 100% of the code that I do at work, I wouldn't get any more work done because writing the code is usually the easy part.

    • phantasmish 17 minutes ago
      At my company, doubling the speed of the code-writing part of software projects might speed them up 5%. I think even that’s optimistic.

      Imperfectly fixing obvious problems in our processes could gain us 20%, easy.

      Which one are we focusing on? AI. Duh.

    • skybrian 2 hours ago
      I'm retired now, but I spent many hours writing and debugging code during my career. I believed that implementing features was what I was being paid to do. I was proud of fixing difficult bugs.

      A shift to not writing code (which is apparently sometimes possible now) and managing AI agents instead is a pretty major industry change.

    • linhns 2 hours ago
      Well, you should be surprised by the number of people who don't know this. Klarna is probably the most popular example: the CEO was all in on AI creating more code, then fired everyone, and came to regret it.
    • xnx 38 minutes ago
      Lots of people have good judgement but don't know the arcane spells to cast to get a computer to do what they want.
    • add-sub-mul-div 3 hours ago
      I'll stare at a blank editor for an hour with three different solutions in my head that I could implement, and type nothing until a good enough one comes to mind that will save/avoid time and trouble down the road. The solution I finally pick isn't best for any simple reason like algorithmic complexity or anything that can be scraped from websites.
      • aaroninsf 1 hour ago
        No shade on your skills, but for most problems, this is already false; the solutions have already been scraped.

        All OSS has been ingested, and all the discussion in forums like this about it, and the personal blog posts and newsletters about it; and the bug tracking; and the pull requests, and...

        and training etc. is only going to get better at filtering out what is "best."

        • al_borland 35 minutes ago
          The vast majority of the problems I’m asked to solve at work do not have open-source code I can simply copy or discussion forums that have already decided the best answer. Enterprise customers rarely put that stuff out there. Even if they did, it doesn’t account for the environment the solution sits in, possible future integrations, off-the-wall requests from the boss, or knowing that internal customer X is going to want some other wacky thing, so we need to make life easy on our future selves.

          At best, what I find online are basic day-1 tutorials and proof-of-concept stuff. None of it could be used in production, where we actually need to handle errors and possible failure situations.

        • add-sub-mul-div 4 minutes ago
          The point is that the best solution depends on the specific context of my situation, and the right judgment couldn't be known by anyone outside of my team/org.
    • gowld 3 hours ago
      I don't understand this thinking.

      How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?

      Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?

      • kibwen 3 hours ago
        "Writing code" is not the goal. The goal is to design a coherent logical system that achieves some goal. So the practice of programming is in thinking hard about what goal I want to achieve, then thinking about the sort of logical system that I could design that would allow me to verifiably achieve that goal, then actually banging out the code that implements the abstract logical system that I have in my head, then iterating to refine both the abstract system and its implementation. And as a result of being the one who produced the code, I have certainty that the code implements the system I have in mind, and that the system it represents is for for the purpose of achieving the original goals.

        So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.

        And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.

        The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.

        • ryandrake 2 hours ago
          The actual act of typing code into a text editor and building it could be the least interesting and least valuable part of software development. A developer who sees their job as "writing code" or a company leader who sees engineers' jobs as "writing code" is totally missing where the value is created.

          Yes, there is artistry, craftsmanship, and "beautiful code" which shouldn't be overlooked. But I believe that beautiful code comes from solid ideas, and that ugly code comes from flawed ideas. So, as long as the (human-constructed) idea is good, the code (whether it is human-typed or AI-generated) should end up beautiful.

          • RunSet 11 minutes ago
            Raising the question: Where is the beautiful machine-generated code?
      • RandallBrown 3 hours ago
        In my experience (and especially at my current job) bottlenecks are more often organizational than technical. I spend a lot of time waiting for others to make decisions before I can actually proceed with any work.

        My judgement is built into the time it takes me to code. I think I would spend the same amount of time exercising it while reviewing the AI code to make sure it isn't doing something silly (even if it does technically work).

        A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small 2 or 3 person company.

        I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.

      • jgeada 2 hours ago
        All you did was change the programming language from (say) Python to English. One is designed to be a programming language, with few ambiguities etc. The other is, well, English.

        The speed of typing code is not all that different from the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of reading and understanding whatever code the AI wrote.

        • rootusrootus 37 minutes ago
          Exactly. LLMs are faster for me when I don't care too much about the exact form the functionality takes. If I want precise results, I end up using more natural language to direct the LLM than it takes if I just write that part of the code myself.

          I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.

      • layer8 2 hours ago
        > Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?

        It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.

      • scott_w 3 hours ago
        I think OP is closer to the latter. How I typically have been using Copilot is as a faster autocomplete that I read and tweak before moving on. Too many years of struggling to describe a task to Siri left me deciding “I’ll just show it what I want” rather than tell.
  • zamadatix 3 hours ago
    Something about the way the article sets up the conversation nags at me a bit, even though it concludes with statements and reasoning I generally agree with. It sets out what it wants to argue clearly at the start:

    > Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished... The Bun acquisition blows a hole in that story.”

    But what the article actually discusses and demonstrates by the end is that the aspects of engineering beyond writing the code are where the value of human engineers lies at this point. To me that doesn't seem like an example of a revealed preference. If you take it back to the first part of the original quote above, it's just a different wording for AI being the code writer and engineering being something different.

    I think what the article really means to argue against is the claim/conclusion "because AI can generate lots of code, we don't need any type of engineer", but that's just not what the quote it chose to set out against is saying. Without changing that claim, the acquisition of Bun is not really a counterexample; Bun had just already changed the way it does engineering, so the AI wrote the code and the engineers did the other things.

    • croes 3 hours ago
      But the engineers can do it because they have written lots of code before. Where will these engineers get their experience in the future?

      And what about vibe coding? The whole point and selling point of many AI companies is that you don’t need experience as a programmer.

      So they’re selling something that isn’t true: it’s not FSD for coding, it’s driver assistance.

      • zamadatix 2 hours ago
        These are all things I'd rather have seen the article talk about. Instead, it sets out to disprove the statement that AI can write the coding portion of engineering by pointing to Bun, where AI is being used exactly that way, as if that showed Anthropic must not actually believe it.
    • fwip 2 hours ago
      I mean, it smells like an AI slop article, so it's hard to expect much coherence.
  • faxmeyourcode 1 hour ago
    While I agree with the premise of the article, even if it was a bit shallow, this claim made at the beginning is also still true:

    > Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”

    Software engineering pre-LLMs will never, ever come back. Lots of folks don't understand that. What we're doing at the end of 2025 looks so different from what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.

  • neilv 4 hours ago
    > Treat AI as force multiplication for your highest-judgment people. The ones who can design systems, navigate ambiguity, shape strategy, and smell risk before it hits. They’ll use AI to move faster, explore more options, and harden their decisions with better data.

    Clever pitch. Don't alienate all the people who've hitched their wagons to AI, but push valuing highly-skilled ICs as an actionable leadership insight.

    Incidentally, strategy and risk management sound like a pay grade bump may be due.

  • conductr 4 hours ago
    People speak in relative terms and hear in absolutes. Engineers will never completely vanish, but it will certainly feel like it if labor demand is reduced enough.

    Technically, there’s still a horse buggy whip market, an abacus market, and probably anything else you think technology consumed. It’s just a minuscule fraction of what it once was.

    • marcosdumay 3 hours ago
      > but it will certainly feel like it if labor demand is reduced enough

      All the last productivity multipliers in programming led to increased demand. Do you really think the market is saturated now? And what saturated it is one of the least impactful "revolutionary" tools we got in our profession?

      Keep in mind that looking at statistics won't lead to any real answer; everything is manipulated beyond recognition right now.

  • TheCraiggers 1 hour ago
    How do I know they didn't buy them just to make sure their competitors couldn't?
    • kubb 1 hour ago
      Can anyone tell me the leading theory explaining the acquisition?

      I can’t see how buying a runtime for the sake of Claude Code makes sense.

  • jollyllama 3 hours ago
    "Believe the checkbook? Why do that when I can get pump-faked into strip-mining my engineering org?"- VPs everywhere
  • hapless 5 hours ago
    The ten-dollar word for this is “revealed preferences”
    • recursive 4 hours ago
      I learned that phrase from one of the bold sentences in this article.
  • drcode 3 hours ago
    The Bun acquisition is driven by current AI capabilities.

    This argument requires us to believe that AI will just asymptote and not get materially better.

    Five years from now, I don't think anyone will make these kinds of acquisitions anymore.

    • nitwit005 4 minutes ago
      An Anthropic engineer was getting some attention for saying six months: https://www.reddit.com/r/ClaudeAI/comments/1p771rb/anthropic...

      I assume this is at least partially a response to that. They wouldn't buy a company now if it would actually happen that fast.

    • 0x3f 3 hours ago
      > This argument requires us to believe that AI will just asymptote and not get materially better.

      That's not what asymptote means. Presumably what you mean is the curve levelling off, which it is already doing.

      • SoftTalker 2 hours ago
        This seems overly pedantic. The intended meaning is clear.
    • bigstrat2003 1 hour ago
      > This argument requires us to believe that AI will just asymptote and not get materially better.

      It hasn't gotten materially better in the last three years. Why would it do so in the next three or five years?

      • bitwize 1 hour ago
        Deep learning and transformers each produced a step-function jump in AI's capabilities. It may not happen, but it's reasonable to expect another step-function development soon.