AI Can Write Your Code. It Can't Do Your Job

(terriblesoftware.org)

70 points | by antfarm 4 days ago

18 comments

  • _pdp_ 6 hours ago
    To be fair, AI cannot write the code!

    It can write some types of code. It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than the average human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.

    If it could write the code, I do not see why it isn't deployed more effectively to write new types of operating systems, or to experiment with new programming languages and programming paradigms. The $3B would be better spent on coming up with truly novel technology that these companies could monopolise with their models. Well, they can't, not yet.

    My gut feeling tells me that this might actually be possible at some point, but at an enormous cost that will make it impractical for most intents and purposes. But even if it were possible tomorrow, you would still need people who understand the systems, because without them we are simply doomed.

    In fact, I would go as far as saying that the demand for programmers will not plummet but skyrocket, requiring twice as many programmers as we have today. The world simply won't have enough programmers to supply. The reason I think this might actually happen is that the code produced by AI will be so vast over time that even if humans need to handle/understand 1% of it, that will require more than the 50M developers we have today.

    • DrewADesign 5 hours ago
      If you’re writing simple code, it’s often a one-shot. With medium-complexity code, it gets the first 90% done in a snap, easily faster than I could ever do it. The problem is that the 90% is never the part that sucks up a bunch of time; it’s the final 10%, and in many cases for me, it’s been more hindrance than help. If I’d just taken the wheel, making heavy use of autocomplete, I’d have done better and with less frustration. Having to debug code I didn’t write that’s an integral part of what I’m building is an annoying context switch for anything non-trivial.
      • hattmall 4 hours ago
        Same... And the errors are often really nonsensical, and nested in ways that a human brain simply would never produce.
      • Gigachad 5 hours ago
        Yeah that’s been my experience. The generators are shockingly good. But they don’t get it all the way, and then you are left looking at a mountain of code you don’t understand. And by the time you do, you could have just built it yourself.
        • pylua 4 hours ago
          Yeah, but you can ask the AI questions about it, so you can understand it faster.
        • bdangubic 4 hours ago
          > you are left looking at a mountain of code you don’t understand. And by the time you do, you could have just built it yourself.

          SWEs who do not have (or develop) this skill (filling in the 10% that doesn’t work and fully understanding the 90% that does, very quickly) will be plumbers in a few years, if not earlier.

    • noremotefornow 6 hours ago
      I’m very confused by this, as in my domain I’ve been able to nearly one-shot most coding assignments since this summer (really since Sonnet 3.5) by pointing specific models at well-specified requirements. Things like breaking down a long functional or technical spec document into individual tasks, implementing, testing, deployment and change management. Yes, it’s rather straightforward scripting, like automation on Salesforce. That work is toast, and spec-driven development will surge as people take their hands off the direct manipulation of symbols representing machine instructions, on average.
      • kankerlijer 5 hours ago
        There is a vast difference between writing glue code and engineering systems. Who will come up with the next Spring Boot, Go, Rust, io_uring, or whatever, once the profession has reduced itself entirely to chasing short-term outcomes?
      • recursive 5 hours ago
        Maybe some day we'll collectively figure it out. I'm confused how people are getting so much success out of it. That hasn't been my experience. I'm not sure what I'm doing wrong.
        • bugglebeetle 4 hours ago
          Try using the brainstorming and execute plan loops with the superpowers plugin in Claude Code. It encapsulates the spec driven development process fairly well.
    • felipeerias 4 hours ago
      This misunderstands what LLM-based tools mean for complex software projects. Nobody expects that you should be able to ask them to write you a whole kernel or a web engine.

      Coding agents in particular can be very helpful for senior engineers as a way to carry out investigations, double-check assumptions, or automate the creation of some parts of the code.

      One key point is to use their initial output as a draft, as a starting point that still needs to be checked and iterated, often through pair programming with the same tool.

      The mid-term impact of this transition is hard to anticipate. We will probably get a wide range of cases, from hyper-productive small teams displacing larger but slower ones, to AI-enhanced developers in organisations with uneven adoption quietly enjoying a lot more free time while keeping the same productivity as before.

    • zingar 6 hours ago
      Without arguing with your main point:

      > (few people like writing unit tests)

      The TDD community loves tests and finds writing code without tests more painful than writing tests before code.

      Is your point that the TDD community is a minority?

      > It does a better job at writing unit tests (not perfect) than the fellow human programmer

      I see a lot of very confused tests out of Cursor etc. that do not understand nor communicate intent. Far below the minimum for a decent human programmer.

      • rhines 5 hours ago
        I see tests as more of a test of the programmer's understanding of their project than anything. If you deeply understand the project requirements, API surface, failure modes, etc. you will write tests that enforce correct behaviour. If you don't really understand the project, your tests will likely not catch all regressions.

        AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks or test data or boilerplate code for tests which you already know need to exist, it's fantastic.
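        The split rhines describes can be sketched in a few lines (every name here is hypothetical, invented purely for illustration): the sample data is the kind of boilerplate an assistant scaffolds well, while the assertions encode the project knowledge that still has to come from a human.

```python
def active_names(users):
    """Hypothetical function under test."""
    return [u["name"] for u in users if u["active"]]

# Boilerplate an assistant scaffolds well: repetitive test data.
SAMPLE_USERS = [
    {"id": 1, "name": "alice", "active": True},
    {"id": 2, "name": "bob", "active": False},
]

# The part that needs project understanding: which behaviours must hold.
def test_inactive_users_are_excluded():
    assert active_names(SAMPLE_USERS) == ["alice"]

def test_empty_input_is_not_an_error():
    assert active_names([]) == []

test_inactive_users_are_excluded()
test_empty_input_is_not_an_error()
```

        Generating the `SAMPLE_USERS` block is mechanical; knowing that the empty-input case matters for this project is the part a tool cannot supply on its own.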

      • g-b-r 3 hours ago
        It's very worrying that this comment was downvoted
    • zingar 6 hours ago
      > it can even find bugs

      This is one of the harder parts of the job IMHO. What is missing from writing “the code” that is not required for bug fixes?

      • csomar 4 hours ago
        LLMs can traverse codebases and do research faster. But I can see this one backfiring badly as structural slop becomes more acceptable, since you can throw an LLM at it and fix the bug. Eventually you'll reach a stage of stasis where your tech debt is so high that you can't pay the interest even with an LLM.
    • AndrewKemendo 6 hours ago
      >It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than the fellow human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.

      Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system, and public health ... what have the Romans ever done for us?

    • csomar 4 hours ago
      > It is fascinating that it can bootstrap moderately complex projects from a single shot.

      Similar to "git clone bigproject@github.git"? There is nothing fascinating about recreating something that already exists in the training set. It is fascinating that the AI can make some variations on the original content, though.

      > If it could write the code, I do not see why not deploy it more effectively to write new types of operating systems, experiment with new programming languages and programming paradigms.

      This is where all the "vibe-coders" disappear. LLMs can write code fast, but so can copy-paste. Most of the "vibe-coded" stuff I see on the Internet is non-functional slop that is super-unoptimized and has open Supabase databases.

      To be clear, I am not against LLMs or embracing new technologies. I also don't have this idea that we have some kind of "craft" when we have been replacing other people for the last couple decades.

      I've been building a game (fully vibe-coded; the rule is that I don't write/read any lines of code) and it has reached a stage where any LLM is unable to make any change without fully breaking it (for the curious: https://qpingpong.codeinput.com). The end result is quite impressive, but it is far from replacing anyone who has been doing serious programming anytime soon.

  • IanCal 6 hours ago
    It doesn’t need to do all of a job to reduce total jobs in an area. Remove the programming part and you can reduce the number of people needed for the same output, and/or bring people who can’t program but can do the other parts into the fold.

    > If OpenAI believed GPT could replace software engineers, why wouldn’t they build their own VS Code fork for a fraction of that cost?

    Because believing you can replace some or even most engineers still leaves space for hiring the best. It increases the value of the best. And this all assumes the present moment - they could believe they have tools coming in two years that will replace many more engineers, yet still hire them now.

    > You sit in a meeting where someone describes a vague problem, and you’re the one who figures out what they actually need. You look at a codebase and decide which parts to change and which to leave alone. You push back on a feature request because you know it’ll create technical debt that’ll haunt the team for years. You review a colleague’s PR and catch a subtle bug that would’ve broken production. You make a call on whether to ship now or wait for more testing.

    These are all things that LLMs are doing with various degrees of success, though. They’re reviewing code, they can push back on certain approaches (I know because I had this with 5.1), and they absolutely can decide which parts of a codebase to change.

    And as for turning vague problems into more clear features? Is that not something they’re unbelievably suited for?

    • zingar 6 hours ago
      > And as for turning vague problems into more clear features? Is that not something they’re unbelievably suited for?

      I personally find LLMs to be fantastic for taking my thoughts to a more concrete state through robust debate.

      I see AI turning many other folks’ thoughts into garbage because it so easily heads in the wrong direction and they don’t understand how to build self checking into their thinking.

  • heliumtera 5 hours ago
    It never was about code. When was the last time you said to yourself, "I feel like grabbing some code right now!"? Or the last time you struggled to accomplish something and then screamed, "if only I had some code!!!!"? This sounds very, very stupid, doesn't it? English is not my native language, but I find this wording very annoying. People and businesses suffer from having needs unattended or requirements unsatisfied. Latency too high? Have some code! Database locked with a long-running query? Here, take some code! You want to price exotic financial assets to calculate your risks? Have you tried generating some code?? This is so strange. I honestly do not think in terms of "code". If your kid asked you about your job, would you say "I use the computer"?
  • mitjam 1 hour ago
    AI accelerates my learning. It helps me understand, do more experiments, and has lured me into reading more code. I think AI also helps me be more productive, but I’m less focused and sooner exhausted. I also fear its addictive potential, which is why I force myself to take breaks more often and try not to use it every day.

    AI fundamentally changed the programming experience for me in a positive way, but I’m glad that it’s not my full-time job. I think it can also have bad effects which cannot be easily avoided in full-time roles under market conditions.

  • aryehof 1 hour ago
    I disagree with much of this. Programming isn't just a tool we use in pursuit of being an “Engineer” or whatever aggrandizing title is applied. I can't help but smile at the pretension of it.

    Currently, AI models are inconsistent and unpredictable programmers, but less so when applied to non-novel, small and focused programming tasks. Maybe that will change, resulting in it being able to do your job. Are you just writing lines of code, organized into functions and modules, using a “hack it till it works” methodology? If so, I suggest being open to change.

    • tarsinge 4 minutes ago
      Professionally, programming is just a means to an end: solving a business problem. Just like hammering nails or turning screws is not a job in itself. The craft is important, but ultimately what people pay for is a final product (a house, a bridge, software that does X, etc.), and our job is to build something that matches their constraints. What you describe is the equivalent of complex, novel construction projects, not what the vast majority of the construction industry is about.

      Building a house is like a CRUD app, it already was a solved problem technically. AI is like prefabs or power tools. If your job and what you were interested in was building houses AI is great. If you were a brick layer not so much.

      Engineer is not an aggrandizing title, it’s the job. Being paid for the hobby of writing code was just an anomaly that AI will close IMO.

  • mikert89 6 hours ago
    Software engineering will get automated; all of the issues with the current models will get worked out in time. People can beg and wish that it's not true, but it is. We have a few more good years left, and then this career is over.
    • Eridrus 4 hours ago
      I think this is a not-insane prediction, but much like truck driving and radiology, the timeline is likely not that short.

      Waymo has been about to replace the need for human drivers for more than a decade and is just starting to get there in some places, but has had basically no impact on demand yet, and that is a task with much less skill expression.

    • heliumtera 5 hours ago
      If you are so sure AI is more competent than you at your job, who am I to disagree
    • bgwalter 6 hours ago
      So far there are mainly horrible "AI-coded" websites that look like they were produced by the Bootstrap framework in 2014. They use 100% CPU in Firefox despite having no useful functionality.

      It is the McDonald's version of programming, except that McDonald's does not steal the food they serve.

      • mikert89 5 hours ago
        like half of engineers are vibe coding full time in their jobs, wake up
        • bgwalter 3 hours ago
          If you think you are right, shouldn't you tell them to wake up? They would be the ones promoting their irrelevance.
          • jondwillis 2 hours ago
            They’re promoting their irrelevance because they don’t see the alternative. A “get yours while you still can” attitude. And they’re probably right.
    • skydhash 6 hours ago
      Why not say that the sky will fall on our heads while you’re at it? /s
  • groceryheist 2 hours ago
    I'm thinking a lot about this currently as a recent convert (as of Opus 4.5). I think this post is on the right track, but like much of this discourse, it isn't really addressing how the technology will grow and how the disciplines will adapt.

    I'm by no means a doomer, but it's obviously a huge change.

    Generative coding models will never be 100% perfect. The speed of their convergence to acceptable solutions will decline in complex and novel systems, and at some point there will be diminishing returns to increasing investment in improving their performance.

    The cost of software will fall precipitously, and it seems unlikely that the increase in the value of programmers/engineers as they currently practice will offset the decline in the price of software. However, following the law of supply and demand, the amount of software produced will surely grow, and someone has to use the models to build it. I expect being trained in software engineering will be very helpful for making effective use of these tools, but such training may not be sufficient for a person to succeed in the new labor market.

    The scope of problem that a valuable engineer is expected to manage will grow enormously, requiring not only new skills in using generative coding/language models, but also in reasoning about the systems they help create. I anticipate growth in crossover PM / engineering roles. I guess that people who generalize across the stack and current sub-disciplines will thrive and valuable specialties and side-disciplines will include software architecture, electrical engineering, robotics, communication, and business management.

    Some people will thrive in this new field, but it may be a difficult transition for many. I suspect that confusion about model capabilities, how to make the most of them, and which people are doing valuable things will put a lot of friction and inefficiency into the transition time-frame.

    Last thought: given how great models are at coding compared to general knowledge, administrative, and bureaucratic work, I expect models will be widely used to build systems that act as supply shocks on such work. I don't think my argument above applies to such workers. I'm worried most about them.

  • chocoboaus3 6 hours ago
    I used AI to build an app just for myself that parses data (using pandas, Python, etc.; not an LLM at runtime, but an LLM coded it) for a report that I need to produce.

    it's purely for myself, no one else.

    I think this is what AI can do at the moment. In terms of mass-market SaaS vibe coding, it will be harder. Happy to be proven wrong.

    • PacificSpecific 5 hours ago
      This is my experience as well. It's been great for making an application that's small in scope, doesn't require access to my main project repo, and is basically a nice-to-have value add for the client.

      I was already quite adept in the language and frameworks involved, and the risk was very small, so it wasn't a big time sink to review the application's PRs. Had I not been, it would have sucked.

      For me the lesson learned wrt agentic coding is to adjust my expectations relative to online rhetoric; it can sometimes be useful for small isolated one-offs.

      Also it's once in a blue moon I can think of a program suitable for agentic coding so I wouldn't ever consider purchasing a personal license.

    • allovertheworld 5 hours ago
      AKA it's the next stage of Stack Overflow/Google search
    • zingar 6 hours ago
      What is it that you feel is missing that would take AI from “just for myself” to “mass market SaaS vibes”?
      • chocoboaus3 5 hours ago
        Being able to properly deal with scale and security. Being able to be confident that if I am capturing PII data in my application, it's as secure as it can be - as secure as if a principal developer had put the architecture together.
        • rhines 5 hours ago
          Mass-market SaaS will generally just use other products to handle this stuff. And if there does happen to be a leak, they just say sorry and move on; there are very few consequences for security failures.
          • chocoboaus3 3 hours ago
            You're right

            but guess who advises that architecture and implements it... the principal developer/architect.

            You can use good security tools, badly.

          • reactordev 5 hours ago
            What use is privacy and security when all our data lives in a DC in us-east-1?
    • chocoboaus3 6 hours ago
      Yes, I cross-referenced the data to ensure it was producing the correct numbers vs my manual methods.
  • seanmcdirmid 5 hours ago
    I used AI to write some Python code, and some Bazel rules to generate more Python code around that, for a new workflow system I wanted to prototype. It just did it; it would make mistakes, but since I had it running tests, it would fix the code after running them.

    The big issue is that I didn’t know the APIs very well, and I’m not much of a Python programmer. I could have done this by hand in around 5 days with a ramp up to get started, but it took less than a day just to tell the AI what I wanted and then to iterate with more features.

  • edg5000 4 hours ago
    > It’s like saying calculators replaced accountants. Calculators automated arithmetic, but arithmetic was never the job. The job was understanding financials, advising clients, making judgment calls, etc. The calculator just made accountants faster at the mechanical part.

    Mechanical and later electrical calculators replaced human calculators. Accountants switched from having to delegate computation to owning a calculator.

    • jondwillis 2 hours ago
      Calculators never threatened to approximate above-median human reasoning and taste, though.
  • rwaksmunski 6 hours ago
    It's decent at explaining my code back to me, so I can make sure my intent is visible within code/comments/tracing messages. Not too bad at writing test cases either. I still write my code.
    • zingar 6 hours ago
      Are you saying that you literally write the features by yourself and that you only use LLMs to understand old code and write tests?

      Or a more meta point that “LLMs are capable of a lot”?

  • Sparkyte 6 hours ago
    Likely won't for a while. The race to buy up all of the memory is likely a squeeze attempt against startups, not consumers; consumers are a side effect.

    We need regulations to prevent such large-scale abuse of economic goods, especially if the final output is mediocre.

  • zingar 6 hours ago
    Reasons why the attempted Cursor acquisition might not be about replicating Cursor (with or without human help): shutting down competition; market share; understanding user behavior; training data
  • arisAlexis 1 hour ago
    Yet? Always be prudent and include this little word.
  • socketcluster 6 hours ago
    There are many different ways to write code. The more code there is, the more possible versions of the system could have existed to solve that same set of problems; each with different tradeoffs.

    The challenge is writing code in such a way that you end up with a system which solves all the problems it needs to solve in an efficient and intuitive way.

    The difference between software engineering and programming is that software engineering is more like a discovery process; you are considering a lot of different requirements and constraints and trying to discover an optimal solution for now and for the foreseeable future... Programming is just churning out code without much regard for how everything fits together. There is little to no planning involved.

    I remember at university, one of my math lecturers once said "Software engineering? They're not software engineers, they're programmers."

    This is so wrong. IMO, software engineering is the essence of engineering. The complexity is insane and the rules of how to approach problems need to be adapted to different situations. A solution which might be optimal in one situation may be completely inappropriate for a slightly different situation due to a large number of reasons.

    When I worked on electronics engineering team projects at university, everyone was saying that writing the microcontroller software was the hardest part. It's the part most teams struggled with, more so than PCB design... Yet software engineers are looked down upon as members of an inferior discipline... Often coerced into accepting the lesser title of 'developer'.

    I'm certain there will be AIs which can design optimal PCBs, optimal buildings, optimal mechanical parts, long before we have AI which can design optimal software systems.

    • zingar 5 hours ago
      Whenever someone gets into Important Reasons why Software Engineering is Different from Programming, I hear a bunch of things that should just be considered “competent programming”.

      > Programming is just churning out code without much regard for how everything fits together

      What you’re describing is a beginner, not a programmer

      > There is little to no planning involved

      > trying to discover an optimal solution for now and for the foreseeable future

      I spend so much time fighting against plans that attempt to take too much into account and are unrealistic about how little is known before implementation. If the most Important Difference is that software engineers like planning, does that mean that being an SE makes you less effective?

      • socketcluster 3 hours ago
        I disagree with the first point because I have worked for quite a few companies on many different projects, and fewer than 10% were doing it even approximately right... And these were all well-regarded companies. I even worked for a Y Combinator-backed company for over a year as a contractor.

        I agree that you shouldn't plan too much, but my experience is that anticipating requirements is possible and highly valuable. It doesn't require planning but it requires intuition and an ability to anticipate problems and adhere to certain principles.

        For example, in my current job, I noticed a pattern that was common to a lot of tasks early on. It was non-obvious. I implemented a module which everyone on my team ended up using for essentially every task thereafter. It could have been implemented in 100 different ways, it could have been implemented at a later time, but the way I implemented it meant that it saved everyone a huge amount of time since early on in the project.

        Also, we didn't have to do any refactoring and were later able to add extra capabilities and implement complex requirement changes to all parts of the code retroactively thanks to this dependency.

        One time we learned that we had to calculate the target date/time differently and our requirements engineer was very worried that this would require a large refactoring to all our processes. It didn't; we changed it in one place and didn't have to update even a single downstream process.

        It was a relatively complex module which required some understanding of the business domain but it provided exactly the right amount of flexibility. Now, all my team members know how to update the config on their own. We haven't yet encountered a case it couldn't handle easily.

        I have similar stories to tell about many companies I worked for. When AI can replace me, it will be able to replace most entrepreneurs and managers.

    • tehjoker 5 hours ago
      I think it’s undeniably an engineering profession when performance and algorithm choice come into play, or the safe control of embedded or industrial devices. There are other contexts where it’s engineering too; it’s just engineering in the sense that someone designing commodity headphones is doing electronics engineering.
  • nextworddev 5 hours ago
    The reason there are meetings is the existing org layers.

    Thus, the root cause of the meetings' existence is mostly BS. That's why you have BS meetings.

    The fastest way to drive AI adoption is thus by thinning out org layers.

    • recursive 4 hours ago
      Not all meetings are status reports. There is also the working meeting where people figure stuff out.
  • gaigalas 5 hours ago
    When we talk about code, you think it's about code, but it's communication _about solving problems_ which happens to use code as a language.

    If you don't understand that language, code becomes a mystery, and you don't understand what the problem is we're trying to solve.

    It becomes this entity, "the code". A fantasy.

    Truth is: we know. We knew it way before you. Now, can you please stop stating the obvious? There are a lot of problems to solve and not enough time to waste.