I'm OK being left behind, thanks

(shkspr.mobi)

609 points | by coinfused 2 hours ago

107 comments

  • stiiv 2 hours ago
    > If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.

    Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think that onboarding might be challenging for some people in the tech's current state.

    It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, and onboarding will get easier. I think it will also become easier to choose a particular suite of products to use. Waiting is not a bad idea.

    • augusto-moura 2 hours ago
      What annoys me a bit is companies forcing AI tools, collecting usage metrics, and actively hunting the engineers who don't use the tool "enough". I've never seen anything like it for a technically optional tool. Even in the past, aside from technical limitations, you were never required to use "enough" of a tool.

      It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.

      • HolyLampshade 1 hour ago
        > I've never seen anything like it for a technically optional tool

        Cloud had a very similar vibe when it was really running advertising to CIO/CTOs hard. Everything had to be jammed into the cloud, even if it made absolutely no sense for it to be run there.

        This seems to come pretty frequently from visionless tech execs. They need to justify their existence to their boss, and thus try to show how innovative and/or cost cutting they can be.

        • aquariusDue 16 minutes ago
          That and microservices in lieu of a monolith. Or how about being the odd one out a few years ago when suggesting an MPA instead of a SPA when it made sense. I like to think we're back at the point where everybody was rebuilding their portfolio website with Angular 1, except this time it's Claude Code and a SaaS instead.
        • bee_rider 53 minutes ago
          “Cloud” seems like a better comparison than stuff like cryptocurrency. AI seems totally over-hyped but with some obvious sensible use-cases.
        • rurp 1 hour ago
          I agree with you but man the absurdity of aggressively pushing cloud or AI adoption as a cost cutting move is off the charts.
        • genthree 1 hour ago
          I think this is the result of a C-suite that has never actually done the work of the businesses they're running. The MBAification of management. Of course they constantly do brain-dead shit; they literally don't have a clue how anything actually works in "their" own business.
          • stackbutterflow 1 hour ago
            They've been vibe-driving businesses long before we've started vibe-coding software.
            • gopher_space 27 minutes ago
              There weren't really any failure states for the ZIRP "lifestyle CEO". If you remember the old black-and-white footage of pigeons from Psych 101, it's been that level of conditioning for how many years now?

              If your CEO doesn't look like a taxi dispatcher he's just moving his wings around waiting for a food pellet.

      • hibikir 1 hour ago
        It's using a bad tool to aim at something reasonable-ish: developers not taking advantage of the tools in places where it's very easy to get use out of them. I have coworkers like that: one spent 3 days researching a bug that Claude found in 10 minutes after being pointed at the logs in the time window and the codebase. And he didn't even find the bug, while Claude nailed it in one.

        But is this something that is best done top to bottom, with a big report, counting tokens? Hell no. This is something that is better found, and tackled, at the team level. But execs in many places like easy, visible metrics, whether or not they are actually helping. And that's how you find people playing JIRA games and such. My worst example was a VP who decided that looking at the burndown charts from each team under them, and using their shape as a metric, was a good idea.

        These are all natural signs of a total lack of trust, and of thinking you can solve all of this from the top.

        • sarchertech 1 hour ago
          The thing is we’ve always had people who spend more time on their tooling or learn different tools and perform better.

          I’ve seen people use Notepad, and I’ve seen people who are so good at vim that they look like they’re editing code directly with their mind.

          Your particular example is extreme, and my guess is the coworker is just not great at debugging. I use Claude all the time for finding bugs, but it fails fairly frequently. I think there’s probably an advantage to having some people who don’t use it that often, so you have someone to turn to when it fails.

          I’m definitely not exercising my debugging skills as much as I used to and I’m fairly confident they’ve atrophied.

      • jacobsenscott 1 hour ago
        > It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.

        This is exactly what's happening. The top 5 or 6 companies in the S&P 500 are running a very sophisticated marketing/pressure campaign to convince every C-suite downstream that they need to force AI on their entire organization or die. It's working great. CEOs don't get fired for following the herd.

        • toomuchtodo 1 hour ago
          ~40-50% of the S&P500 rely on this continuing.

          S&P 500 Concentration Approaching 50% - https://news.ycombinator.com/item?id=47384002 - March 2026

          > No, of course there isn't enough capital for all of this. Having said that, there is enough capital to do this for at least a little while longer. -- Gil Luria (Managing Director and Analyst at D.A. Davidson)

          OpenAI Needs a Trillion Dollars in the Next Four Years - https://news.ycombinator.com/item?id=45394071 - September 2025 (8 comments)

          • karmakurtisaani 38 minutes ago
            Elon Musk is planning to put his AI company into the SpaceX IPO, and accelerate getting it into the major indices, effectively making pension funds, banks and individual investors his bag holders.

            Patrick Boyle has a video on this in case you care for the details.

      • bondarchuk 33 minutes ago
        >I've never seen anything like it for a technically optional tool

        If you broaden the comparison (only a little bit) it looks suspiciously like employees being forced to train their own replacement (be that other employees, or factory automation), a regular occurrence.

      • luisgvv 1 hour ago
        I'm just using Copilot CLI for mindless stuff and setting it to the premium models to meet the quota. As long as they can't see the prompts, I think I should be fine.
        • butlike 1 hour ago
          You're not going to get fired. Don't worry about it :)
      • jimmyjazz14 1 hour ago
        Yeah, I found this strange as well: if the tech is so amazing, why do developers need to be forced to use it?
        • dash2 1 hour ago
          Maybe there's a positive externality: your individual learning percolates to others and benefits the firm as a whole.
          • whoknowsidont 1 hour ago
            What is there to learn? If anything, developers are still the ones training and enhancing the models by giving them more feedback cycles on what works and what doesn't.
        • debatem1 1 hour ago
          I'm encouraging my folks to try it pretty hard because A) I've personally seen the productivity gains and B) using it is at first deeply weird/uncomfortable. Sometimes you've got to convince people to push through that kind of thing.
          • toomuchtodo 1 hour ago
            How are you objectively measuring success?

            93% of Developers Use AI Coding Tools. Productivity Hasn't Moved. - https://philippdubach.com/posts/93-of-developers-use-ai-codi... - March 4th, 2026

            • nightski 44 minutes ago
              They measured 16 developers and called it a "study"? That is amusing. Not to mention it was conducted almost a year ago, the tools have already changed dramatically.
              • tehjoker 34 minutes ago
                So just run a new study this year. I do think the tools have improved, but it should show up empirically. The only people for whom the urgency of "right now" is real are the C-suite and investor class, who are fighting to make sure they survive, but it might also be a crisis of their own making. Don't confuse your identity as a worker with the identity of the capitalist class.
      • layer8 1 hour ago
        > I've never seen anything like it for a technically optional tool.

        It has often been the case for technologies though, like “now we’re doing everything in $language and $technology”. If you see LLM coding as a technology in that vein, it’s not a completely new phenomenon, although it does affect developers differently.

        • augusto-moura 1 hour ago
          Language and technology choices are normal, but we are talking about a metered code editor. Nobody has ever asked me to "use X hours of IntelliJ IDEA" in the past, or to "use git enough or be fired". Tools are never required to be used when they are not needed.
        • kjkjadksj 1 hour ago
          Well, the tech or the language had some feature that led you to using it. By definition, LLM coding doesn’t. It is like the job requirement turned into “ask Jeff to write all your code, and if you don’t we won’t hire you.”
          • layer8 1 hour ago
            Technologies were imposed by management whether it made sense or not. Like, “all data exchange formats now have to use XML”, or “all applications must be J2EE now”, because it was the new hot thing. “You” weren’t making that choice, management imposed it. That’s the parallel I’m drawing.
      • afpx 1 hour ago
        It's really insane what is happening. My wife manages 70 software developers. Her boss mandated that managers replace 50% of the staff with AI within a year. And, she's scrambling trying to figure out if any of the tools actually work and annoying her team because she keeps pushing AI on them. Unsurprisingly it's only slowed things down and put her in a terrible position.
        • throwaway29130 1 hour ago
          Brutal. But probably all too common. One of my clients has very suddenly gone all-in on agentic AI and they're in this crazy hurry. (Probably the most annoying part is they want to automate stuff that I built a POC for using GPT-4o, two years ago - at the time they saw no use for it, but now they're all-in on the hype.)

          This started literally two weeks ago and a couple of days ago I talked to one of the admin people who wanted an update on the progress I'd made with sanding off some of the rough edges of the very rough implementation that the managing partner had put in place (he bought a Mac Mini, put OpenClaw on it, then gave it admin access to a whole pile of stuff!) I said I needed a couple more days. "Okay," she said, "but I need this quickly, because we're firing people next week."

          They have literally gone from no agentic AI, to discovering OpenClaw, to firing people, in a two-week time span.

          When economists say that the predicted job losses as a result of AI have not yet shown up in the data, I'm genuinely befuddled. Either we don't have long to wait to start seeing them, or there's something wrong with the data, because you can't tell me what I just described above is an isolated phenomenon.

          I also have to say: I've always enjoyed working with this client, but this experience has been a huge turnoff on a number of different levels.

          • genthree 1 hour ago
            For a non-tech case of this, my wife worked at a place that fired like 80% of their writers in anticipation of huge speed-ups they expected from LLMs, a couple years ago.

            They had to hire a bunch of them back less than two months later. The speed-ups were approximately nil and making the editors edit AI slop all day long had them all close to quitting.

            They didn't even wait to see if there were any actual benefits, they just blindly fired a bunch of people based on marketing lies. I can only assume they're the same sorts who fall for Nigerian Prince scams.

        • graemep 1 hour ago
          Maybe what they really want her to do is get rid of 50% of her staff and the AI is just an excuse? In that case she should focus on "who can we do without?" rather than "how can we replace people with AI?"
        • komali2 1 hour ago
          > Her boss mandated that managers replace 50% of the staff with AI within a year

          I bet we could replace nearly all the CEOs in the country with chatgpt controlling a ceo@thatcompany.com email and nobody would notice.

          • bikelang 1 hour ago
            We’d probably get better outcomes too.
      • whateveracct 1 hour ago
        Yes it's very weird - why is my CEO being so nosy about my text editor all of a sudden? Stay in your lane, buddy.
      • genthree 1 hour ago
        We're doing that in my office: forced Cursor use. A good chunk of the "edited by AI" lines in my history were just auto-completions, about the same as a traditional IntelliSense-alike would do. (And actually Cursor doesn't seem to supply that, which is frequently annoying and wastes my time, in particular when I need to make sure it hasn't hallucinated a method or property on an object whose definition it should be able to "see", which it does constantly. IDK, maybe there's a setting somewhere, but I don't have to fiddle with settings in vanilla VSCode to get that...)

        It's actually kinda useful in some cases, but the UI is terrible, and it needs to integrate much better with existing tools that are superior to it for specific purposes before I'll be happy using it. I'd say the productivity gains are a wash for me, so far. Plus it's entirely too memory-hungry; I'd just come to accept that a text editor takes a couple GB now (SIGH), and here it comes taking way more than that.

      • diehunde 1 hour ago
        As an employee of a big tech company doing this, it's all fear mongering. We are being told that if everyone doesn't use these tools, our competitors will wipe the floor with us because they are using them and will ship features 10x faster. But many engineers are suspicious as well.
        • zrail 6 minutes ago
          It's baffling, to be honest. I'm at a fintech that is currently pushing very hard at this, but in the same breath talking about how we're not a pure software play. I just don't understand where they're coming from.
      • zephen 1 hour ago
        I don't doubt you, but I'm out of the loop.

        Who does this?

        • forgetfulness 1 hour ago
          My uncle leads IT support teams; the org is measuring AI use in writing reports and tickets. The org has very poorly structured and obsolete processes (he's trying to straighten them out as he goes), and AI will probably amplify the lack of structure by making it easier for the work to _look_ as if someone carefully reviewed the issues and followed procedure.

          A friend is a team lead in an org that's mandating vibecoding via "Devin", a lesser-known player that an "architect" chose after a shallow review. The company also has endemic process issues and simply can't do deployments reliably; it's behind the times in methodology in every other respect. Higher-ups are placing their trust in a B-list agentic tool instead of fixing the problems.

          Anyway, I wouldn't be caught dead working at either of those two shops even before the AI rollout, but this is what's going on in the IT underworld.

          • genthree 1 hour ago
            I hate the AI assistants for ticket-writing. The beneficial use there would be to prompt for possibly-useful information that's not present, or call out ambiguity and let the writer decide how to resolve any of that. Coaching, basically. Suggesting actual text to include, for people who aren't already excellent at ticket-writing, just leads to noisier tickets that take more work to understand ("did they really mean this, or did the LLM just prompt them to include it and they thought sure, I guess that's good?")

            [EDIT] Oh, and much of your post rings true for my org. They operate at a fraction of the speed they could because of organizational dysfunction and failure to use what's already available to them as far as processes and tech, but are rushing toward LLMs, LOL. Yeah, guys, the slowness has nothing to do with how fast code is written, and I'm suuuuure you'll do a great job of integrating those tools effectively when you're failing at the basics....

            • forgetfulness 1 hour ago
              Lots of organizations don't want to accept that their velocity issues are quality issues. It's often a view held by an old guard that was there when the business experienced growth by adding features, while not having to bear any maintenance burden. The people who remain are either also oblivious to this, or simply have stopped caring.

              LLM-generated code hits all the right notes: it's done fast, in great volumes, and it even features what the naysayers were asking for. Each PR has 20 pages of documentation and adds some bulk to the stuff in the tests folder, which can sit there looking pretty. How wonderful! Hell, you can even now do that "code review" that some nerd was always complaining about: just ask the bot to review it and hit that merge button.

              Then you ask the bot to generate the commands again for the deploy (what CI pipeline?) and bam! New features customers will love. And maybe data corruption.

        • adelie 50 minutes ago
          my company (mid-size, publicly traded) is mandating [x] hours spent on AI per week. i have no idea how they're planning on measuring this, and as far as i can tell, neither does management.

          suppose it's better than counting lines of code, though.

      • mh- 1 hour ago
        The only thing I've mandated for engineers is that folks give it a try occasionally, as models, best practices, and tooling improve.

        I'm currently tracking exactly two numeric metrics: total MAUs (to track the aforementioned) and total DAUs (to gauge adoption and rightsize seat-licensed contracts).

        • jrjeksjd8d 1 hour ago
          Why do you care so much? If these are really revolutionary tools that vastly optimize work, why bother forcing people to "try new models and best practices"?

          If the benefit is there people will use it or get left behind, there's no sense having a mandate that people resentfully try the new tooling.

          Imagine you had a developer who writes Java using vim. It sounds insane but they are just as productive as everyone else. Then you mandate they have to try IntelliJ every quarter, just to see if maybe they like it now. You're just going to piss them off and reduce their productivity by mandating their workflow.

          FWIW in the face of these kind of mandates I have been using tokens but ignoring the output. So it's costing my employer money and they have a warped metric of whether the tool is actually useful.

          • mh- 1 hour ago
            > FWIW in the face of these kind of mandates I have been using tokens but ignoring the output. So it's costing my employer money and they have a warped metric of whether the tool is actually useful.

            What you're actually doing here, from my POV, is incentivizing your employer to use more invasive metrics when they tried to stay hands-off and mandate the absolute bare minimum of "uh, give it a shot and see if you think it's useful right now."

            The analytics that Claude Enterprise exposes are far more intrusive than I would want to be subjected to as an engineer, so I rolled out a compromise. I don't even track who the active users are, currently.

            But maybe you're right, and there are enough people sabotaging the metrics out of spite that there's a reason they provide the other data.

            I hope that the engineers in my org are more mature than that, and would be willing to just say "I'm not currently using it", but thanks for giving me something to think about.

            • ryandrake 16 minutes ago
              > mandate the absolute bare minimum of "uh, give it a shot and see if you think it's useful right now."

              That’s not the bare minimum, though. The bare minimum is: “if you are meeting or exceeding your job expectations, great work, keep using the tools that are working for you.”

              To a productive employee, merely saying “just try out AI, it might help” feels like the boss saying “just try out astrology or visit a psychic for a reading. You might find it interesting.”

            • kaffekaka 21 minutes ago
              I think one side of the issue folks are having is that, combined with the mandate to use these tools, there is also an expectation or assumption that developers will instantly get X% more productive. Like, "you must use this tool and you will be twice as productive".

              Where I work there has certainly been that kind of discussion: "we need to use AI for this, because, no offense, but you are simply not fast enough". And this from people who do not understand software development and have never worked with it. They have only read the online stuff about 20X speeds and FOMO. (And my workplace is generally quite laid back and reasonable. I am sure many other places are much more aggressively steered.)

          • ianm218 1 hour ago
            > Why do you care so much? If these are really revolutionary tools that vastly optimize work, why bother forcing people to "try new models and best practices"?

            If AI makes an employee 10X more productive they get a slight pay raise maybe, but the company makes substantially more money or gets substantially more output. So there is a large difference in incentives.

            • mh- 1 hour ago
              This is true, though I believe savvy employees have leverage to ensure they participate in a larger share of that upside. As you can see from other comments, lots of people will just drag their heels and not give it a good-faith attempt, so it'll often average out in the way you predict.
        • tjpnz 1 hour ago
          Making the tools available is one thing, but saying you're mandating their use at any level sounds like micro management to me. How would you feel if one of your subordinates started telling you how to do your job? I'm sure you would be mightily pissed off about it.
          • mh- 1 hour ago
            I don't think telling people what their job is counts as micro management. Part of their job right now is staying abreast of technological developments and experimenting with new ways of working.

             Re: some of them being upset about it: probably. Some people are also upset about being required to use Jira. I personally dislike using Okta.

            • skydhash 53 minutes ago
              It is micromanagement. If the job is not getting done, the best way is to investigate what current practices are blocking people from doing it (the answer is probably meetings and bad communication). The worst way is to present a tool as a silver bullet for tasks you’re not doing and not accountable for.
              • mh- 48 minutes ago
                Where am I presenting the tool as a silver bullet? You seem to be confusing me with someone else in this thread, or making the mistake of turning this into a polarized conversation of "AI is a panacea" vs "AI is worthless".

                I engaged in the thread in good faith, and am transparent about what I'm doing and why. I also clarified that part of the job in my org is experimenting with these tools.

                • skydhash 20 minutes ago
                  The complaint in the thread is that management is forcing AI tooling usage. If part of your job is to experiment with these tools, then, like any experiment, the correct way is to share the results in a report detailing the methodology and findings. But no one is doing that, AFAIK. It’s all superlatives.
        • Henchman21 1 hour ago
          Whats your plan for when someone flatly refuses?
          • mh- 1 hour ago
            I'll cross that bridge when I get there. No one who works for me has refused to be paid to try out a new technology when I ensure the time is set aside for them to do so.
      • jmalicki 1 hour ago
        I've also never seen an optional tool become a step change like this

        Even moving from assembly language to compiled languages was not as much of a step change.

    • nemomarx 2 hours ago
      It also seems like skills with particular tech (prompt engineering, harnesses, mixture-of-experts setups) don't necessarily pay off when there's a sea change. Hard to predict what you'll want in a few years anyway, right?
      • Aurornis 1 hour ago
        > (prompt engineering, harnesses, mixture of experts set ups)

        Prompt engineering as a specific skill got blown out of proportion on LinkedIn and podcasts. The core idea that you need to write decent prompts if you want decent output is true, but the idea that it was an expert-level skill that only some people could master was always a lie. Most of it is common sense about having to put your content into the prompt and not expecting the LLM to read your mind.

        Harnesses aren’t really a skill you learn. A harness is how you get the LLM to interact with something. It’s also not as hard as the LinkedIn posts imply.

        Mixture of Experts isn’t a skill you learn at all. It’s a model architecture, not something you do. At most it’s worth understanding if you’re picking models to run on your own hardware but for everything else you don’t even need to think about this phrase.

        I think all of this influencer and podcast hype is giving the wrong impression about how hard and complicated LLMs are. The people doing the best with them aren’t studying all of these “skills”, they’re just using the tools and learning what they’re capable of.
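
        For what it's worth, the "harness" part really is that simple in principle: it's just a loop that runs the model's tool calls and feeds the results back until the model answers. A minimal sketch with a stubbed-out model — every function and tool name here is invented for illustration, not any vendor's API:

```python
# Minimal sketch of an LLM "harness": a loop that executes the model's
# tool calls and feeds results back until it produces an answer.
# fake_model is a stub standing in for a real LLM API call.

def fake_model(messages):
    # Stub: request a tool on the first turn, answer once results arrive.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "read_file", "arg": "app.log"}
    return {"type": "answer", "text": "The timeout is logged in app.log"}

TOOLS = {"read_file": lambda path: f"(contents of {path})"}

def run_harness(prompt, model=fake_model, max_steps=5):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = model(messages)
        if reply["type"] == "answer":
            return reply["text"]
        # Run the requested tool and append its output for the next turn.
        messages.append({"role": "tool",
                         "content": TOOLS[reply["tool"]](reply["arg"])})
    return "(step limit reached)"

print(run_harness("Why is the service timing out?"))
```

        Real harnesses add streaming, error handling, and sandboxing, but the control flow is this loop; that's why it's a thing you wire up once rather than a skill you study.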

      • bdcravens 2 hours ago
        In my experience (and this may be confirmation bias on my part), casting a wide net and trying out new tech, while you maintain depth in the area relevant at the time, makes you ready for what's coming, even when you don't know what that may be.
        • zephen 1 hour ago
          Curiosity is good and helps with your personal development, for sure.

          OTOH, tfa specifically said:

          > I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I'm utterly content to wait until their hype has been realised.

          So, it's not like he's being deliberately ignorant; rather, he's simply deliberately slow-walking his journey.

      • bonesss 1 hour ago
        Past the sea change: half the reason those prompt and harness solutions seem to work is LLM lies; the testing is gassing you up about how it works and its efficacy, defaulting to ‘yes’.

        If you test specific features of those solutions over time, you see very inconsistent results, lots of lies, and seemingly stable solutions that one-shot well but suddenly experience behaviour changes due to tweaks on the backend. Tuesday's awesome agent stack that finally works is behaving totally differently on Thursday, and debugging is "oh, sorry, it's better now" even when it isn't. Compression, lies, and external hosting are a bad combo.

        Sometimes I imagine a world where computers executed programs the same way each time. You could write some code once and run it a whole calendar month later with a predictable outcome. What a dream, we can hope I guess.

        • skydhash 50 minutes ago
          People are doing toy projects and praising them, while others are testing them in real-world situations and not finding them that useful. But the former are labelling the latter as luddites and telling them they will be left behind.
      • dw_arthur 2 hours ago
        Even two or three years ago I had ideas for projects but I could see the models were not ergonomic for my uses. I decided to wait for better models and sure enough the agentic models showed up which are much easier to use.

        Next thing I'm waiting on is building a new server for a powerful locally hosted LLM in 5 years. No need to go through the headaches and cost of doing it now with models that may not be powerful enough.

      • stiiv 2 hours ago
        Agreed! Investing lightly at this stage seems smart if your time/attention budget is tight.
      • dakolli 2 hours ago
        All of these occult skills that we literally can't explain why they work are akin to gamblers' superstitions. If I write something this way, it works. It's like a gambler who thinks the order in which they push the buttons on the slot machine makes a difference.

        It's kind of weird that these tools also incorporate the UX design of addictive gambling games. They're literally allowing you to multiply your output: 3x, 4x, 5x (run it 5 times for a better shot at a working prompt). You're being played by billionaires who are selling you a slot machine as a thinking machine.

    • II2II 1 hour ago
      I almost entirely agree with the author's assessment of new technology. Yet that statement rubbed me the wrong way.

      Sometimes it is better to get into things early, because they grow more complex as time goes on, and it is easier to pick them up early in their development. Consider the Web. In the early days, it was just HTML. That was easy to learn. From there on, it was simply a matter of picking up new skills as the environment changed. I'm not sure how I would deal with picking up web development if I started today.

      • mekoka 43 minutes ago
        "It will grow more complex" is never a good reason to get into things early. It's just your mind playing FOMO tricks on you.

        Many developers who picked up the web in the early years struggle with (front-end) web development today. It doesn't matter whether they fetched jQuery or MooTools from some CDN as it was done in the mid-00s. Once the tooling became too complicated and ever-changing, they couldn't keep up as front-end dilettantes. It required committing as professionals.

        If you started today, you'd simply learn the hard way, as it's always been done: get a few books or register for a course. Carve some time every day for theory and practice. All the while prioritizing what matters the most to get stuff done quickly right now, with little fluff. You will not learn Grunt, Bower, and a large array of historic tech. You'll go straight for what's relevant today. That applies to abstractions, frameworks, and tooling, but also to the fundamentals. You'll probably learn ES6+ and TS, not JS WAT. A lot of the early stuff seems like an utter waste of time in retrospect.

        This is true for all tech. If you knew nothing about LLMs by the end of this year, you could find a course that teaches you all the latest relevant tricks in 5 to 10 hours for 10 bucks.

        • bdangubic 35 minutes ago
          > Once the tooling became too complicated and ever changing they couldn't keep up as front-end dilettante. It required to commit as professionals.

          The best professionals did not fall for the insanity of the modern front-end dilettante and continued hacking shit without that insanity.

          > You will not learn Grunt, Bower, and a large array of historic tech. You'll go straight for what's relevant today.

          which will be outdated "tomorrow," just as Grunt/Bower et al. are looked at today

          > A lot of the early stuff seems like an utter waste of time in retrospect.

          This could not be further from the truth. If you learned JavaScript early, like really learned it, that mastery gets you far today. The best front-end devs I know are basically JavaScript developers; everything else is "tech du jour" that comes and goes, and the less of it you invest in, the better off you'll be in the long run.

          > If you knew nothing about LLMs by the end of this year, you could find a course that teaches you all the latest relevant tricks in 5 to 10 hours for 10 bucks.

          Hard disagree with this unless you are doing simple CRUD-like stuff

      • ashwinsundar 1 hour ago
        This isn't a good example - people were completing 6-month bootcamps and getting $100k offers to do web development not too long ago, decades after the web and HTML took off. After a few years they were making as much as anyone who learned HTML and Web 1.0 back in the 90s.

        Are the bootcampers better developers? Probably not. But they were still employable and paid roughly the same.

      • apsurd 1 hour ago
        The web/HTML is a great analogy. I too am in no rush to be hyper-effective with LLMs. In fact I want to deliberately slow down because AI-native coding is so exhausting.

        That said, your point about the leverage of learning HTML and the web in the early days compared to now rings true. Pre-compiled isomorphic TypeScript apps are completely unrecognizable from the early days of index.html.

      • topaz0 1 hour ago
        And yet, at some point most web developers will have picked it up after the "raw html" era -- that point has probably come, even.
    • vablings 1 hour ago
      There really isn't anything special to using AI anyway; it's not rocket science. Sometimes I will use AI to write me some Tailwind tags, sometimes I will use AI to write me a static site for a custom report.

      Most of my AI usage comes from doing things I don't enjoy doing, like making a series of small tweaks to a function or block of code. Honestly, I just levelled the playing field with vim users, and it's nothing to write home about.

    • agentultra 16 minutes ago
      One area where it may end up leaving you behind is if you’re looking for a job right now. There are a lot of companies putting vibe coding in their job requirements. The more companies that do this the harder it will be to find employment if you’re not adopting this tool/workflow.
    • gradus_ad 1 hour ago
      But it's so easy to try something like Claude Code. It's not like you need to get up to speed. There is no learning curve*, that's the nature of AI. Just start using it and you'll see why it has attracted so much hype.

      *I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense.

      • JohnFen 1 hour ago
        > There is no learning curve*, that's the nature of AI.

        There isn't? Then why is it that whenever devs have tried it and not achieved useful results, they're told that they just haven't learned how to use it right?

        • laserlight 1 hour ago
          “You're holding it wrong.” is the most common response I get, when I talk about problems I had with LLM-assisted coding.
        • bigstrat2003 50 minutes ago
          Because the AI bros hyping it up are incapable of admitting that the hype is overblown. That would mean they have nothing to sell you, so of course they aren't going to say that.
      • we_have_options 1 hour ago
        I've been playing with it on weekends for the last few months. 9 out of 10 projects, it's failed.

        Projects as simple as "set up a tmux/vim binding so I can write prompts in one pane and run claude in the other". Fails.

        I've been coding for over 20 years.

        If there is no learning curve, why doesn't it work for me? You can't say I'm not using it right, because if that was true, then all I need to do is climb the learning curve to fix that, the curve that you say doesn't exist.
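        For context, the binding described above is only a couple of lines of config. Here is a minimal sketch; the mapping key and the assumption that Claude Code runs interactively in the pane to the right are mine, and the {right-of} pane target needs tmux ≥ 2.6:

        ```vim
        " Send the visual selection to the tmux pane on the right,
        " where Claude Code is assumed to be running interactively.
        " Going through a tmux paste buffer avoids shell-quoting the prompt text.
        xnoremap <leader>cc :w !tmux load-buffer - ; tmux paste-buffer -t "{right-of}"<CR>
        ```

        Select a prompt in visual mode and press <leader>cc, then confirm in the Claude pane; appending a `tmux send-keys -t "{right-of}" Enter` to the mapping would submit it automatically.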

        • hombre_fatal 1 hour ago
          I've used Claude Code to do everything from vibe-code personal apps including a terminal on top of libghostty to building my perfect desktop environment on NixOS (I'd never used Nix until then).

          I'm not sure why it isn't working for you. Maybe your expectation is a perfect one-shot or else it has zero value, and nothing in between?

          But my advice is to switch gears and see the "plan file" as the deliverable that you're polishing over implementation. It's planning and research and specification that tends to be the hard part, not yoloing solutions live to see if they'll work -- we do the latter all the time to avoid 10min of planning.

          So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file.

          From there you can read, review, and tweak the plan file. Or have it implement it. Or you implement it. But the idea is that an LLM is useful at this intermediate planning stage without tacking on additional responsibilities.

          I think by "no learning curve" they are referring to how you can get value from it without doing the research you'd need to use a conventional tool. But there is a learning curve to getting better results.

          I learned my plan file workflow just from Claude Code having "Plan Mode" that spits out a plan file, and it was obvious to me from there, but there are people who don't know it exists nor what the value of it is, yet it's the centerpiece of my workflow. I also think it's the right way to use AI: the plan/prompt is the thing you're building and polishing, not skipping past it to an underspecified implementation. Because once you're done with the plan, then the impl is trivial and repeatable from that plan, even if you wanted to do the impl yourself.

          I'm way past the point of arguing anything here, just trying to help.

        • gradus_ad 1 hour ago
          Did it not work after the first try and you gave up? Did it not produce any usable code that you could hand tweak or build off of? I want to understand your definition of "failed" here.
          • laserlight 1 hour ago
            What's your definition of "working"? Do you consider it working, when you have to put more effort into prompting back-and-forth than writing it the old way?
        • 6DM 1 hour ago
          It doesn't work if you're treating it like a peer engineer. It only works if you treat it like you're a customer with no concern with how it works behind the scenes.

          That's what's being asked of me in my last two jobs. Vibe code it; if it's bad, just throw it away and regenerate it, because it's "cheap". The only thing that matters is that you can quickly generate visible changes and ship them to market.

          Out of frustration I asked upper management (in my current job), if you want me to use AI like that then I'll do it. But when it inevitably fails, who is responsible? If there's no risk to me, I will AI generate everything starting today, but if I have to take on the risk I won't be able to do this.

          Their response was that AI generates the code, I'm responsible for reviewing it and making sure it's risk free. I can see that they're already looking for contractors (with no skin in the game) that are more than willing to run the AI agents and ship vibe code, so I'm at a loss on what to do.

        • bigstrat2003 54 minutes ago
          > If there is no learning curve, why doesn't it work for me?

          Because LLMs are not actually good at programming, despite the hype.

        • Kiro 12 minutes ago
          Failing 9 out of 10 times for such simple tasks is indeed puzzling. I have no idea what you're doing to achieve that but I'm impressed.
      • adriand 1 hour ago
        I think working with the technology gives you powerful intuitions that improve your skill and lead to better outcomes, but you don't really notice that that's what's happening. Personally speaking - and I suspect this is true of most people in general - I have very poor recollections of what it was like to be really bad/new at things that I am now very skilled at.

        If you try teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did.

      • hombre_fatal 1 hour ago
        I think it comes down to your own personality, appetite, and also how external factors like hype might impact you (resent, annoyance, curiosity, excitement).
      • ErroneousBosh 32 minutes ago
        I tried it. Either I don't know how to use it, or it just doesn't work.
      • nDRDY 1 hour ago
        Then what is the point? If what I'm doing can be done by Claude, as operated by someone who "doesn't need to get up to speed", then I really need to look at another career.
    • theptip 1 hour ago
      Ok, here is the risk of being left behind - if we have moderately fast take-off, the 1-2 years required to upskill in AI might mean you find yourself unemployable when your role gets axed.

      I don’t think folks are taking seriously the possible worlds at the P(0.25) tail of likelihood.

      You do not get to pick up this stuff “on a timescale of my choosing”, in the worlds where the capability exponential keeps going for another 5-10 years.

      I’m sure the author simply doesn’t buy that premise, but IMO it’s poor epistemics to refuse to even engage with the very obvious open question of why this time might be different.

      • msabalau 1 hour ago
        But they have engaged with it, and made an assessment about its current utility.

        We have no reason to believe that they won't keep an eye on this.

        Little to nothing about AI tools so far suggests that one can't just as easily pick up the skills later. Tools that will get "exponentially better" will almost certainly be unrecognizable to someone desperately engaging with them now for no other reason than the sake of "having 1-2 years of experience".

        Someone might reasonably choose to bet on the upside. That doesn't imply that everyone else ought to fearfully hedge.

      • duskdozer 36 minutes ago
        Eh, I'm not super worried. After all, every six months or so, the latest model changes everything and the former model was complete garbage. It's not just a new model—it's a new paradigm shifting the landscape of agentic development.
      • SpicyLemonZest 1 hour ago
        I don't think there's such a thing as a "fast take-off" where human experience with 2026-era LLM coding remains economically relevant.
      • fatata123 1 hour ago
        [dead]
    • garyfirestorm 1 hour ago
      Counterpoint: it's always advantageous to learn and grow as things evolve. This way you have an active role and maybe a say in how it will evolve. And maybe you could contribute towards that evolution (despite poor execution, OpenClaw showed what LLMs could be doing).

      > There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate.

      Not long ago we were ridiculing genZ for not knowing why save icon looks like a floppy disk.

      Do you want to feel like that in the next 5-10 years?

      • td2 1 hour ago
        The counterpoint is that you will learn jank.

        If you started in early webdev, you learned lots of tricks that don't benefit a modern webdev: e.g. SOAP, long polling, the JSONP workaround, and so on.

        Many of the LLM frameworks will be seen similarly. MCP is already kinda heading in the obsolete direction imo, as skills take over.

        • skydhash 43 minutes ago
          I’ve learned a lot of stuff that doesn’t really benefit me right now, but now and then I encounter a situation that makes me happy that I did. It may never happen for some, but at the time, I was probably happy learning it.

          But there’s some stuff that I don’t bother to explore in depth, because my time is finite and I don’t really need it. And any LLM tooling is probably easier than a random JS framework. Vim’s documentation is probably longer than Cursor’s.

      • marcd35 1 hour ago
        I agree with this point. There is absolutely a 'left behind' gap that is under-explored.

        My last job was as a cable technician - making house calls to fix wifi, satellite TV, and phone issues. Mostly elderly residents. The majority of them were computer and phone illiterate. They were slow adopters of the fast-moving technology, and many of them did not know how to operate their devices after we (the UI/UX/hardware/software engineer 'we') removed them.

        I wonder if this has also contributed to the elderly loneliness problem - sure, it's probably mostly related to physical companionship, acceptance of aging, etc., but the world that they knew (in general, and the technological world they grew up in) is no longer recognizable.

        • skydhash 36 minutes ago
          But maybe it doesn’t matter that much to them. I don’t know how to skin a rabbit, but that knowledge could be handy in some situations. But I don’t see myself being in that situation other than accidentally.

          My mother has a phone, but only uses it to call. She has never needed a computer, even though I spent my teenage years glued to one. But I have like 1 percent of a skill in cooking.

          • ryandrake 0 minutes ago
            Exactly. We look at older people and think “oh, look at those poor souls. They don’t know X and Y technologies and they keep doing things the old way! They must feel so left behind.” Nothing is further from the truth. My whole life I’ve lived in neighborhoods full of people 20+ years older than me, and not once did I have a neighbor or friend who I thought was overwhelmed with life and upset about how different the world was becoming from what they were used to. This is a tired trope. People are resilient and adaptive, and as you get older you learn how to embrace new things that actually help and reject new things that don’t. As I get older, I find myself just not caring about a lot of things that younger people care about and not doing a lot of things they do. I don’t use social media, I still pay for things with cash and checks, and I don’t understand or care about the Kardashians or reality TV. My phone is 8 years old. I listen to prog rock and new wave music, and don’t know any hip hop songs. I don’t feel even slightly “left behind” or “obsolete.”
    • randusername 1 hour ago
      Counterpoint:

      Mistakes are less costly in the beginning and the knowledge gained from them is more valuable.

      Over-sharing on social media. Secret / IP leaks with LLMs. That kind of thing.

      I agree:

      FOMO is an all-in mindset. Author admits to dabbling out of curiosity and realizing the time is not right for him personally. I think that's a strong call.

    • brandonmenc 1 hour ago
      This is true in my experience.

      I waited until it seemed good enough to use without having to spend most of my time keeping up with the latest magical incantations.

      Now I have multiple Claude instances running and producing almost all of my commits at work.

      Yes, with a lot of time spent planning and validating.

    • logicchains 14 minutes ago
      Even if it reaches the end state of AGI, e.g. AI that's smarter and more capable than 90% of humans, there'll still be a huge learning curve to using it well, as anyone who's tried managing very smart humans can attest.
    • imtringued 1 hour ago
      I think this is particularly evident with AI.

      The early adopters started years ago, and they've seen improvements over time that they started attributing to their own skill. They tell you that if you didn't spend years prompting the AI, it will be difficult to catch up.

      However, the exact opposite is happening. As the models get better, the need for the perfect prompt wanes. Prompt engineering is a skill that is becoming obsolete faster than handwriting code.

      I personally started using Codex in March, and honestly, the hardest part was finding and setting up the sandbox. (I use limactl with QEMU and KVM.) Meanwhile the agentic coding part just works.

    • postalcoder 2 hours ago
      The thing is, this post is hitting a straw man. ngmi culture was deeply toxic and pervasive in crypto. I think the people who are really into LLMs are having a blast.
      • stavros 2 hours ago
        I'm definitely having a blast, but I agree with the author. You're not going to get left behind, the "getting left behind" rhetoric was just cryptocurrency pump-and-dumpers. It's fine to wait and not engage if you don't want to.
        • postalcoder 2 hours ago
          I agree with you, which is why I think it's a straw man. How many real devs are actually banging the "you're getting left behind!" drums?
          • mekael 1 hour ago
            I had a heavy AI user on my team say that “those who learn how to use the tools won't get fired, those who don't are gone”. I used it to generate a bunch of CFN, and it worked fine from an example and a couple-line prompt; doesn't seem that hard to learn to me.

            Now, reviewing the 1k lines it generated and making sure it's secure: that's going to take me longer than writing it by hand.

            • stavros 1 hour ago
              Yeah, I think this is it. If you don't learn to use them, you'll be much slower than people who do, but also they're not really that hard to learn, so it's not super urgent.
          • bigstrat2003 48 minutes ago
            I have personally heard people say this at work. It's not a strawman, there really is a message of "you'll be left behind" out there.
          • apsurd 1 hour ago
            It can be implicit though.

            The LLM person having a blast is compelled to push everyone to see what they see. If they have a leadership role at their company, then the getting-left-behind drum does get banged, in the form of "AI-native company transformation" initiatives.

          • bleuarff 1 hour ago
            I don't know for devs, but that's the message we get from upper management.
          • foolserrandboy 1 hour ago
            The executives are, not the devs.
          • SpicyLemonZest 1 hour ago
            Lots, and not just online. I run into them regularly in my office, and so do my friends and family in tech. One of my coworkers is now spending all his time writing SKILLs, he's convinced that we'll never need to solve operational issues again if we have the right SKILLs.
          • bena 1 hour ago
            You're using a no-true-Scotsman to accuse the author of a strawman.

            Consider that.

          • Fraterkes 1 hour ago
            I think FOMO-aligned ai stuff is fairly common on HN, doesn't mean it's always deliberately manipulative.
        • plagiarist 28 minutes ago
          I'm not worried about being left behind technologically, but I am worried about being left behind after every company on the planet decides we need N years experience in AI to be employable.
          • stavros 19 minutes ago
            I already have 30 years of experience in LLMs, if you believe my CV, so I'm not worried.
    • wslh 1 hour ago
      We've seen multiple ideas/products get quickly absorbed into frontier models, OSS, or well-funded startups. The cycle from "interesting idea" to "commoditized feature" is getting very short. Personally, I've seen three of these in the last year.

      And even if your product is genuinely great, distribution is becoming the real bottleneck. Discovery via prompting or search is limited, and paid acquisition is increasingly expensive.

      One alternative is to loop between build and kill, letting usage emerge organically rather than trying to force distribution.

    • fantasizr 2 hours ago
      Somehow the AI bros are saying creating .md files is the real ingenuity and couldn't be learned in, say, half a day. There's absolutely no rush to keep up with the latest code-producing tools, especially when they're all "pay to play".
  • heytakeiteasy 2 hours ago
    Feels like a false equivalency. It's just my experience, but I've completely ignored crypto and the metaverse, and I don't get the sense I'm missing out on much. In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life. Transformative for the better? Time will tell I suppose, but I'm really enjoying it so far.
    • xondono 2 hours ago
      This depends very much on your line of work.

      As a freelancer I do a bit of everything, and I’ve seen places where LLM breezes through and gets me what I want quickly, and times where using an LLM was a complete waste of time.

      • pennomi 2 hours ago
        For sure. The more specialized or obscure of things you have to do, the less LLMs help you.

        Building a simple marketing website? Probably don’t waste your time - an LLM will probably be faster.

        Designing a new SLAM algorithm? Probably LLMs will spin around in circles helplessly. That being said, that was my experience several years ago… maybe state of the art has changed in the computer vision space.

        • heytakeiteasy 1 hour ago
          > The more specialized or obscure of things you have to do, the less LLMs help you.

          I've been impressed by how this isn't quite true. A lot of my coding life is spent in the popular languages, which the LLMs obviously excel at.

          But a random dates-to-the-80s robotics language (Karel)? I unfortunately have to use it sometimes, and Claude ingested a hundreds-of-pages-long PDF manual for the language and now it's better at it than I am. It doesn't even have a compiler to test against, and still it rarely makes mistakes.

          I think the trick with a lot of these LLMs is just figuring out the best techniques for using them. Fortunately a lot of people are working all the time to figure this out.

          • jatora 1 hour ago
            Agreed. The sentiment you are replying to is a common one and is just people self-aggrandizing. No, almost nobody is working on code novel enough to be difficult for an LLM. All code projects build on things LLMs understand very well.

            Even if your architectural idea is completely unique... a never before seen magnum opus, the building blocks are still legos.

        • monsieurbanana 1 hour ago
          Specialized is probably not the word I'd use, because LLMs are generally useful for understanding more specialized/obscure topics. For example, I've never randomly heard people talking about the DICOM standard, yet LLMs have no trouble with it.
          • phil21 1 hour ago
            I think there is a sweet spot for the training(?) on these LLMs where there is basically only "professional" level documentation and chatter, without the layman stuff being picked up from reddit and github/etc.

            I was looking at trying to remember/figure out some obscure hardware communication protocol to figure out enumeration of a hardware bus on some servers. Feeding codex a few RFC URLs and other such information, plus telling it to search the internet resulted in extremely rapid progress vs. having to wade through 500 pages of technical jargon and specification documents.

            I'm sure if I was extending the spec to a 3.0 version in hardware or something it would not be useful, but for someone who just needs to understand the basics to get some quick tooling stood up it was close to magic.

          • aleph_minus_one 1 hour ago
            > llms are generally useful to understand more specialized / obscure topics

            A very simple kind of query that in my experiences causes problems to many current LLMs is:

            "Write {something obscure} in the Wolfram programming language."

            • AlotOfReading 25 minutes ago
              One tendency I've noticed is that LLMs struggle with creativity. If you give them a language with extremely powerful and expressive features, they'll often fail to use them to simplify other problems the way a good programmer does. Wolfram is a language essentially designed around that.

              I wasn't able to replicate it in my own testing, though. Do you know if it also fails for "Mathematica" code? There's much more text online about that.

              • aleph_minus_one 5 minutes ago
                > Do you know if it also fails for "mathematica" code?

                My experience is similar.

      • muskstinks 58 minutes ago
        But this learning is also valuable.

        Without playing around with it, you wouldn't know when to use an LLM and when not.

      • peacebeard 1 hour ago
        Honestly, I think this is the primary explanation for why there is so much disagreement on whether LLMs are useful or not, if you leave out the more motivated arguments in particular.
    • datsci_est_2015 2 hours ago
      > In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life.

      Feels like a false dichotomy.

      Have I become faster with LLMs? Yes, maybe. Is it 10x or 1000x or 10,000x? Definitely not. In the past I would have leaned more on senior developers, books, Stack Overflow, etc., but now I can be much more independent and proactive.

      LLM-based tools are a wide spectrum, and to argue that the whole spectrum is worth exploring because one sliver of it has definite utility is a bit wonky. Kind of like saying $SHITCOIN is worth investing in because $BITCOIN mooned as a speculative asset:

        - I’m bullish on LLMs chat interfaces replacing StackOverflow and O’Reilly
        - I could not be more bearish on Agents automating software engineering
      
      Feels like we're back at the Adobe Dreamweaver release and everyone is claiming that web development jobs are dead.
      • hungryhobbit 55 minutes ago
        >Feels like we're back at the Adobe Dreamweaver release and everyone is claiming that web development jobs are dead

        I truly believe so much of the anti-AI sentiment is the same as the Luddites.

        They're often used as a meme now, but they were very real people, faced with a real and present risk to their livelihoods. They acted out of fear, but not just irrational fear.

        AI is the same: it's unquestionably (to anyone evaluating it fairly) a huge boost to productivity ... and also, unquestionably, a threat to programmer jobs.

        Maybe the OP is right about waiting, but to me, whenever new tech is disrupting jobs, that seems like the best time to learn it. If you don't, it's not just FOMO as the author suggests ... it's failing to keep up with the skills that keep you employed.

      • JumpCrisscross 2 hours ago
        > Have I become faster with LLMs? Yes, maybe.

        The question isn’t if you’ve improved. It’s if the path you took to getting to your current improvement could have been shortcut with the benefit of hindsight. Given the number of dead ends we’ve traversed, the answer almost certainly is yes.

    • spelunker 7 minutes ago
      Crypto and the Metaverse were solutions in search of a problem. LLMs kind of felt like that until tooling arrived that enabled doing a lot more than copying + pasting chat conversations.

      Sure, maybe crypto changed some lives, but an entire industry? I think ALL of software dev is undergoing a transformation, and we're past the point of "wait it out" IMO.

      Or I'm wrong, but right now I'm being paid to develop a new skill professionally. Maybe the skill ends up not being useful - ok, back to writing code the old way then.

    • newsoftheday 1 hour ago
      > In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation

      It may have reduced the time to an implementation, but based on my experiences I sincerely doubt the veracity of applying the adjective "working".

    • ErroneousBosh 30 minutes ago
      > In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life.

      I can't really agree. I've never seen anything from an LLM that I would consider even helpful, never mind transformative.

      How are you supposed to use them?

      • kaffekaka 1 minute ago
        Is this seriously so? Have you never seen anything helpful from an LLM? That seems like such a black-and-white statement that I get confused.

        I am conservative regarding AI driven coding but I still see tremendous value.

        It makes me want to ask you: do you ever see helpful things from your colleagues at all?

    • lapcat 2 hours ago
      > Transformative for the better? Time will tell I suppose

      That's the point of the blog post. If you can't even say right now whether it's for the better, then there's no reason to rush in.

      • colejohnson66 2 hours ago
        I read OP as saying it is transformative, at least for them. Whether it's transformative for society is left to be decided.
      • nehal3m 2 hours ago
        And conversely if it is, then there is no point to getting in early since the whole point is to externalize knowledge and experience
    • locknitpicker 1 hour ago
      > Feels like a false equivalency.

      It's clearly a textbook example of survivorship bias.

      In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.

      It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even in social media we started to see the dead internet theory put to practice in real time.

      And coding is the lowest of low hanging fruits.

      • ThrowawayR2 1 hour ago
        In the 90s was also the dotcom boom, and the vast majority of those who placed an all-in bet on it being everything lost it all in the dotcom bust and also "ended up being forgotten by history". Some of those bets were prescient but too early but many of those bets never made any sense. The dotcom bust was worse than the software industry crash we're experiencing now.

        "It's rather obvious that this AI thing is a transformative event in world history" perhaps but it's not at all obvious how it's going to shake out or which bets are sensible.

        • locknitpicker 35 minutes ago
          > In the 90s was also the dotcom boom, and the vast majority of those who placed an all-in bet on it being everything lost it all in the dotcom bust and also "ended up being forgotten by history".

          I think you are missing the point, and also the very site you're posting on.

          Look at the top 50 list of most valuable companies in the world. Over half of the total market value reported today is attributed to companies which were either dotcom startups or whose growth was driven by the dotcom growth period. Dismissing the advent of the internet as anything short of revolutionary is disingenuous, no matter how many zombo.com companies failed.

          LLMs have the exact same transformative impact on humanity.

          • danaris 6 minutes ago
            > LLMs have the exact same transformative impact on humanity.

            But this is begging the question.

            Yes, we can see that the internet was radically transformative.

            But you are arguing that this somehow proves that LLMs are too, when there's wildly insufficient evidence—either on where LLMs are going in themselves, or in the comparison—to credibly make that claim.

      • disgruntledphd2 1 hour ago
        > It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even in social media we started to see the dead internet theory put to practice in real time.

        It's worth noting that SO was declining well before ChatGPT launched. It seems more likely that the decline of SO was more driven by Google ranking changes to prioritise websites that served Google ads. Certainly I remember having to go down a few results to get SO results for a while, even when the top results were just copypasta from SO.

        • locknitpicker 39 minutes ago
          > It's worth noting that SO was declining well before ChatGPT launched. It seems more likely that the decline of SO was more driven by Google ranking changes to prioritise websites that served Google ads.

          I don't think that's it. SO was the go-to page for troubleshooting, and its traffic did not exactly originate from web search. The LLM-correlated drop in traffic is also reported by search engines. Stack Overflow just happens to be a specialized service with a very specialized audience whose demand is perfectly dominated by LLM chatbots.

      • kjkjadksj 1 hour ago
        The internet was something new. By definition, LLM coding isn't doing anything you couldn't have done already. Once the agents are no longer writing a human-syntax language but are spitting out opaque functions in binary machine code, then they'll be doing something new and compelling imo, because there are real performance gains in that.
        • bitwize 1 hour ago
          No, this is wrong. AI has drastically shortened the time and effort between idea and implementation. The upshot is that not only do you get things done faster, but things you wouldn't otherwise countenance doing are now within reach.
      • irishcoffee 1 hour ago
        > In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.

        Allow me to introduce you to the dot-com boom, where everyone who bet on the internet went broke.

      • lapcat 1 hour ago
        > In the 90s the same argument was directed at this new thing called the internet, and those who placed a bet on it being a fad ended up being forgotten by history.

        Almost all people are "forgotten" by history.

        In any case, people who were not even born yet in the 1990s are using the internet today, very successfully, so clearly you can wait.

  • wolframhempel 2 hours ago
    There's value in being early - in the right thing.

    - If you'd invested in Bitcoin in 2016, you'd have made a 200x return

    - If you'd specialized in neural networks before the transformer paper, you'd be one of the most sought-after specialists right now

    - If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush

    Of course, you could just as well have

    - become an ActionScript specialist as it was clearly the future of interactive web design

    - specialized in Blackberry app development as one of the first mobile computing platforms

    - made major investments in NFTs (any time, really...)

    Bottom line - if you want to have a chance at outsized returns, but are also willing to accept the risks of dead ends, be early. If you want a smooth, mid-level return, wait it out...

    • anticorporate 2 hours ago
      There may be financial value in being early (if you're lucky), but there are other values in waiting.

      My goal in life is not to maximize financial return, it's to maximize my impact on things I care about. I try to stay comfortable enough financially to have the luxury to make the decisions that allow me to keep doing things I care about when the opportunities come along.

      Deciding whether something new is the right path for me usually takes a little time to assess where it's headed and what the impacts may be.

      • 0x3f 1 hour ago
        > My goal in life is not to maximize financial return, it's to maximize my impact on things I care about.

        In the vast majority of cases, financial returns help maximize your impact on the things you care about. Arguably, in most cases it's more effective for you to provide the financing and direction but not be directly involved. That's why the EA guys are off being quants.

        The only real exceptions are things that specifically require you personally, like investing time with your family, or developing yourself in some way.

        • anticorporate 1 hour ago
          I knew this canned rebuttal was coming and almost addressed it in my previous comment.

          I've not found this to be true at all, for a variety of reasons. One of my moral principles is that extreme wealth accumulation by any individual is ultimately harmful to society, even for those who start with altruistic values. Money is power, and power corrupts.

          Also, the further from my immediate circle I focus my impact on, the less certainty I have that my impact is achieving what I want it to. I've worked on global projects, and looking back at them those are the projects I'm least certain moved the needle in the direction I wanted them to. Not because they didn't achieve their goals, but because I'm not sure the goals at the outset actually had the long term impact I wanted them to. In fact, it's often due to precisely what we're talking about in this thread: sometimes new things come along and change everything.

          The butterfly effect is just as real with altruism as it is with anything else.

        • ashwinsundar 1 hour ago
          I didn't realize maximizing money is the way to achieve moral excellence. It's interesting how Puritanical the EA folks are
        • lawtalkinghuman 1 hour ago
          > That's why the EA guys are off beng quants.

          Or in prison for fraud.

        • danny_codes 52 minutes ago
          > The only real exceptions are things that specifically require you personally, like investing time with your family, or developing yourself in some way.

          So, the things that matter the most for most people?

          Studies pretty consistently show that happiness caps off at relatively modest wealth.

        • Devasta 35 minutes ago
          I want to cure lung cancer, therefore as an Effective Altruist™ I maximize my income by selling cigarettes to children outside playgrounds. The money will go towards research in my will, and in the meantime the incidence of lung cancer in teenagers will incentivize the free market to find a cure!

          People don't become quants because they are EAs, they become EAs to justify to themselves why they became quants.

        • komali2 1 hour ago
          > Arguably in most cases it's more effective for you to provide the financing and direction but not be directly involved. That's why the EA guys are off beng quants.

          The EA guys aren't the final word on ethics or a fulfilling life.

          Ursula K. Le Guin wrote that one might, rather than seeking to always better one's life, instead seek to share the burden others are holding.

          Making a bunch of money to turn around and spend on mosquito nets might seem to be making the world better, but on the other hand it also normalizes and enshrines the systems of oppression and injustice that created a world where someone can make $300,000 a year typing "that didn't work, try again" into Claude while someone else watches another family member die of malaria because they couldn't afford meds.

    • hakunin 5 minutes ago
      > If you'd invested in Bitcoin in 2016, you'd have made a 200x return

      Except you would've probably sold it at any of the 1.5x, 2x, 4x, or 10x points. That's what people keep missing about this whole "early bitcoin" thing. You couldn't tell it would 2x at 1.5x, you couldn't tell it would 4x at 2x, and so on.

    • not_a_bot_4sho 1 hour ago
      Wow, shots fired here for me.

      I was ahead of the game with my intimate expertise in ActionScript and Silverlight! I made 3D engines in browsers well before WebGL was a spec.

      It was quite profitable for a few years, then poof. Dead end lol

    • jghn 2 hours ago
      A friend told me about bitcoin in early 2010, back when the coins were effectively free. I laughed at the idea and called it stupid.

      I still think it's stupid, but I'd be a whole lot richer if I had gone along with it at the time!

      • benhurmarcel 1 hour ago
        You would probably have sold early for a relatively small amount too.

        I bought 10ish BTC at some point for almost nothing, sold them for a low 4-digit amount thinking they were stupid anyway. I still think they were stupid but it turns out they could have paid off my house easily. Oh well.

        • Cthulhu_ 1 hour ago
          Question is whether you sold all of them or just a part. If you'd bought 100 BTC early on and sold 99 of them, you'd still have something worth a deposit on a house or a car now, which is still life-changing.
      • irusensei 1 hour ago
        IMO the only way you'd be rich with bitcoin would be if you forgot about your coins and were reminded of them years later, or if you were a hardcore "fix the money, fix the world, hodl" believer.

        Otherwise you would most likely have sold during one of the huge crashes or rallies, attempted to trade and lost it all, invested in the new shitcoin or NFT or whatever, or just got hacked along the way.

        • jghn 1 hour ago
          Indeed. Even if I hadn't lost them along the way or had them stolen in a hack, I would have cashed out back when they were outlandishly priced at $100. I never would have held on to them this long.

          But even $100 would have been nice given you could still pop them out for free on a standard PC back then with mining software.

      • wolframhempel 2 hours ago
        To be fair, I remember being about ten years old and berating my friend, who had just told me he had something called "Rebel Assault" on a CD, about how this was completely impossible since CDs could only store music, and how he was a complete idiot for believing otherwise... :-)
      • irishcoffee 1 hour ago
        I distinctly remember staring at my check/bank/debit card laying on the desk in front of my keyboard, all the info punched in to buy $500 of bitcoin for something like $0.29 a coin.

        Didn't pull the trigger. I just tell myself I'd have sold them when they doubled in price, or they'd have been hacked in one of the Mt. Gox attacks and I'd have lost them anyway.

        Today it would be about 120m. Oh well.

        • jghn 1 hour ago
          Same. There's no chance I'd have the riches associated with those coins. Or at least that's how I manage to sleep at night.

          There was a local food delivery service at the time that accepted bitcoin. Can you imagine looking back on life and realizing you spent the equivalent of $1M on a burrito?

          • cheevly 22 minutes ago
            My roommates and I literally bought a pizza with our stash of bitcoins. So yes, we fully understand how this feels.
    • aleph_minus_one 1 hour ago
      > - If you'd specialized in neural networks before the transformer paper, you'd be one of the most sought-after specialists right now

      > - If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush

      I disagree:

      Concerning the first point: neural networks today are very different from how they were in former days. Past knowledge of neural networks transfers only very partially to modern ones, and it clearly does not make you a very sought-after specialist right now.

      Concerning your second point: the success of mobile games is very marketing-centric. While it is plausible that being early in mobile game development when the iPhone was released might have opened doors in the game industry for you, I seriously doubt that having this skill alone would have made you rich.

      • imtringued 18 minutes ago
        There's also the problem that there is currently a huge transformer monoculture in the AI space. Everything gets better if you throw transformers at the problem.

        If you had worked on anything other than a transformer-based architecture post-2016, such as Mamba or RWKV, you would have wasted your time.

        Mamba 3 is the third iteration and somehow I doubt that it will catch on.

      • erikerikson 1 hour ago
        I can confirm your response to the first point.
    • rdiddly 1 hour ago
      Crucially those are all investments. Just like creating AI or buying data centers to run AI is an investment. Whereas merely using AI feels more like being in the general population of consumers. The shape of the outlay for it is un-investment-like. A monetary investment is a big charge up front, not a monthly fee. A skills investment looks like effort spent learning, but I mean how difficult is it to type reasonably precise English? Conclusion: you're a customer, not an investor, so you can start any time.
    • fermentation 35 minutes ago
      There's really no need to be "early" for something like this. I've seen coworkers who've never used agentic coding pick it up and learn everything they need to in ~2 days. The people who really dive in and run orchestrated agents overnight are, in general, not building much of real value.
    • JumpCrisscross 2 hours ago
      Almost everyone chasing those returns would be better off investing in index funds.
      • aleph_minus_one 1 hour ago
        Index funds give you something like the expected value over a huge class of possible investments.

        What you aim for if you want to invest early is rather a probability distribution of

        - get rich with a small (but nevertheless realistic) probability p

        - get something between little, nothing, and losing a little bit with probability 1-p

        This is a very different offering than the profit probability distribution that index funds give you.
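        To make the distinction concrete, here's a toy Monte Carlo sketch of the two payoff profiles. Every number in it (the probability p, the 100x payoff, the loss range, the index return) is invented purely for illustration, not a market estimate:

```python
import random

random.seed(0)

def index_fund():
    # Steady compounding: roughly 7%/year over 10 years.
    return 1.07 ** 10

def early_bet(p=0.005, big=100.0):
    # Small probability p of an outsized payoff ("get rich"),
    # otherwise an outcome between a modest loss and roughly break-even.
    if random.random() < p:
        return big
    return random.uniform(0.5, 1.2)

n = 100_000
bets = [early_bet() for _ in range(n)]
mean_bet = sum(bets) / n
p_rich = sum(b >= 100 for b in bets) / n

print(f"index fund multiple:     {index_fund():.2f}")
print(f"early-bet mean multiple: {mean_bet:.2f}")
print(f"early-bet P(100x):       {p_rich:.4f}")
```

        In this toy setup the early bet has a lower mean multiple than the index, yet it's the only one with any chance at the outsized outcome, which is exactly the difference in distributions described above.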

        • JumpCrisscross 1 hour ago
          > What you aim for if you want to invest early

          Sure. I’m saying someone pursuing that portfolio will probably end up underperforming an index. Most new early-stage VCs do.

          > get something between little, nothing, and loosing a little

          Broadly speaking, when your investment outcomes don’t differentiate between anything and zero, you’re mostly going to get zero.

          • aleph_minus_one 1 hour ago
            > I’m saying someone pursuing that portfolio will probably end up underperforming an index.

            This holds if you consider "underperforming" to be a comparison of expected values.

            On the other hand, if you consider "probability of getting a really huge payoff" to be the measure by which the investments are compared, the index fund is the one that loses the comparison.

            • mahemm 1 hour ago
              Would you be comfortable using this same logic to invest most of your net worth in lottery tickets/betting on black in a casino? If not, I'd be curious to hear what is different in that for you.
              • aleph_minus_one 45 minutes ago
                > Would you be comfortable using this same logic to invest most of your net worth in lottery tickets/betting on black in a casino?

                I wouldn't "invest" in lottery tickets because for those, p is far too small (exception: if I found a loophole in the lottery's system, which has been found for some lotteries). For casinos, there is additionally the very important aspect that the casino will shut you down: if you start winning money (for example by having found some clever strategy that gives you an advantage), security will escort you out of the building and ban you from entering again.

                So, to give an explanation of the differences:

                - Because "the typical run" for such an investment will be a loss, you should never put your whole net worth (or a significant fraction of it) into such an investment. The advice I personally often give is to use index funds or stock investments to generate the money for investments that are much riskier but have huge possible payouts.

                - You should only make such an "early investment" if you have a significant information advantage over the average person. Such an advantage is plausible, for example, if you are deeply interested in technology topics.

                - Lottery tickets have an insanely small p (as defined in my comment). You only make "early investments" in areas where p is still small, but not absurdly so. The difference is that for lottery tickets p is basically well-known, whereas for "early investments" people can only estimate it. Because of your information advantage from the previous point, you can estimate p much better than other people, which gives you a strong edge in picking the right "early investments".

                But be aware that this is a strategy for risk-affine people. If you aren't, you better stay, for example, with index funds.

      • jayd16 1 hour ago
        Sure, you could take calculated risks for predictable returns over a long enough time scale.

        Or you could take what's in the box!!

    • bombcar 2 hours ago
      Some of those things involve a bit of money as a gamble; others require some time learning tooling that can be repurposed (mobile game developers can obviously do mobile apps), and others would be dedicating years of your life into something that may be a dead end.
    • rob 1 hour ago
      I should've started registering a bunch of two and three character and generic .com domains in the early 90s when registration was free.
    • demaga 2 hours ago
      None of those examples are useful.

      1. 2016 was years after Bitcoin was developed, so you could still make 200x returns without being an early adopter.

      2. Is this even true? I'd bet scraping experts or people who can fine tune LLMs have easier time finding a job than classical ML academics.

      3. Candy Crush was released when the iPhone was on its 5th iteration.

      If anything, you just added to OPs points. Being an early adopter gives limited advantage.

      • Quarrelsome 2 hours ago
        To be fair, a friend of mine made one of the first flashlight apps on iOS and made a tidy sum out of it.

        I think the question really is about how well you hit your timing. You could have held bitcoin but sold when it hit $5k or less. You could have had a technical advantage in a given field but somehow wasted it (dead startup, serious illness) and lost your timing. Nobody knows what the right timing is, but I think the OP is pushing for a more consistent risk-investment strategy, setting up the timing to raise the floor significantly at the cost of losing some of the best possible ceilings.

    • mock-possum 1 hour ago
      In all fairness, I was an ActionScript specialist, and up until 2010 or so I was making REAL good money doing contract Flash work.

      Luckily I was also doing frontend work alongside it, so when the time came to transition to HTML+CSS+JavaScript, it wasn't much of a move at all; it was just putting down AS and focusing fully on JS.

  • bogzz 2 hours ago
    It's a horrifying feeling facing the possibility that the career I spent so much time and money to get into is fading away. Sure, LLMs are not there yet, and they might not ever quite get there. But will companies start hiring again? If productivity has gone up, and it seems like it has, then no.

    So, a decade of hanging by a thread, getting by and doubling down on CS, hoping that the job market sees an uptick? Or trying to switch careers?

    I went to get a flat tire fixed yesterday and the whole time I was envious of the cheerful guy working on my car. A flat tire is a flat tire, no matter whether a recession is going on or whether LLMs are causing chaos in white collar work. If I had no debt and a little bit saved up I might just content myself with a humble moat like that.

    • jghn 2 hours ago
      > But will companies start hiring again?

      Anecdata, but the few people I know who were looking to switch gigs all had multiple offers within a few weeks. One thing they all had in common was taking a very targeted approach with their search and leveraging their networks. Not spamming thousands of resumes into the ether.

      • jmye 2 hours ago
        > leveraging their networks

        Just finished a search - agree. The resume process is fundamentally broken, but a strong network makes it irrelevant. Lean on connections - there's a ton of opportunity out there.

      • GeoAtreides 1 hour ago
        so either you're well connected or you starve, got it

        guess it's death and destitution for introverts

        edit: please explain the downvotes, i'm curious why you think i'm wrong

        if what op says is true, that today only networking works, then it easily follows that if for some reason you don't have a network, you don't get hired

        • jghn 1 hour ago
          Anyone who has been in the industry for several years or more should have people they can reach out to. That’s not being well connected.

          This is really just a reversion to how things used to work, relying on human connections. People seemed to manage to get jobs 30 years ago just fine

        • bigstrat2003 23 minutes ago
          Bro I'm an introvert and I have work connections. It isn't hard.
          • GeoAtreides 17 minutes ago
            > I have work connections.

            > It isn't hard.

            you're not an introvert then

        • bogzz 1 hour ago
          Seems like that is the case, yes.
        • lionkor 1 hour ago
          I'm currently involved in the hiring process in our company, selecting engineers for my team. If someone applies who has the programming language we ask for in their CV, they get a first interview. If they can read code, and write VERY basic code, they will get through at least the first 2 rounds without any issues.

          If people put down the AI and actually learn how to write a `for` loop, they will be more hireable than 50% of candidates.

          > "Guess it's death [...] for introverts"

          There is a meritocracy somewhere in our capitalist system. Not everyone participates, but it exists.

          • GeoAtreides 16 minutes ago
            OP was saying the only way to get hired is through work connections

            of course if the process doesn't involve networking then we don't have a problem, we agree on that

    • bonoboTP 1 hour ago
      There is an infinite amount of software to be made. Desires and wants never get satisfied. There will simply be more software, more features, more supported platforms, more bug checks, more tests, more CI/CD, more docs, more websites, more services, more more more. Once we solve something, we have a million new desires that we want to solve. There will be plenty of work in software, up until the time when really all knowledge work can be replaced. At that point all bets are off.
    • yunwal 2 hours ago
      > A flat tire is a flat tire, no matter whether a recession is going on or whether LLMs are causing chaos in white collar work.

      There’s really not much stopping changing tires from being automated away. Further standardization of tires or wage increases would probably do the trick.

      There’s still plenty of software to be created. You’ll probably have to learn some ML tricks or whatever, but there’s nothing going away, just changing as software has always done.

      • reaperducer 2 hours ago
        There’s really not much stopping changing tires from being automated away.

        Sounds like you've never changed a tire. Or at least not outside of a very controlled environment.

        • tayo42 1 hour ago
          Are most cars not just undoing the lock, the bolts, switching the tire and redoing the bolts and locks?

          How are these put on in the first place on an assembly line?

          • sharkjacobs 30 minutes ago
            This is the funniest possible answer to "Sounds like you've never changed a tire. Or at least not outside of a very controlled environment."

            "Oh, you think I've never changed a tire? Well here is my abstract high level understanding of the steps to changing a tire! And have you considered the quintessential controlled environment for putting tires onto cars?"

            • tayo42 11 minutes ago
              We all change tires with a jack and a spare? It's a simple skill? I don't get the point you're making then.
    • miah_ 2 hours ago
      I've worked in tech since the late 90's and recently became an apprentice potter. My work in pottery is so much more fulfilling than any tech work I've done. I wish I had started sooner.

      I'm still working in tech, and likely will forever in a much reduced capacity. But pottery is my life now.

    • samiv 2 hours ago
      I cannot but agree. It's a massive skill-leveling, where software development is transforming from high-skilled coding to low-skilled prompting.

      For an old dog like myself, it feels like an unjust rug pull.

      • chasd00 49 minutes ago
        I think looking at what the web did to the journalism industry is a worthwhile model for what's happening to the software dev industry. Journalism didn't go away, but it completely changed. Many old-school journalists just couldn't adapt and left the industry, and many papers died too.
        • samiv 41 minutes ago
          Many things only have value because of scarcity.

          Digital products such as "photoshop" have had value because people need a tool like that and there's only limited competition, i.e. scarcity. The scarcity exists because of cost: the cost of creating "photoshop" limits how many "photoshops" can exist. When you bring down the cost, you get more "photoshops", and as the volume increases, the value decreases. Imagine if you could just tell Claude "write me photoshop", go take a dump, and come back 30 minutes later to a running Photoshop. You wouldn't pay 200 USD for a license then, would you? You'd pay 0 USD.

          If you now create a tool that can (or promises it can) obliterate those costs, it means essentially anyone can produce "photoshop". And when anyone can do it, it will be done over and over, at which point these products are worth zero and you can't give them away.

          The same thing has happened to media publishing, print media -> web, computer games, etc.

          The problem is that when your product is worth zero, you can no longer build a business by creating your product, so in order to survive you must look to alternative revenue streams such as ads, data mining, etc. None of which benefit the product itself.

      • tofuahdude 1 hour ago
        Why unjust? Who promised you that the way software is made will stay static?

        Our software industry has specialized, for decades, in "rug pulling" / changing / "disrupting" other industries on a massive scale.

        I find it pretty ironic when engineers make these statements in that context.

        • samiv 1 hour ago
          Whether you like it or not, society is built on certain social constructs and agreements.

          Do you think it's fair that when society shifts underneath you, when the capitalist system moves its tectonic plates, it's the individual who has to bear the cost?

          And let's be clear: only software devs are just sucking it up. Do you think lawyers and doctors would allow themselves to be laid off en masse and replaced with trainees who just prompt the computer?

          Also, what will happen when high-wage earners start losing their discretionary income? The whole service sector, for starters, will be shaken.

          Just imagine some big tech company laying off 10k engineers making $0.3m per year. That's $3b that disappears from incomes, and thus from the economy, and just stays in the pockets of the capital holders.

          • chasd00 46 minutes ago
            I made this point a few comments up, but I think what's happening to the software dev industry is what happened to the journalism industry when the web really came into its own and everyone became a journalist. There were even books written by tech people about how great "creative destruction" is, heh. Now the shoe is on the other foot. How many "old dinosaurs" did web development, and software dev in general, put out of business? My neck is on the line too, but even I have to chuckle at that a little bit.
            • samiv 30 minutes ago
              Yep, you're absolutely right. The value in journalism and journalistic output was based on scarcity: the cost of publishing limited the amount of available content. With the web those costs were obliterated, so content exploded and the value of any individual piece dropped to essentially zero. When it's worth zero your revenues are zero, and with zero revenues you can't really pay for any journalism.

              So then you have no choice but to seek alternative revenue streams (ads, data mining), and in fact this becomes the thing, since the original thing no longer produces revenue.

      • bonoboTP 2 hours ago
        I don't think it's a psychologically positive self-identification to see yourself merely as a gatekeeper and toll-extracting rent seeker who only makes a living by withholding agency and skill from others.

        I know many jobs are about giving partial access to secrets or insider knowledge etc but I simply can't see myself accepting that this is my value proposition.

        No, let the pie grow. Let more people be able to do more things. Use the new capabilities to do even more. See how you can provide genuine value in the new environment. I know it isn't easy. There are many unknowns. But at least aspirationally I see that as the only positive way forward.

        The same thing has happened to many jobs. 100 years ago being a photographer was a difficult skill. They must have felt a rug pull when compact cameras became mainstream and they were no longer called to take all family pictures. Surely the codex writers felt a rug pull when printing became widespread. Typesetters when people could use word processors on their PC with font settings. Prop designers and practical effects people when movies switched to vfx. Etc etc.

        • jwolfe 2 hours ago
          > I don't think it's a psychologically positive self identification to see yourself merely as a gatekeeper and toll extractor rent seeker who only makes a living by withholding agency and skill from others.

          That's an incredibly uncharitable reading of the parent comment. At no point in history prior to maybe this year could you argue that working in software was gatekeeping, toll extracting, or rent seeking. Being a highly skilled craftsperson creating software for those who can't or don't want to is a very psychologically positive self identification. Lamenting that the industry is moving away from highly skilled craftspeople is also perfectly valid, even if you believe that it is somehow good for society, which is yet to become clear.

          • bonoboTP 2 hours ago
            They complained about the skill leveling, where lower-skilled people can now do what required higher skill before. You toiled to learn the craft; now there is a fast track to those results. That's what the rug pull is.

            Yes, producing software was value. (It of course still is as of today; we are talking about what may be coming.) My plea is to continue searching for ways to contribute value. Don't resign yourself to feeling that the only way to hold on is to try to stop others from knowing about or being able to use the skill-leveling tech. This makes one bitter and negative. Embrace it, aspire to be happy about it.

            It's like getting scooped in science. In research, I always try to reframe it and be happy that science has progressed. Let me try to learn from it and pivot my research to some area where I can contribute something. Sulking about having been scooped does not lead to positive change and devalues one's own self-image.

            • samiv 1 hour ago
              The problem is that we don't live in a society where the benefits of new technology benefit all.

              We're about to pull the rug out from under all knowledge workers. This will disrupt wage earners' lives. This will disrupt the economy.

              You might feel great when things become cheaper, but remember that when things are cheap it's only because costs are low, and when costs are low revenues are low, and when revenues are low salaries are low too. Keep in mind that one party's cost is another party's revenue.

              The economy is ultimately one large circle where the money needs to go around. You might think of yourself as a winner as long as someone else's salary drops to zero and you still get to keep your income, but eventually it will be you whose income is disrupted.

              Just something to keep in mind.

              And we're not just going to pull the rug out from under individual knowledge workers, but businesses too. Any software company with a software product will quickly find itself in a situation where its software is worth zero.

              Also this comment about gatekeeping is absolutely stupid. It's like saying trained doctors and medical schools are gatekeeping people from doctoring. It would be so much better if anyone could just doctor away, maybe with some tool assistance. So much fantastically better and cheaper? Right! Just lay off those expensive doctors and hire doctor-prompters for a fraction of the price.

              • chasd00 32 minutes ago
                > We're about to pull the rug underneath all knowledge workers. This will disrupt wage earners lives. This will disrupt the economy.

                to tie back to the actual article, if you believe a rug pull is imminent then you've got to get off the rug. Idk, you have to make a decision because we're certainly at a fork in the road. There's no guarantee waiting will result in a better outcome, nor that it will be a worse one. There are always going to be winners and losers, and a lot of it is really just luck in timing. I guess, in reality, the careers we've built come down to a flip of a coin: stay on the rug, get off the rug.

                /i'm thinking of buying a welding truck and getting into that, then hiring a welder, and rinse and repeat until i have a welding business. There's plenty of pipe fence in my neck of the woods and i see "welder wanted" all over the place, so there's opportunity too.

                • samiv 21 minutes ago
                  Good luck to you and your welding business. Personally I'm getting to a point where I'm just "too old" (and grumpy) to start over, so I guess for me it's going to be retirement to some LCOL area that I can afford.
              • polothesecond 1 hour ago
                > We're about to pull the rug underneath all knowledge workers. This will disrupt wage earners lives. This will disrupt the economy.

                It will put an end to the middle class entirely, but that’s the intent.

                The reality is a lot of people who were formerly middle or upper middle class, and even some lower class populations will face steep, irreversible “status adjustment”.

                I’m not talking about “we used to be able to take vacations and now we can’t”. I’m talking about “we used to be highly paid professionals now we’re viciously competing for low paid day labor (gig work) to hopefully be able to afford the cheap cuts this week”.

              • bonoboTP 1 hour ago
                You can make a living if you have a way to modify your behavior such that it compels another human being to reciprocate and modify their behavior in a way that you find beneficial for your life. All of money and economics in the end boils down to this. If you no longer have any kind of behavior that your neighbors and community see as valuable enough to modify their behavior to benefit you and keep you around, then we will be in trouble.
      • draftsman 2 hours ago
        LLMs have lowered the bar for the unskilled person to create shit software. I have used Opus 4.6 on a number of projects, and it still spits out buggy and sometimes flat-out broken code. I was actually surprised when it completely hallucinated the names of query params for an HTTP request in my code, when in the prompt I had explicitly given it the exact names it needed to use. I thought these frontier models were supposed to be game-changing.
        • js8 1 hour ago
          > LLMs have lowered the bar for the unskilled person to create shit software.

          So? Demand the source code. Run your own AI to review the quality of the code base. The contracting company doesn't want to do it? Fine, find one that will.

    • cyber_kinetist 1 hour ago
      I think the bigger issue is not that LLMs are taking away developer jobs, but the current geopolitical crisis (the collapse of the US empire and the end of the neoliberal era) is leading towards an imminent economic catastrophe, and that would be enough to pop not only the AI bubble, but an even bigger "IT bubble" that has been proliferating since the 90s.

      Programmers (and other white-collar workers) were able to luxuriously coast along during the ZIRP era because capital (replenished twice via quantitative easing) was cheap and plentiful, and because the elites at the top had to pump huge amounts of money into creating a shared fantasy of the "technological future" that validated the neoliberal era. Now that the reality of the actual "physical economy" (the economy of making tangible things) has clawed back at us because of that forbidden three-letter word (war), we all realize that doubling and tripling oil prices were actually dictating our lives rather than some "Skynet AI" crap, and thus our fantasy simulacra of "virtual" playthings have now come to an end. Oh, and we all found out that most of SaaS was actually bullshit anyway. In fact, if it could be completely replaced by AI then it was already pretty bullshit in the first place.

      So, smart STEM people uninterested in programming and only looking for a stable career would, I think, be better off just doing engineering work that's a bit more tangible, like robotics, manufacturing, shipbuilding, construction, etc. (Or anything related to war, but only if you're able to stomach what you're doing.) If you don't like to sit all day for a salary, then niche blue-collar work can also be a good option, since general-purpose robotics (Physical AI?) is still too far away because of many, many issues that are just too long to explain here. I still think if you like programming then you should stick to it in the long run - there will be a very cold winter because of the combination of LLMs, the AI bubble pop, and general economic depression, but for those who survive this era there will be opportunity because of the shortage of skilled programmers (since no one bothered to hire juniors after the pop, no one will grow to become a senior either!) Computing will still be with us forever, just not in the way investors thought it would "engulf the world".

      • drstewart 1 hour ago
        I would say that if all of your doomastrophizing comes to pass (the myriad collapses and depressions and winters in your post), then there is no opportunity for anyone anywhere, and we should all stock up on bullets and cigarettes while we can.

        But something tells me you won't do that.

        • cyber_kinetist 40 minutes ago
          Yup, absolutely! People before us already lived through the Great Depression, people already lived through two World Wars, and despite all that there were still some technological advancements made and some good opportunities amongst the chaos! There's obviously going to be pains from transitioning into a new world order, but humanity will still keep on living. Just don't expect the next world to be the same as before!

          I think it's important to know and practice your passion, even if you have to work on something different to pay the bills. You can only be good at something if you really like it, and you never know what opportunity you'll stumble onto if you're ready for it.

    • bethekidyouwant 2 hours ago
      You must be joking that being a car mechanic is anything like it was 20 years ago.
      • slfnflctd 2 hours ago
        The vast majority of tires that need to be repaired or replaced (and the processes to do so) haven't changed much if at all, though. And there are entire franchises that pretty much only do tires. Same with many other manual labor tasks.

        These are predictable jobs with very few variables that there is still no sign of automation replacing any time soon. They often don't suck as bad as people think. One of the most enjoyable jobs I had was on an assembly line, because my mind was mostly free to wander. It was almost like meditation.

        • bonoboTP 1 hour ago
          Hm, why don't you just go do it then? Seems strange to spread this knowledge and create competition for yourself.

          There's a reason most people want a white-collar job and send their kids to college instead of into such manual jobs.

          • slfnflctd 1 hour ago
            > most people want a white collar job and send their kids to college

            Part of the reason for my prior comment is the clear fact that a not-insignificant percentage of white collar jobs are being massively devalued at the moment, which means many people who thought they'd be able to send their kids to college with income from such jobs won't.

            Considering that the field of robotics is so far behind LLMs in terms of clear value outside of niche industrial applications, I think manual labor is about due for a resurgence. There may be some major rebalancing happening. The big question for laborers will be - as it has always been - what can I do that sucks the least but also allows me to pay for a decent life? Answers will vary.

            • bonoboTP 39 minutes ago
              I'm not sure how long this state of robotics will last. Dexterity is improving very fast. Robots are getting cheaper and cheaper.

              But also, a lot of the manual labor is quite expensive and only affordable as long as there are white collar workers who can pay for fancy bathroom remodelings and landscaping and so on. I don't know how a big deluge of reskilled pipefitters and HVAC technicians will be able to find work. Will everyone just pay each other to do a bunch of handy work for each other?

          • bogzz 1 hour ago
            Money.
  • keiferski 2 hours ago
    I actually think the opposite approach might be the most optimal one, at least from a monetary perspective. That is, be on the cutting-edge of something, but be willing to bail out at the moment its future starts seeming questionable. Or even more specifically, maximize your foothold in it while minimizing your downside.

    Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now. Even if you sold it 5 years ago, you would have made a ton of money. But if you quit your job and started a cryptocurrency company circa 2020, because you thought crypto would eat the entire economic system, you probably wasted a lot of time and opportunities. Too much invested, too much risked.

    AI is another one. If you were using AI to create content in the months/years before it really blew up, you had a competitive advantage, and it might have really grown your business/website/etc. But if you're now starting an AI company that helps people generate content about something, you're a bit late. The cat is out of the bag, and people know what AI-speak is. The early-adopter advantage isn't there anymore.

    • bastawhiz 2 hours ago
      How do you know what not to invest in, then? Bitcoin wasn't the only cryptocurrency fifteen years ago. It wasn't even the first digital currency. If you hop on every bandwagon, you'll go broke far more often than you become wealthy. Just look at all the people who poured money into NFTs or digital real estate in the metaverse or Dogecoin or whatever.

      It's easy to say "well of course I would have invested in Google in 1999" but there was nothing in 1999 to say that Google was going to be as big as it was. Why not Lycos or Dogpile or AskJeeves?

      How many people dedicated their careers to Flash, only to have it die at the hands of Steve Jobs and HTML5? It's not just about bailing out: lots of folks had to start over because taking advantage of the opportunity means actually investing real time and money. "As a tulip bulb producer, I would have simply stopped producing tulip bulbs when it started to seem questionable." https://en.wikipedia.org/wiki/Tulip_mania

      • keiferski 2 hours ago
        I don't think you're understanding my point. Hopping on the bandwagon is explicitly not "maximizing your foothold while minimizing your downside", nor is it early-stage, almost by definition. An early-stage technology with an uncertain future cannot have a bandwagon.

        I think the logical thing to do is to invest a minor amount of time/money across a broad spectrum of new promising tech. If you had been aware of and bought $500 of Bitcoin in 2010, you'd be a billionaire today. The early people involved with NFTs also did very well.

        The Flash example is specifically the opposite of my point. Flash was a lucrative skill for a period of time, but at a certain point it became very clear that it didn't have a future.

        • cj 2 hours ago
          If applying that advice were as easy as it sounds, there would be many more billionaires in the world.

          If only Mt Gox didn't vaporize all of my bitcoin in the early 2010's :(

          • keiferski 2 hours ago
            I didn’t say it was easy, I was just thinking theoretically, as a strategy that was the opposite of the OP.
    • epolanski 2 hours ago
      Ah yeah, the beauty of hindsight knowledge.
    • squeaky-clean 1 hour ago
      > That is, be on the cutting-edge of something, but be willing to bail out at the moment its future starts seeming questionable

      Counterpoint, I sold all my Bitcoin in 2011 when Mt Gox got hacked and the price plummeted 80%. Would have done it again after their 2014 hack too if I had any left.

      > Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now

      But you just said to bail the moment its future starts to be questionable. If you'd followed that, you would never have held it for 15 years.

    • WindyMiller 2 hours ago
      But you can't be on the cutting-edge of everything, and there are opportunity costs.
    • chrisandchris 2 hours ago
      Maybe - but that's also a lifestyle decision, isn't it? I did not care at all about Bitcoin, and I'm still pretty fine. I even think sitting Bitcoin out was better for me, because who (broadly speaking) cares about it today (besides speculating with it)?
    • LeonidasXIV 1 hour ago
      If you bought into NFTs when they were hot you would have lost money. Not every new tech is worth investing immediately.
      • keiferski 1 hour ago
        When they were hot is not when they were new.
    • JumpCrisscross 2 hours ago
      > be on the cutting-edge of something, but be willing to bail out at the moment its future starts seeming questionable

      The problem is this leaves you undifferentiated from every hype chaser in Silicon Valley. Our world is littered with folks who went to coding school, traded Bitcoin, did something in the metaverse, and blogged about AI. That jack-of-all-trades knowledge can be useful, but only if you’re making unlikely connections. Having the same cutting-edge familiarity as every tech journalist doesn’t make that.

      Better: develop deep knowledge and expertise in something. Anything. Not only does this give you some ability to recognize what expertise looks like from afar, it also lets you dip into new topics and have a chance at seeing something everyone else hasn’t already. That, in turn, gives you the ability to be a meaningful first mover.

    • xondono 2 hours ago
      Except that strategy gets you killed through a thousand paper cuts.

      What would you have done when the Bitcoin fork happened 50/50? Would you have gone into ICOs? Which ones? Etc…

      There’s simply too many “new things”, so by trying to get exposure to them you’ll be massively in the red.

      Let’s say you get into 1000 “new things”, and you strike it lucky and hit BTC. You’d have had to buy BTC in early 2013, hold it over the whole period, and sell at the historical maximum just to break even.

      If instead of buying 1000 “new things”, you’ve put your money into the S&P you’d be at +250% by the same time.

    • CodingJeebus 2 hours ago
      Yeah it looks reasonable when you only pick success stories for your examples.

      If you sold the farm to get in early in the Metaverse, you're totally hosed now because that was a dead end. The idea of digital real estate was as terrible then as it is now.

      • keiferski 2 hours ago
        I specifically wrote this:

        > Or even more specifically, maximize your foothold in it while minimizing your downside.

  • mindsandmach 5 minutes ago
    I'm reminded of the parable of the Chinese farmer (a quick Google search if you aren't familiar) when I see this sentiment. Is going all in on crypto good or bad? Maybe so, maybe not. We'll see. Is going all in on AI-assisted development good or bad? Maybe so, maybe not. We'll see.

    All I know is, I've always enjoyed building things. And I enjoy building things with AI-assisted tools too, so I'll continue doing it.

  • pjmlp 2 hours ago
    Agree with the message. Coding since 1986, I have learned not to suffer from FOMO and to wait for the dust to settle.

    Ironically one might even get projects to fix the mess left behind, as the magpies focus their attention into something else.

    In the case of AI, the fallacy is thinking that, even while riding the wave, everyone is allowed to stay around now that the team can deliver more with fewer people.

    Maybe rushing out to the AI frontline won't bring in the interests that one is hoping for.

    EDIT: To make the point even clearer: with SaaS and iPaaS products, serverless, and managed clouds, many projects now require a rather small team, versus having to develop everything from scratch on-prem. AI-based development reduces team size even further.

  • RobinL 1 hour ago
    For me, it's beyond doubt these tools are an essential skill in any SWE's toolkit. By which I mean, knowing their capabilities, how they're valuable and when to use them (and when not to).

    As with any other skill, if you can't do something, it can be frustrating to peers. I don't want colleagues wasting time doing things that are automatable.

    I'm not suggesting anyone should be cranking out 10k LOC in a week with these tools, but if you haven't yet done things like set one loose in an agentic loop to produce a minimal reprex of a bug, or pin down a performance regression by testing code on different branches, then you could potentially be hampering the productivity of the team. These are examples of things where I now have a higher expectation of precision, because it's so much easier to do more thorough analysis automatically.
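
    A non-agentic baseline for the branch-testing part, for anyone who hasn't seen it, is `git bisect run`, which binary-searches the commit range and runs your test for you. A minimal sketch on a throwaway toy repo (the file `n.txt`, the commit messages, and the "regression test" are all invented for illustration):

    ```shell
    set -e
    # Build a toy repo with five commits; pretend commits where n.txt >= 3
    # carry the regression we're hunting.
    repo=$(mktemp -d) && cd "$repo"
    git init -q
    git config user.email demo@example.com
    git config user.name demo
    for i in 1 2 3 4 5; do
      echo "$i" > n.txt
      git add n.txt
      git commit -qm "commit $i"
    done
    # bad = HEAD, good = four commits back; bisect reruns the test script
    # at each step until it finds the first failing commit.
    git bisect start HEAD HEAD~4
    out=$(git bisect run sh -c 'test "$(cat n.txt)" -lt 3')
    echo "$out" | grep "first bad commit"
    ```

    The whole protocol is the script's exit status: 0 means "this commit is good", non-zero means "bad". Here it reports "commit 3" as the first bad one.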

    There are always caveats, but I think the point stands that people generally like working with other people who are working as productively as possible.

  • another-dave 2 hours ago
    On the other hand, when Cloud Computing started to come in, I knew a bunch of sysadmins. Some were in the "it'll never take off" camp, and no doubt they know it now, kicking and screaming.

    But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.

    Similarly with mobile dev. As a Java dev at the time that Android came along, I didn't keep abreast of it - I can always get into it later. Suddenly the job ads were "Android Dev. Must have 3 years experience".

    Sometimes, even just from self-interest, it's easier to get in on the ground floor when the surface area of things to learn is smaller than it is to wait too long before checking something out.

    • bigstrat2003 19 minutes ago
      > But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.

      lol no. There's nothing actually different about managing VMs in EC2 versus managing physical servers in a datacenter. It's all the same skills, and anyone who is competent in one can pick up the other with zero adjustment.

    • aleph_minus_one 1 hour ago
      > On the other hand, when Cloud Computing started to come in, I knew a bunch of sysadmins. Some were in the "it'll never take off" camp and no doubt they know it now, kicking and screaming.

      > But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.

      From a technological perspective, these sysadmins were right: in nearly all cases (exception: you have a low average load, but it is essential that the servers can handle huge spikes in the load), buying cloud services is much more expensive overall than using your own servers.

      The reason cloud computing took off is that many managers believed much more in the marketing claims of the cloud providers than in the technological expertise of their own sysadmins.

      • bigstrat2003 14 minutes ago
        Yeah, most companies still should not be using cloud for stuff. I think a lot of people here who work for startups don't get that, because cloud makes a ton of sense if you're a new business that has a hundred other higher priorities than spending capital on infrastructure and sysadmins. But for most companies? They are better off with on prem.
    • SoftTalker 1 hour ago
      > Suddenly the job ads were "Android Dev. Must have 3 years experience".

      So just read up on it and say you do. They don't really need 3 years experience, so you don't really need to have it.

      • mablopoule 22 minutes ago
        I'm not sure why you're downvoted, but this is the right take IMO. I hate cheating and lying in general, but for any job posting you have to separate the actual requirements in terms of knowledge from what can realistically be learned on the job / by doing a prototype in a weekend.

        Of course, don't commit fraud by pretending you're a statistician when you have absolutely no mathematical background, but also don't take the "Must have {x} years of experience in {y} tech" requirement at face value when you know you have the necessary work experience to get a good grasp of it in a few weekend prototypes, and you also know that the job doesn't actually require deep expertise in that particular tech.

        I did the same for my first React.js job, and I didn't feel bad because 1) I was honest about it and did not sell myself as a React expert, and 2) I had 10 years of front-end development behind me, and I understood web dev well enough not to be baffled by hooks or by the difference between a shallow copy and a deep copy of a data structure, so passing the technical test was good enough.

      • bitwize 1 hour ago
        Employers check your work history. You'd better be able to back up having the amount of experience they require with paid, value-delivering work at past employers, or they'll pass.
        • SoftTalker 54 minutes ago
          Many actually do not check anything.
          • bitwize 47 minutes ago
            I've never actually encountered one who didn't. Just like I've never been able to actually quit on Tuesday and walk into my next role on Thursday the way Hackernews told me "any halfway decent developer" should be able to do. They tend to ask about this sort of thing in interviews too and if you prove not to have the required background, you are considered weak and filtered out.

            Anyways, checking happens often enough that the risk of being considered a liar and a fraud for claiming experience you don't have is high.

  • picafrost 21 minutes ago
    I am increasingly feeling okay with the idea of being left out. The worst parts of working professionally in a software development team have been amplified by LLMs. Ridiculously large PRs, strong opinions doubled down due to being LLM-"confirmed", bigger expectations coming from above, exceptionally unwarranted confidence in the change or approach the LLM has come up with.

    I am dying inside when I make a comment and receive a response that has clearly been prompted toward my comment and possibly filtered in the voice of the responder if not copied and pasted directly. Particularly when it's wrong. And it often is wrong because the human using them doesn't know how to ask the right questions.

    Fortunately, most of the fundamental technological infrastructure is well in place at this point (networking, operating systems, ...). Low skilled engineers vibe coding features for some fundamentally pointless SaaS is OK with me.

  • linsomniac 2 hours ago
    >I didn't use Git when it first came out.

    This really hinges on what you mean by "didn't use git".

    If you were using bzr or svn, that's one thing.

    If you were saving multiple copies of files ("foo.old.didntwork" and the like), then I'd submit that you're making the point for the AI supporters. I consulted with a couple developers at the local university as recently as a couple years ago who were still doing the copy files method and were struggling, when git was right there ready to help.
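
    For anyone still on the copy-files method: the whole "foo.old.didntwork" dance collapses into a couple of git commands. A minimal sketch on a throwaway repo (file names and contents invented for illustration):

    ```shell
    set -e
    # Keep ONE file and let git remember every version,
    # instead of foo.py, foo.py.old, foo.py.didntwork, ...
    repo=$(mktemp -d) && cd "$repo"
    git init -q
    git config user.email demo@example.com
    git config user.name demo
    echo "the working version" > foo.py
    git add foo.py
    git commit -qm "first working version"
    echo "a risky experiment" > foo.py
    git commit -qam "try a risky change"
    # The experiment didn't work? Restore the last good version:
    git checkout -q HEAD~1 -- foo.py
    cat foo.py
    ```

    Every "didn't work" state stays recoverable from history (`git log`, `git show`), without a directory full of stale copies.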

    • intrasight 2 hours ago
      Pretty sure he meant he didn't use Git.

      I'm still stuck with TFS and SVN in my day jobs but use Git on and off on side projects. I really wish all my clients would just switch to Git.

      • ambicapter 2 hours ago
        Git is old and busted, jj is the new hotness.
    • jmye 2 hours ago
      Was "when it first came out" a confusing limiter, there? I guess it seems like the article was making the point that jumping on git the day it came out could've just meant learning the betamax of repos, and that it was better to wait a bit to see what mature tech looked like, rather than waiting until 202X to realize that saving multiple file versions was suboptimal.

      I don't understand how this, at all, makes "the point" for anyone.

      • LeonidasXIV 1 hour ago
        Using Git when it came out would have probably meant to use Cogito, which has been dead for such a long time by now.

        Or have bet on Mercurial. Which is also close to dead. Or darcs, which has been big in certain environments and now practically extinct.

  • eqmvii 2 hours ago
    A lot of people feel this way.

    But IMO the most fruitful thing for an engineering org to do RIGHT NOW is learn the tools well enough to see where they can be best applied.

    Claude Code and its ilk can turn "maybe one day" internal projects into live features after a single hour of work. You really, honestly, and truly are missing out if you're not looking for valuable things like that!

    • lopis 2 hours ago
      You're only missing out if that's what you want to do. Not every software developer is interested in creating new software projects from scratch in an hour, or at all. It's totally fine to do software development as a job, then close your laptop and not see it until Monday. Learn the tools that suit you when you need them.
      • remus 1 hour ago
        > You're only missing out if that's what you want to do.

        Who writes software and doesn't have a list of "I'll fix this one day" issues as long as their arm?

        This is honestly one of the things I enjoy most at the moment. There are whole classes of issues where I know the fix is probably pretty simple but I wouldn't have had time to sort it previously. Now I can just point Claude at it and have a PR 5 mins later. It's really nice when you can tell users "just deployed a fix for your thing" rather than "I've made a ticket for your request" while their issue sits on the never-ending backlog pile and might get fixed in 5 years' time if they're lucky.

    • stiiv 2 hours ago
      > Claude Code and its ilk can turn "maybe one day" internal projects into live features after a single hour of work. You really, honestly, and truly are missing out if you're not looking for valuable things like that!

      You're right, it's possible. But you might be both overestimating the ease of onboarding and underestimating the variety of tasks and constraints devs are responsible for.

      I've seen Claude knock out trivial stuff with a sufficiently good spec. But I've also seen it utterly choke on a bad spec or a hard task. I think these outcomes are pretty broadly established. So is the expectation that the tech will get better. Waiting isn't unwise.

      • smugma 1 hour ago
        Waiting may not be “unwise”, but acting now may be optimal. Even though tooling may be much better in 12 months, if it can improve quality or save time now, that’s a net benefit.

        Riders in the Tour de France used to not wear helmets; wearing one was seen as uncouth (“why jump on the bandwagon?”). Helmets today are way better than they were then. But if the utility provided is greater than the cost, of course it makes sense to act sooner.

        I’m not explicitly arguing for investing in AI or other newfangled tech. I’m arguing that the premise of waiting may be “sound” but also “leaves money on the table”, or in some cases, lives.

        The author talks about vaccines as a counter example but doesn’t really address the cost/benefit in any detail.

  • babarock 2 hours ago
    I don't understand the rush to be "the first". Facebook isn't the first social network, Google isn't the first search engine, the iPhone is not the first smartphone, Windows is not the first OS; the list goes on.

    Clearly there's an advantage for being an early adopter, but the advantage is often overblown, and the cost to get it is often underestimated.

  • tracker1 19 minutes ago
    I remember working on a few early tools for VRML towards the end of the '90s... It was cool, but far from great... I was reminded of it by the mention of VR in the article.

    That said, my only regret with Bitcoin was deleting my early wallets when I realized the coins were only worth $.25 ... if I'd had any inkling what they'd be worth someday, I'd probably have just bought $1000 worth back then and zipped it up until closer to today. I'm truly curious how many bitcoins were similarly deleted from existence.

  • etwigg 33 minutes ago
    I don't think the "craftsman" self-identification is going to work for software engineers anymore. The tool capabilities are too dynamic, you have to be some sort of opportunistic pirate/entrepreneur. Sure you can jump in and get up to speed on some aspect of the toolchain later on, but the identity shift is the hard and slow part that I think it's wise to get started on ASAP.
  • hsaliak 1 hour ago
    My experience so far tells me that the default path with AI tooling is that it lets us create without learning. So the author is right in that they can pay for a seat in this revolution whenever they want.

    A practitioner with more experience may be a few percentage points more productive, but the median path (grab a subscription, get the tool, prompt) will be mostly good enough.

    • nclin_ 1 hour ago
      I think this is true at the solo developer scale, but I suspect experience will be much more evident when working with a team.
  • ge96 49 minutes ago
    If this is about vibe-coding.

    I remember when React was the hotness and I was still using jQuery. I didn't learn it immediately; maybe a couple years later is when I finally started to use React. I believe this delayed my chance of getting a job, especially around that time when hiring was good, e.g. 2016 or so.

    With vibe-coding it just sucks the joy out of it. I can't feel happy if I can just say "make this" and it comes out. I enjoy the process... which, yeah, you can say it's "dumb/a waste of time" to bother typing out code with your hands. For me it isn't just about "here's the running code"; I like architecting it, deciding how it goes together, which, yeah, you can do with prompts.

    Idk I'm fortunate right now using tools like Cursor/Windsurf/Copilot is not mandatory. I think in the long run though I will get out of working in software professionally for a company.

    I do use AI though, every time I search something and read Google's AI summary, which you'd argue it would be faster to just use a built in thing that types for you vs. copy paste.

    Which again... what is there to be proud of if you can just ask this magic box to produce something and claim it as your own. "I made this".

    Even design can be done with AI too (mechanical/3D design) then you put it into a 3D printer, where is the passion/personality...

    Anyway yeah, my own thoughts, I'm a luddite or whatever

    • ge96 40 minutes ago
      It's funny too you'll hear people say they reverse-engineered something (not related to today's post) and you're like "wow that's impressive" but it turns out AI did it.
  • JumpCrisscross 1 hour ago
    “There are a 16,000 new lives being born every hour. They're all starting with a fairly blank slate. Are you genuinely saying that they'll all be left behind because they didn't learn your technology in utero?”

    This is a great framing.

    • themacguffinman 26 minutes ago
      Isn't that what is implied by the toughest job market yet for junior level candidates? Author is very confident that the answer to his question is "no".
      • mablopoule 3 minutes ago
        I think the implication is that even though the technological landscape is evolving, it's not as if people born in the 60's couldn't foray into computer science because they arrived too late to study the ENIAC first.
    • furryrain 44 minutes ago
      I'm less convinced.

      To keep and/or increase my current compensation, I have to be competitive in the software development market.

      (Whether I need AI to remain competitive is another matter.)

      The 16,000 new babies will be competing in different markets.

      Oh, and of those 16,000 babies, many are born in far less fortunate circumstances, they're already far behind their cohort. :/

  • MarkusWandel 2 hours ago
    In general, a good strategy is just staying a little bit behind. Let the new fads play themselves out. Some have staying power. Bitcoin never did turn into a usable currency, just another speculator's toy. Luckily I am - so far - in a position where I can watch the AI thing from the sidelines to see how it plays out.
  • taraharris 8 minutes ago
    As long as it's not coupled with calls to tax and regulate those who do get in early and reap benefits from doing so, this is good and healthy.

    (I'm not the earliest adopter of crypto and AI by any means. I only rode up crypto a couple of times for 2X and 3X kinda gains on my investment, and I only started using Claude last year.)

  • muskstinks 1 hour ago
    Crypto was interesting to think through, and it was clear very early on how many flaws it has. It basically just moved the goalposts a level deeper, and it was quite an eye-opener how few people even understood the major flaw of crypto: you can only use it safely for things that stay on the blockchain, and it has not solved any real issue off the blockchain (which means you can literally just send crypto to each other, and that's it).

    But AI is a beast.

    It's A LOT to learn: RAG, LLMs, architecture, tooling, ecosystem, frameworks, approaches, terms, etc., and this will not go away.

    It's clear today already, and it was clear with GPT-3, that this is the next thing, and in comparison to other 'next things' it's arriving in the perfect environment: the internet allows for fast communication, and manufacturing has never been as fast, flexible, and globally scaled as it is today.

    Which means whatever the internet killed and changed, will happen / is happening a lot faster with ai.

    And tbh, if someone gets fired in the AI future, it will always be the person who knows less about AI and less about how to leverage it than the other person.

    For me personally, i just enjoy the whole new frontier of approaches, technologies and progress.

    But I would recommend EVERYONE to regularly spend time with this technology. Play around regularly. You don't need to use it, but otherwise you will not gain any gut feel for one model versus another, and there will be a lot to learn when it crosses the line for whatever you do.

    • danny_codes 48 minutes ago
      That hasn’t been my observation and seems unlikely to be true. I suspect AI is a wicked learning environment; narrow experience may hinder rather than help.
  • mmmore 1 hour ago
    One hidden premise of this is "AI tools are not useful now, even if they might be in the future." For example:

    > Few are useful to me as they are now.

    Except current AI tools are extremely useful, and I think you're missing something if you don't see that. This is one of the main differences between LLMs and cryptocurrency: cryptocurrencies were the "next big thing", always promising more utility down the road, whereas LLMs are already extremely useful. I'm using them to prototype software faster, Terence Tao is using them to formalize proofs faster, and my mom's using them to do administrative work faster.

    • edent 42 minutes ago
      Cool. But every time I use them I get a wrong answer faster.

      I know, I know. I'm prompting it wrong. I'm using the wrong model. I need to pull the slot-machine arm just one more time.

      I know I'm not as clever as Terence Tao - so I'll wait until the machines are useful to someone like me.

  • asim 1 hour ago
    I'm healthily skeptical of new technology. Meaning I'm not the early adopter. But I've also found over the years I don't get left behind. I become curious at the time things are stabilising. Maybe on the cusp where there's still a lot of pushback but there's also clear value. Crypto in 2014-2017. AI in 2023-2024. You don't have to feel FOMO but if you're a technologist, if you have a healthy desire to evolve, change and learn then you'll naturally pick things up. I went from total crypto skepticism in 2014 to investing most of what I had. I went from total AI skepticism to doing RAG for the Quran and agentic tech for the small web. I think there's value in staying true to who you are but also naturally discovering and learning on your own timeline.
  • bonoboTP 2 hours ago
    That's a reasonable strategy. I don't think spreading FOMO is good. But pragmatically, I enjoy working with the latest crop of AI models regarding all sorts of computer tasks, including coding but many other sysadmin stuff and knowledge organization.

    I didn't pick them up until last November and I don't think I missed out on much. Earlier models needed tricks and scaffolding that are no longer needed. All those prompting techniques are pretty obsolete. In these 3-4 months I got up to speed very well, I don't think 2 years of additional experience with dumber AI would have given me much.

    For now, I see value in figuring out how to work with the current AI. But next year even this experience may be useless. It's like, by the time you figure out the workarounds, the new model doesn't need those workarounds.

    Just as in image generation maybe a year ago you needed five loras and controlnet and negative prompts etc to not have weird hands, today you just no longer get weird hands with the best models.

    Long term the only skill we will need is to communicate our wants and requirements succinctly and to provide enough informational context. But over time we have to ask why this role will remain robust. Where do these requirements come from, do they simply form in our heads? Or are they deduced from other information, such that the AI can also deduce it from there?

  • mazone 1 hour ago
    Many companies like to compare themselves to Apple and at the same time say they are disruptors and innovators, but Apple is probably the best company at being okay with being left behind. Many think of them as experts in products, but to me they've always been best at copying what others are doing and refining it; maybe not necessarily better technically, but always seeing the market fit better than others. Like poker: the later you have to make your decision, the more information you have.
  • mathgladiator 1 hour ago
    I heard from a senior leader at Amazon that "Today, I am choosing how I fail". This has echoed in my head for many years.

    At any moment, you are failing at thousands of things that you may not even know about, and that is the gist of what I took away from it. The thing is that you have to be OK when you intentionally choose to not invest in something as regret is ultimately a poison.

    The other thing is this: you are not obligated to bring people with you and you have a choice of free association.

  • dgxyz 2 hours ago
    I make my money cleaning up all the stupid fads. The tail end of the curve is profitable.
  • quantified 2 hours ago
    > I wrote my MSc on The Metaverse. Learning to built VR stuff was fun, but a complete waste of time. There was precisely zero utility in having gotten in early.

    Wonderful life lesson on hype cycles. I am curious if hype literacy will join media literacy in academia.

  • redm 2 hours ago
    I agree with the sentiment of this article.

    Sadly, I'm still disagreeing while crypto kiddies drive past me in Lambos. If it's the future of money, yes, we'll get there eventually, but like every technology shift, there's a lot of money to be made in the transition, not after. *

    * I sold all crypto a few years ago and I'm a happier person :D

    • lich_king 2 hours ago
      Alternatively, there's money to be lost in the transition. The vast majority of "crypto investors" did not walk away from it any richer. Some folks have gotten lucky, but it's just that: their thesis about the future of money was evidently wrong, they just happened to get the timing right. Getting lucky for the wrong reasons is not a good investment strategy.

      Meanwhile, the main category of people who have consistently gotten rich off the "crypto revolution" were various scammers and pump-and-dumpers who have since moved on to meme stocks, AI content farming, and so on.

      But I wouldn't use crypto as a benchmark because AI has more substance. We can debate if it's going to change the world, but you can build some new types of businesses and services if you have near-perfect natural language comprehension on the cheap.

    • codingdave 2 hours ago
      When you see people making money from something you did not get involved in, just remember: Someone will always be luckier than you, wealthier than you, etc. It doesn't matter. Measure yourself by your own life satisfaction, not by how others are doing.

      That is why I agree with the sentiment as well. I use AI a little. Not too much. And I'm as swamped with work as ever because my focus is on legacy stacks, where AI is really not strong.

    • topaz0 1 hour ago
      Others are pointing out that many also lost money, but I think we can say something even stronger, which is that the people that got rich only did so by taking the money of the people who lost money on it. If it's the future of money, you'll be able to buy in at a stable valuation, neither winning nor losing in the transition -- that's kind of a key property of money.
    • Aurornis 2 hours ago
      > Sadly, I'm still disagreeing while crypto kiddies are driving past me in lambo's.

      I know a few people who got wealthy by being early to crypto. None of them had the correct reasoning at the time: They thought BTC was going to become a common way to pay for things or that “the flippening” was going to see worldwide currency replaced with BTC. They thought they’d be kings in a new economy but instead they’re just moderately wealthy with a large tax bill they’re determined to dodge.

      I know far more people who lost money on crypto, though. Some were even briefly crypto-rich but failed to sell before the crash or did things like double down on the altcoin bubble.

      The second group has gone quiet about their crypto, while the few people in the first group gloat and evangelize (because continued evangelization is necessary to keep their portfolios pumped). This creates an intense survivorship bias where it appears that all the crypto kiddies are wealthy, while a quiet mass of people who played with crypto are most definitely not.

  • waynecochran 43 minutes ago
    This is one of those posts I would like to look back on in a year or two. I am usually a late adopter with everything. This time I think it's different. I am seeing what AI can do with my own eyes. I am creating new things at light speed and figuring out how this all works. I don't think you want to be late to the party on this one.
  • pgwalsh 1 hour ago
    Solid piece, very wise. AI is fun but where I find it useful with code is first writing more thorough comments and then writing the base of your tests.

    Writing actual code that's efficient is iffy at times, and you'd better know the language well or you'll get yourself in trouble. I've watched AI make my code more complex and harder to read. I've seen it put an import in a loop. It's removed the walrus operator because it doesn't seem to understand it. It's used older libraries or built-ins that are no longer supported. It's still fun and does save me some time with certain things, but I don't want to vibe code much because it takes the joy out of what you're doing.
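    The walrus case is a good concrete example. Here's a minimal, hypothetical sketch (not my actual code) of the pattern and the expansion the AI keeps rewriting it into; both are correct, but the one-liner keeps the check and the binding together:

```python
import re

log_lines = ["ok", "error: disk full", "ok", "error: timeout"]

# Idiomatic: the walrus operator (:=) binds the match result inline,
# so the test and the value live in one expression.
errors = [m.group(1) for line in log_lines
          if (m := re.match(r"error: (.+)", line))]

# The expanded form an assistant tends to "simplify" it into:
# behaviorally identical, just more ceremony.
errors_verbose = []
for line in log_lines:
    m = re.match(r"error: (.+)", line)
    if m:
        errors_verbose.append(m.group(1))

assert errors == errors_verbose == ["disk full", "timeout"]
```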

  • wolvesechoes 1 hour ago
    The biggest issue is - you will be left behind, in the end. This is the race you cannot win. You can try as much as you like, spend free time trying to catch up, and you, most likely, will lose. If you play this game, you've already lost.

    I am actually surprised by people willingly trying to be more productive, like... machines. And then crying when machines are proven to be better at being machines than meatbags.

  • nslsm 1 hour ago
    >There are a 16,000 new lives being born every hour. They're all starting with a fairly blank slate.

    No, they are not.

  • cjbgkagh 1 hour ago
    > There are a 16,000 new lives being born every hour. They're all starting with a fairly blank slate. Are you genuinely saying that they'll all be left behind because they didn't learn your technology in utero?
    >
    > No. That's obviously nonsense.

    That does not obviously follow. I do worry about the ever-increasing proportion of humanity who are no longer 'economically viable', and this includes people who are not yet born.

  • jedberg 1 hour ago
    It's more about job seeking than anything. If you jump on a fad early, and it turns out to be the winner, when you're looking for work you can say you have X years of experience with it, which will be a few more than most of the other candidates.

    It also shows a passion for learning and improvement, something hiring managers are often looking for signals of.

    But of course it's a trade off. This rewards people who don't have family or other obligations, who have time to learn all the new fads so they can be early on the winners.

  • cheevly 55 minutes ago
    I've been using AI/LLMs for 3 years non-stop and feel like I've barely scratched the surface of learning how to wield them to their full potential. I can’t imagine the mindset of thinking these tools don’t take extreme dedication and skill to master.
  • grim_io 1 hour ago
    Devs who never mentored or never had to delegate/explain the work to be done to someone else, might be in for a rough first few weeks/months.

    It is a skill, but not a special AI specific skill.

  • bdcravens 2 hours ago
    This is fine so long as you don't confuse stubbornness for caution. As technologies lose favor and others suggest you expand your toolset, don't post about your frustration while you're standing in the unemployment line.
  • A_Duck 2 hours ago
    Crypto isn't bad because it failed to make early adopters rich — it did make them rich. It's bad because it has horrible externalities in scams, war crimes / sanctions evasion, organised crime — which most of those early adopters were well aware of.
    • edent 2 hours ago
      You can make pretty much the same arguments for LLMs. Pollution, corruption, and war are all things I'm keen to avoid.
    • emoII 2 hours ago
      I think you can actually replace crypto with LLMs/Diffusion models in this sentence and it will still hold true
    • aduwah 2 hours ago
      Not to defend crypto, but this all applies to fiat too
      • hparadiz 2 hours ago
        For whatever reason the folks on here refuse to understand that for a big chunk of this planet, people get paid in inflationary currencies that they used to have to immediately convert to dollars and then stuff under a mattress, because the local banking system is corrupt. In that environment cryptocurrency is a godsend.
    • gonzalohm 2 hours ago
      All those apply to AI too, no?
    • xondono 2 hours ago
      The way early adopters got rich was by scamming others, so not sure I see your differentiation there
  • lowken 1 hour ago
    There’s a lot of truth to this post. I’m very pro AI, and I believe everyone should get comfortable with it because it’s not just the future, it’s already the present. If you want to stay competitive in today’s workforce, AI is going to be part of your toolkit.

    But on the other hand... I also only learned git when I needed it at a new job... So we can pump the brakes a bit.

  • nsfmc 2 hours ago
    > For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.

    i'll just say, and i understand this is not the point of the article at all, but for all its faults, if you got in on flash as early as html 2.0 and you were staring at the upcoming dead-end of flash in, say, 2009, you had also been exposed by that time to plenty of javascript, e4x, and what were essentially entirely clientside SPAs, giving you a sort of bizarro preview of react a couple of years early. honestly, not a bad offramp even if flash itself didn't make it.

  • markbnj 1 hour ago
    Adoption of a new technology has always sorted itself into buckets: early adopters, mainstream adopters, and late adopters. I think this post is just demonstrating the mindset of the last group.
  • halapro 1 hour ago
    The last comment is ridiculous. Newborns have a literal lifetime to catch up—rather, they will learn what they need when they need it.
  • al_borland 1 hour ago
    I prefer to move slower. I've accepted that I'm not going to create some unicorn startup (that was never an aspiration). As an employee at a company, my goal is to focus my time learning the things that are relevant to my job and that will be useful for 10 years, not 10 weeks.

    Chasing every new tech will lead to burnout and disillusionment at some point.

    AI probably isn't going away in the same way NFTs largely did, and I use it to some degree. However, I don't see a lot of value in being on the bleeding edge of AI, as the shape of the skills that will be used for the next 10 years is still forming. Trying to keep up now means constantly adapting how I work, where more time is spent keeping up with the changes in AI than actually doing something useful with it.

    After the bubble pops, I think we'll start to see a much more clear picture of what the landscape of AI will look like long-term. Who are the winners, who are the losers, and what tools rise to the top after the hype is gone. I'll go deeper at that time.

    Right now, the only thing I'm allowed to use at work is Copilot, so I just use that and don't bother messing around with much more in my free time.

    • 1PlayerOne 1 hour ago
      Same situation here, Copilot the only available option. But I get the feeling of things moving ahead...
  • vinayaksodar 1 hour ago
    While you can certainly take a wait and watch approach on many things you also have to take a strategic bet on some otherwise you will never be at the forefront of any field.
  • DonsDiscountGas 1 hour ago
    I started working in AI/ML about ten years ago. Reasonably early. Today, professionally and financially I'm doing about as well as a typical programmer. I find the field interesting so I have no regrets but I tend to agree with OP.
  • somenameforme 2 hours ago
    The irony is that if LLMs live up to their potential then the value of software development as a skill is going to plummet, at least as far as something to do for others. I say it's ironic because obviously the people most interested in using LLMs for software development are software developers, and most are not working independently. It'd be like if we were all proactively getting involved in training our own replacements.

    I was highly skeptical of this happening not that long ago, but I have to say that it seems increasingly likely. LLMs are still quite mediocre at esoteric stuff, but most software development work isn't esoteric. There's the viable argument that software development largely isn't about writing code, but the ability to write code is what justifies software developer salaries, because there's a large barrier to entry there that most just can't overcome. The 80/20 law seems to apply to everything, certainly here - 80% of your salary is justified from 20% of what you spend your time doing.

    It's quite impossible to imagine what this will do to the overall market, because while this sounds highly negative for software developers, we're also talking about a future where going independent will be way easier than ever before, because one of the main barriers for fully independent development is gaps in your skillset. Those gaps may not be especially difficult, but they're just outside your domain. And LLMs do a terrific job of passably filling them in.

    It'd be interesting if the entire domain of internet and software tech plummets in overall value due to excessive and trivialized competition. That'd probably be a highly disruptive but ultimately positive direction for society.

  • lionkor 1 hour ago
    I had this with Rust. I always saw the huge hype, especially some years ago, and it was hugely off-putting. Ridiculous projects like rewriting SQLite, famous for its full branch-test coverage, in Rust, or rewriting the GNU coreutils, always spamming "blazing fast" and "written in Rust (crab emoji)", were very, very hostile to a C++ developer.

    When I eventually got around to using Rust, I was hooked, and now I don't use C++ anymore if I can choose Rust instead. The hype was not completely unjustified, but it was also misplaced, and to this day I disagree with most of those hype projects.

    It was no issue to silently pick up Rust, write some code that solves problems, and enjoy it as a very very good language. I don't feel a need to personally contact C or C++ project maintainers and curse at them for not using Rust.

    I do the same with AI. I'm not going around screaming at people who dare to write code by hand, going "Claude will replace you", or "I could vibe code this for 10 bucks". I silently write my code, I use AI where I find it brings value, and that's it.

    Recognize these tools for what they are: Just tools. They have use-cases, tradeoffs, and a massive community of incompetent idiots who like it ONLY because they don't know better, not because they understand the actual value. And then there's the normal, every day engineers, who use tools because, and ONLY because, they solve a problem.

    My advice: Don't be an idiot. It's not the solution for all problems. It can be good without being the solution to every problem. It can be useful without replacing skill. It can add value without replacing you. You don't have to pick a side.

  • fdghrtbrt 2 hours ago
    Guy who's ok with being left behind (crypto, AI) did a MSc on the metaverse. Sounds like he tried to go with the hype once, got burned.
    • edent 2 hours ago
      You're welcome to read my MSc and see if I was "going with the hype" https://arxiv.org/abs/2304.10542

      The answer was quite the opposite. I wanted to see if the technology lived up to the hype. The answer, unsurprisingly was no. If only Zuck had listened to me :-)

      • fdghrtbrt 2 hours ago
        Wanna comment on why you'd be spending your time and energy doing free work for facebook?
    • giraffe_lady 2 hours ago
      Some might call that learning from our mistakes.
      • jmye 1 hour ago
        People don't believe in that anymore. If your gut instinct was wrong, you're not only bound to it for all time, but you'd best angrily double down at every opportunity. God forbid you "flip flop" or consider new information, or whatever.
        • topaz0 1 hour ago
          Fortunately the LLM will always see that your gut instinct was right all along.
  • sktrdie 22 minutes ago
    So? Economy is entertainment. When crypto was hype, billions were made and burned from building whatever entertaining thing around that. Now it's AI's turn. Billions will be made and burned. Economy is just a fun game. Let's have fun. The idea that everything needs to be "useful" is highly subjective. What is truly useful? Is it food? Shelter? Medicine?
  • romaniv 2 hours ago
    > weaponisation of FOMO

    This is in an excellent characterization of the kind of marketing tactic I see all over social media right now and that I find absolutely disgusting.

    The keyword here is fear. Despite faux-positive veneer, the messaging around certain technologies (especially GenAI) is clearly designed to induce anxiety and fear, rather than inspire genuine optimism or pique curiosity. This is significant, because fear is one of the most powerful tools to shut down rational thinking.

    The subliminal (although not very subtle) message there is something very primitive. "If you don't join our group, you will soon starve to death." This is radically different from how most transformative technologies were promoted in the past.

    • Theodores 1 hour ago
      I think AI is not quite the same as crypto when it comes to FOMO. At the peak of the craze you could not write on HN that 'crypto is nonsense' unless you wanted to be modded down to oblivion, to be shadow banned forever. I exaggerate, but not much.

      With AI people are able to say 'this is nonsense' without people getting the pitchforks out.

      As for myself, I don't have the bandwidth to learn how to do clever things with AI. I know you just have to write a prompt and it all happens by magic, but I have been burned quite badly.

      First off, my elderly father got tricked out of all of his money and my mother's savings, which were intended for my niece, when she comes of age. It was an AI chatbot that did the deed. So no inheritance for me, cheers AI, didn't need it anyway!

      Then there was the time I wanted to tidy up the fonts list on my Ubuntu computer. I just wanted to remove Urdu, Hebrew and however many other fonts that don't have any use for me. So I asked Google and just copied and pasted the Gemini suggestion. Gemini specified command line options so that you could not review the changes, but the text said 'use this as you can review changes'. I thought the '-y' looked off, but I just wanted to do some drawing and was not really thinking. So I typed in the AI suggestion. It then began to remove all the fonts and the window manager, and the apps. It might as well have suggested 'sudo rm -fr /'.

      This was my wakeup call. I am sure an AI evangelist could blame me for being stupid, which I freely admit to. However, as a clueless idiot, I have been copying and pasting from Stack Overflow for aeons, to never be tricked into destroying all my work.

      My compromise is to allow some fun with cat pictures, featuring my uncle's cat, with Google Banana. This allows me to have a toe in the water.

      Recently I went on a course with lots of people with few of them being great intellects. I was amazed at how popular AI was with people that have no background in coding. They have collectively outsourced their critical thinking to AI.

      I did not feel the FOMO. However, I am old enough to remember when Word came out. I was at university at the time and some of my coursemates were using it. I had genuine FOMO then. What is this Word tool? I was intimidated that I had this to learn on top of my studies. In time I did fire up Word, to find that there was nothing to learn of note, apart from 'styles', which few use to this day, preferring to highlight text and making it bold or biglier. I haven't used a word processor in decades, however, it was a useful tool for a long time.

      Looking back, I could have skipped learning how to use a word processor, to stick to vi, latex and ghostscript until email became the way. But, for its time, it was the tool. AI is a bit like that, for some disciplines, you can choose to do it the hard way, using your own brain, or use the new tools. However, I have been badly burned, so I am waiting it out.

  • sd9 2 hours ago
    > If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.

    In contrast to the current top comment [1], I don't think this is a wise assessment. I'm already seeing companies in my network stall hiring, and in fact start firing. I think if you're not trying to take advantage of this technology today then there may not be a place for you tomorrow.

    I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me. I trust their experience, but in my own experience, it has made things possible in a matter of hours that I would never have even bothered to try.

    Besides the individual contributor angle, where AI can make you code at Nx the rate of before (where N is say... between 0.5 and 10), I think the ownership class are really starting to see it differently from ICs. I initially thought: "wow, this tool makes me twice as productive, that's great". But that extra value doesn't accrue to individuals, it accrues to business owners. And the business owners I'm observing are thinking: "wow, this tool is a new paradigm making many people twice as productive. How far can we push this?"

    The business owners I know who have been successful historically are seeing a 2x improvement and are completely unsatisfied. It's shattered their perspective on what is possible, and they're rebuilding their understanding of business from first principles with the new information. I think this is what the people who emerge as winners tomorrow are doing today. The game has changed.

    Speaking as an IC who is both more productive than last year and, simultaneously, more worried.

    [1] https://news.ycombinator.com/item?id=47454614

    • coldpie 2 hours ago
      > I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me.

      I think it depends on why you do programming. I like programming for its own sake. I enjoy understanding a complex system, figuring out how to make change to it, how to express that change within the language and existing code structure, how to effectively test it, etc. I actively like doing these things. It's fun and that keeps me motivated.

      With AI I just type in an English sentence, wait a few minutes, and it does the thing, and then I stare out the window and think about all the things I could be doing with my life that I enjoy more than what just happened. I find my productivity is way down this year since the AI push at work, because I'm just not motivated to work. This isn't the job I signed up for. It's boring now.

      The money's nice, I guess. But the joy is gone. Maybe I should go find more joy in another career, even if it pays less.

      • sd9 1 hour ago
        Oh, I agree entirely. The new paradigm is entirely unsatisfying to me too. It's not the same work that I trained my entire life to get good at, and the new work is not as fun. I trained to get good at this work because I have loved it since I was first introduced to it at ~10. I would have done it for free, and did, for years.

        Unfortunately that doesn't change my outlook on where all this is headed.

        • coldpie 1 hour ago
          Perhaps, then, you can actually empathize with people who don't get value from it :) I used to enjoy the work, now I don't, so I'm posting on HN and daydreaming about other careers, instead of doing something useful.
          • sd9 1 hour ago
            Yeah, maybe empathise was the wrong word. I certainly empathise with the feelings, I just struggle to see how people cannot use it to get more done.

            I'm also daydreaming about other careers instead of doing something useful.

            • coldpie 1 hour ago
              > I just struggle to see how people cannot use it to get more done.

              To be blunt about it, there's a decent chance I'll be quitting this job later this year, largely because of the AI push. I just hate these tools and I do not want to work this way. Losing an employee is a pretty big cost to the company. I guess the AI stuff is probably worth it to them, but there's a downside to it, too.

              • sd9 1 hour ago
                Yeah I agree with you, and I think a lot of people feel the same. It's totally different now and it's not what I signed up for. Maybe I'll get used to it, idk.

                I hope everything works out well for you.

    • randusername 1 hour ago
      > I find it hard to empathise with people who can't get value out of AI

      > But that extra value doesn't accrue to individuals, it accrues to business owners.

      What is value?

      Is a 2X faster lumberjack 2X as valuable? Sure

      Is a 2X faster programmer 2X as valuable? At what, fixing bugs? Adding features? That's not how the "ownership class" would define value.

      Productivity is a measure of efficiency, not growth. Slashing labor costs while maintaining the status quo is still a big productivity gain.

      • sd9 1 hour ago
        > Slashing labor costs while maintaining the status quo is still a big productivity gain.

        Maybe I didn't express myself properly, but I think we agree, at least on this point?

        Besides this effect, of enabling smaller teams to produce the same results, I think there is a larger effect coming where fundamentally different structures produce the same or better results as last year. I just don't think we've completely figured out what that looks like yet.

    • jghn 2 hours ago
      > between 0.5 and 10

      Hopefully not too many people are "enhanced" to the tune of 0.5x!

      • sd9 2 hours ago
        That was deliberate. Some people report that it slows them down - fine.
  • jwsteigerwalt 2 hours ago
    At some point you commit the time to learn what you need to. I like to think of the analogy to SEO. The veterans in the industry are not who they are because they were at the front of the line. It’s because they have the 15 years of experience under their belt.
  • nkozyra 2 hours ago
    Not advocating for crypto here, but the ROI evaluations here are a bit incongruous.

    The risk of getting in early on crypto is you lose a little money. The risk of not is missing out on money. You can't simply replay that later, the way that you could invest the time to catch up on how git works.

  • abrztam 1 hour ago
    Sure, you can pick up any tool whenever you want. But from your employer's perspective, AI is the best force multiplier since slavery; everything in between still required humans with leverage. The question is whether your boss will need you at all by then.
  • Aldipower 2 hours ago
    WordStar for DOS was great! A lot better than my handwriting. But still, I get the point. :-)
    • badc0ffee 31 minutes ago
      I was a WordPerfect guy, personally.
  • mvrckhckr 2 hours ago
    It’s a personal choice, and both early and late(er) can be valid rational choices if it’s you who is making the choice and not just following a crowd (or even a single person).
  • argee 2 hours ago
    I agree with the conclusion but not with the premise. The conclusion is, "I don't have to be an early adopter," but the premise seems to be "there is zero utility in getting in on anything early."
  • duskdozer 29 minutes ago
    Anyone obsessively insisting that others adopt $tech, with threats that you'll be obsolete, left behind, whatever, is just selling you something. If anything, they should be trying to keep it a secret, so that they stay among the elite few who get outsized benefits from $tech while everyone else plays in the mud.
  • aavci 2 hours ago
    FOMO is making me feel like I should mess around with OpenClaw, but I can't see any use cases that I can't accomplish with other tools. What should I do based on this article?
    • edent 2 hours ago
      I recommend going to the pub with some friends and having a chat about anything that isn't work.
      • aavci 2 hours ago
        Nice blog btw
    • bombcar 2 hours ago
      Don't just do something, stand there!

      Like investing in index funds, a big part of it is psychology of the individual as Jeeves would say.

      Find a way to scratch the FOMO itch without taking on too much risk.

    • kxrm 2 hours ago
      Why do you need a reason to try anything?

      Just go out and prove how useless it is. If, during your testing, you find that it has no good use case, toss it.

      Waiting for others to validate a tech for you is a mistake IMO.

  • Tade0 1 hour ago
    To me the main question is the long term pricing.

    It is said that major providers more than break even on what they're charging.

    But at the same time that's not the point of capitalism, is it? The point is to charge close to the value you're providing.

    My lunch money is approximately $10 and I often blow through as much in Claude tokens generously provided by the company which hired me. But I'm not getting just $10 of value from those tokens; I'm getting much more.

    The cost of entry to this market is extremely high. Should Anthropic win and become a near-monopoly, it is bound to keep increasing prices to the point where the value it's providing matches the cost.

    That's the endgame of every AI company out there. It's worth using these tools now, while there's still competition and moats haven't been established.

    • fantasizr 1 hour ago
      This is where I landed too. At least in the JS framework "wars" it was just a matter of reading the docs and prototyping Angular and React for free. To try and keep up with the latest AI tools you're expected to spend ~$1k / year.
    • StingyJelly 1 hour ago
      Luckily the models were built on copyrighted materials, so hopefully the big players won't have strong legal standing to kill off model distilling. Then, with models like the current DeepSeek or Kimi K2.5, you are perhaps 1-2 years behind in capabilities at a fraction of the cost. For real inference costs, look at prices on OpenRouter. For hobby use, I wasn't able to burn through more than $5 in a month.
    • StilesCrisis 1 hour ago
      I don't think there's a universe where Google stops competing here, though. The second Anthropic gets greedy, there will be alternatives.
    • layer8 1 hour ago
      > But at the same time that's not the point of capitalism, is it? The point is to charge close to the value you're providing.

      The question is how large the value delta will be from open models. And I’m not sure if the cost of entry is really “extremely” high comparatively if as you project the market will be so profitable. Surely investors will see a chance to get a larger piece of that profit. While model training is costly, there is a ceiling imposed by the training material being limited (at least for text). LLMs also don’t have a network-effect moat like social media has, or a web search moat like Google has, or a chip technology moat like TSMC has. It’s unclear if a significant moat will emerge.

    • dist-epoch 56 minutes ago
      NVIDIA's Jensen Huang predicted today that engineers will soon spend $20k in tokens every month.
  • aaurelions 2 hours ago
    Why launch Voyager-1 if, in X years, no matter how far it flies, we’ll catch up to it and overtake it using a new version?
    • bdcravens 2 hours ago
      Without lessons learned from Voyager-1 the successors would never be built.
  • aavci 2 hours ago
    How else can you be in the right place and right time to discover a problem to solve that can’t be seen from afar?
  • dreamcompiler 28 minutes ago
    It amazes me that companies are developing proprietary IP with somebody else's cloud-based AI that ingests and learns from everything that they type and it generates.

    These companies are paying for the privilege of having their IP stolen.

  • tutanosh 1 hour ago
    I understand and empathize with his sentiment, but I think he is missing the point. Using AI effectively as an engineer requires a paradigm shift in terms of how you work. You cannot approach your work as you did in the past, and use AI and expect it to be a big improvement. In fact, if you do that you will likely be disappointed, and worse off. Shifting your paradigm is one of the hardest things you can do, even more so if you have been in the field for a while, but it is also the most rewarding, and opens up many new possibilities. It's not about being left behind, as much as it is about limiting yourself unnecessarily, by staying in your comfort zone.
  • kxrm 2 hours ago
    As someone further down the road in my career, I would argue that waiting is your prerogative but you do so at your own peril.

    I made these kinds of mistakes early in my career: stuck it out with PHP for far too long, ignoring all the changes with frontend design trends, React, etc. I was using jQuery far too late in my career and it really hurt me during interviews. What I was doing was seen as dated and it made ageism far worse for me.

    Showing a portfolio website that was using tables instead of divs.

    I had to rapidly skill up and it takes longer than you think when you stick too long with what works for you.

    If AI truly is a nothing-burger then guess what? Nothing lost, and perhaps you learned some adjacent tech that will help you later. My advice is to NEVER stop learning in this field.

    Learning is your true superpower. Without that skill, you are a cog that will be easily replaced. AI has revealed to me who among my colleagues is curious, and a continuous learner. Those virtues have proven over the course of my 25+ year career in technology to be what keeps you relevant and marketable.

    • bombcar 2 hours ago
      I think the point isn't "wait forever and never learn" but simply "you don't have to be at the forefront of the wave" - because the real waves will lift everything, and you can come in a bit later.

      It is easy NOW to look back and see the optimal path for a web developer, but was that obvious from the start? How many killer technologies lie unused today?

  • josefritzishere 45 minutes ago
    This reasoning is solid and applies equally to AI. I do not need it crammed into every service and forced on me, thank you very much. "If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing, not yours."
  • fantasizr 2 hours ago
    the main value being created is in selling courses and convincing people they're late and need to catch up
  • throwaway330935 2 hours ago
    >what's the point of "getting in early"?

    You're trying to make the point using BitCoin, but in the early 2000s I had just over 14,000 of them, so I can quite clearly see a point in getting in early.

    • bdcravens 2 hours ago
      (Assuming you meant early 2010s, since Bitcoin didn't exist in the early 2000s)

      This underscores a main counter point: dipping your toes and casting a wide net often has a low cost, since back then you could mine (and even purchase) Bitcoin relatively inexpensively. If it hadn't worked out, then it wouldn't have hurt much at all.

    • flatline 2 hours ago
      The bitcoin white paper was released in 2008.
    • jmye 1 hour ago
      The argument about bitcoin was against it being "the future of currency". There is no point, whatsoever, to "getting in early" to that.

      If everyone had been talking about it like the casino that it actually is, then sure - some people made some good bets, and a lot of people made bad ones trying to get in early. Imagine being the person who sold all your bitcoin for whatever other stupid memecoin, to "get in early"?

      It's not a real counter-argument, it's just "I had a lot of dumb luck on this one specific thing, aren't you silly for not guessing as well as me".

  • groundzeros2015 2 hours ago
    I’m surprised nobody has mentioned that Claude is the realization of all this blockchain work - an internet computer you rent time from, where the computation is measured in tokens :)
    • bdcravens 2 hours ago
      You just described time-sharing. Cryptocurrency mining is much different, in that you can commit computation resources and never get paid.
      • groundzeros2015 2 hours ago
        Indeed it’s much more similar to time sharing than cloud compute.

        But time sharing was never as global and accessible. It was a group in a company or in a university sharing a specific machine.

        The currency discussion is different too. Time sharing was just a local fee. Now people are talking about what the tokens themselves can do for them.

        • hparadiz 1 hour ago
          HN is generally sleeping on the next big tech that is coming for running this stuff locally, which is quite performant and will soon be in everything.
  • nromiun 57 minutes ago
    Getting in early on any technology only makes sense if you are building your business on top of it, or you are making money from it in some way. Other than that, it makes sense for the rest of us to wait.

    Of course those that believe that AI will convert into AGI and destroy society as we know it won't be convinced.

  • neya 1 hour ago
    > I'm OK being left behind, thanks!

    > It is 100% OK to wait and see if something is actually useful.

    > I took part in a vaccine trial

    > Getting Jabbed With EXPERIMENTAL SCIENCE!

    This is such a weird article. The author presents so many contradictory anecdotal experiences against the author's own conclusion.

  • dist-epoch 1 hour ago
    The general idea is true, except for this particular technology.

    When AI is easy to pick up and guide, guess what: there will be no need for a programmer to pick it up. AI will be using itself, Claude Manager driving Claude programmers.

    So leverage AI while you can still provide value doing so.

    It's literally a "use it or lose it" situation.

  • surgical_fire 2 hours ago
    Okay, this text was pretty good. Refreshing to read something that doesn't seem written by AI too (would be ironic given the contents).

    The only scenario where I think it pays off to be on top of the hype is if you are chasing money sloshing around the latest hype. You know, the hustle culture thing. If that's not your thing, waiting until things are established (if they ever get there) is harmless.

    And yeah, AI as it is now is at best moderately useful. I use it on a daily basis, but could do without it with little harm.

    • Krssst 2 hours ago
      > Refreshing to read something that doesn't seem written by AI too (would be ironic given the contents).

      As much as I dislike the idea of not writing/checking code I am responsible for, it was a surprise to me to see a few "anti/limited AI in coding" articles that don't pass an LLM detector. (I know those are not perfect, but there's not much else one can do.)

    • bombcar 2 hours ago
      That's really the big difference - if you're a startup or founder (or apparently even a billionaire CEOwner) you gotta chase the hype, because your customers are the VCs/shareholders, and if you are NOT chasing hype they're going to have serious doubts about you.

      As an employee (perhaps even a highly stock-option-compensated one) the equation is very different. Perhaps you're aligned if you're an employee of a startup or an AI-obsessed company. But the vast majority are not.

  • hota_mazi 1 hour ago
    Whenever I hear "It's never too late to do X", I can't help but think "Well in this case, there is no harm in waiting a bit longer, is there?".
  • bravetraveler 1 hour ago
    It's a problem of motivation, all right? Now if I work my ass off and Initech ships a few extra units, I don't see another dime, so where's the motivation? And here's something else, Bob: I have eight different bosses right now.
  • A_Duck 2 hours ago
    I'm upvoting because it's useful to see and debate this viewpoint — shared by many engineers I know

    I do think it's a bad take though. Not all new trends are the same: the metaverse was an obvious flop and crypto hasn't found practical applications. AI isn't like those because it's already practically changed the way I get my job done.

    It takes time to learn skills, and getting started earlier means more time to use them in your working life.

    • mehagar 2 hours ago
      The way I see it is - AI still makes mistakes, and I have to know how things work at some point anyway. So I'd rather spend my time actually understanding fundamentals (in my case, CSS at the moment), than trying to keep up with the frequently changing AI tools and models.

      Once the tools and models stabilize more (as well as the pricing model), there's less risk in me learning something that is no longer relevant.

      Except when I choose to wait on learning how to use AI tools effectively, I get told I am going to be "left behind".

  • nubg 1 hour ago
    His ignorance is my first-mover opportunity!
  • rileymichael 2 hours ago
    i’ve said this before, but the “left behind” narrative is FUD nonsense. as an llm avoider i’ve never felt further _ahead_ than now. all of my peers who never bothered to learn their tools (which gave tangible benefits) have opted into deskilling themselves further.

    it’s readily apparent who has bought into the llm hype and who hasn’t

  • 0xblinq 2 hours ago
    Comparing these tools to the crypto or NFTs hype is so out of touch with reality.

    This is more on the scale of the invention of the printing press, the telegraph, or the internet itself.

    "I'm ok being left behind, I will join this Internet thing when it really becomes useful"...

    Ok... you do you. Hope you don't get there too late.

    • jmye 1 hour ago
      > Hope you don't get there too late.

      Too late for what? Could no one start a viable internet business in 2005, or were they all taken in 1998? Is it impossible to learn machine learning today, if you weren't jumping into Tensorflow in 2015? Do you think it's impossible to learn OpenClaw today, if you weren't playing with it six months ago, and do you think there might not be a successor that "wins" and is easier to learn and use six months from now, or will I have "gotten there too late" to possibly leverage or learn agents?

      I just don't understand what it is you think anyone will be too late for, unless this is just self-justification and snide ego-boosting.

      • dist-epoch 52 minutes ago
        In 1998 or 2005, two people could single-handedly start a Google or a Facebook. That's not possible anymore today on the Internet.

        But in AI a single person created OpenClaw.

        It's called low-hanging fruit.

    • WolfeReader 1 hour ago
      The behaviors of NFT advocates and AI advocates are shockingly similar.

      Remember how NFTs were supposed to be the future of art ownership, and all it amounted to was awful pictures of bored apes and ahegao llamas? The NFT bros proudly displayed their shitty art - not because it was good, but because it signaled their allegiance.

      Now go on to any pro-AI blog. Look at the images. They've stopped trying to edit out the AI errors - they proudly display images with garbled text and bad anatomy. Just like before, it signals allegiance to AI consumption.

      Even the last sentence of your post is the same sentiment as "have fun staying poor" was for the crypto bros.

      • polothesecond 1 hour ago
        > The behaviors of NFT advocates and AI advocates are shockingly similar.

        It’s the same people every time. Stupid, gullible idiots.

        Remember when HN was obsessed with the room-temperature superconductivity fraud a few years back? Remember the zealous indignation at anyone suggesting skepticism? The empty attempts to downplay their rabid stupidity afterwards?

  • make_it_sure 2 hours ago
    Some people are not OK. Some people lose their jobs and suffer because they are too complacent and it's too uncomfortable to adapt.

    This is the lazy guy's path, not the wise one.

    • giraffe_lady 2 hours ago
      People mostly lose their jobs and suffer through no failure they were positioned to avoid, and often through no fault of their own at all. Spending your career chasing fads out of fear of being abandoned to penury is at least as limiting as mere conservatism towards new technologies. The risk is real and the fear is valid but that isn't the solution, no individual action you can take is the solution.
  • nailer 1 hour ago
    I'm glad I missed: GraphQL, Kubernetes, Microservices, the Metaverse.

    I'm glad I jumped early on: Linux, Python, virtualization, cloud, nodejs, Solana.

    I wish I'd gotten into Rust and LLMs earlier.

  • mocmoc 2 hours ago
    Wouldn't play that game with LLMs
  • thr0w 2 hours ago
    > What is there to be left behind from?

    Employment?

  • raincole 2 hours ago
    > If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.

    I mean... yeah? It's obviously true. However people use LLM coding today not because they're "afraid of being left behind" or "investing into a new tech" or whatever abstract reasoning. It's because they're already reaping the benefit right away. It takes just a few hours to go through like 80% of the learning curve.

  • bartread 2 hours ago
    I mean, whatever, man.

    This line, as one example:

    > For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.

    Like a lot of tech, Flash had its moment in the sun and then faded away, but that “moment” lasted a decade, and plenty of people got their start because of it or built successful businesses around it. Did they have to pivot as Flash waned? Sure, but change is part of life.

    I’m sorry but I find the take expressed in this piece to be absolutely miserable and uninspiring.

    But, hey, congratulations on the 20/20 hindsight, I suppose.

  • m132 1 hour ago
    The thing is, Bitcoin, at least before cryptocurrencies were picked up by "tech bros", was originally a way to disconnect from the corrupt, centralized banking system.

    LLMs, at the moment, are all about giving up your own brain and becoming fully dependent on a subscription-based online service.

  • simianwords 2 hours ago
    Its interesting to see this author's historical takes about AI.

    IMO it reads a little desperate and very much like the hype bros, but from the opposite side. Take a look at the articles if you don't believe me.

    https://shkspr.mobi/blog/tag/ai/

    - I'm OK being left behind, thanks!

    - Unstructured Data and the Joy of having Something Else think for you

    - This time is different

    - How close are we to a vision for 2010?

    - AI is a NAND Maximiser

    - Reputation Scores for GitHub Accounts

    - Agentic AI is brilliant because I loath my family

    - Stop crawling my HTML you dickheads - use the API!

    - Removing "/Subtype /Watermark" images from a PDF using Linux

    - LLMs are still surprisingly bad at some simple tasks

    - Books will soon be obsolete in school

    - Winners don't use ChatGPT

    - Grinding down open source maintainers with AI

    - Why do people have such dramatically different experiences using AI?

    - Large Language Models and Pareidolia

    - How to Dismantle Knowledge of an Atomic Bomb

    - GitHub's Copilot lies about its own documentation. So why would I trust it with my code?

    - LLMs are good for coding because your documentation is shit

    • PurpleRamen 1 hour ago
      > IMO it reads a little desperate and very much like the hype bros but from opposite side.

      I would coin it Chill Guys, and I claim that on average they are more often correct than hype bros. We all know the old saying that nobody was ever fired for selecting an established solution/technology. Being chill and staying safe is more beneficial in real business. Just wait until the hype has normalized, then start dabbling with the new when everyone else starts doing it too. You may not be the frontrunner of a new wave, but you can still ride it well. You are only really left behind when everyone has changed and your income starts drying up; until then, it's just business as usual.

      And always remember: when ChatGPT started its hype, many were predicting Googles death and the end of Web searches. Now, 3(?) years later, Google is still around, the search is still going strong, and Gemini is seriously competing with ChatGPT. Hypes come fast and die fast.

      • simianwords 34 minutes ago
        this particular person seems to be more wrong than right though.
  • gabordemooij 1 hour ago
    I find the hivemind terribly oppressive at times. AI tools are great, but in the end it seems to me that the results matter most. However, we seem to go from hype to hype, again and again. It's all so tiresome. Why can't we just respect individual choices and focus less on the tools and more on the results?
  • adampunk 2 hours ago
    OK, bye!

    We really are cycling through every possible way of saying that someone doesn’t understand something, so long as it also says everyone else is dumber than them. By the time these tools stabilize to where someone with no appetite for risk or willingness to learn can use them, it will be nearly impossible to learn them from scratch. We need to stop treating these blog posts like they’re sensible comments and imagine that it’s someone 13 years ago saying they don’t want to get into React. We certainly had a lot of blog posts like that. Every one of them was written by someone who got into React later. None of those people can look around the ecosystem and find that companies are excited to hire someone who refuses to use React.

    It’s OK to say that you don’t understand something.

    • davebren 1 hour ago
      > "it will be nearly impossible to learn them from scratch."

      Aren't the tools supposed to get easier to use, not harder? As far as I can tell all the expertise in using LLMs comes from already having the underlying skill in the domain.

      • adampunk 0 minutes ago
        >As far as I can tell all the expertise in using LLMs comes from already having the underlying skill in the domain.

        How is it that you came to that conclusion?

    • SpicyLemonZest 2 hours ago
    I decided 13 years ago that I didn't want to get into React, and continue to have a long and successful career in software development. A couple of times I've had to lean in and do some React work, so I learned the specific bits I needed for those projects, and I was slower and worse than an expert would be but got the job done. Perhaps I'm naive, but it doesn't seem like there was anything in React where I've been left behind and couldn't learn it now if I had a need to.
    • lapcat 2 hours ago
      > it will be nearly impossible to learn them from scratch.

      Are you claiming that all future generations of would-be programmers are doomed?

      • adampunk 1 hour ago
        I'm saying that there's a cost to waiting, just like there's a cost to jumping in early. The cost is SPECIFICALLY that it is harder to jump into a mature field with its own jargon and concerns. The assumption I think folks are making is that their engineering prowess will save them here: whatever complicated thing that matters in AI land will be easily visible to a rank outsider.

        The whole premise of "imma wait" is not sober patience, it's the implied "imma wait until everything falls apart, then we'll go back to what I know how to do." that people don't like saying. It's an argument (often not even stated as such) that the people who don't jump in will be healthier and happier for just having ignored this wave.

        I think that's baloney. It's not FOMO I'm arguing about but the idea that real practices and infrastructure are being built right now that people are internalizing. Folks who aren't a part of it just aren't internalizing any of that. As the tech gets better (and it will!), those practices and infrastructure get more complex, more specialized. The idea that I can just wait years and then "engineer harder" to undertand this from the outside while being competitive is fantasy. Maybe some subset of people can, bully for them. Most people won't be able to.

        Future programmers aren't doomed. Future programmers who can't or won't adapt to the biggest change in computing since the slide rule are doomed.

        • lapcat 1 hour ago
          > The cost is SPECIFICALLY that it is harder to jump in to a mature field with its own jargon and concerns.

          Hasn't that been the case for decades? What specifically is different now, such that for some reason it's harder to jump in now than it was before?

          If anything, LLMs are supposed to make things easier, aren't they?

          > it's the implied "imma wait until everything falls apart, then we'll go back to what I know how to do." that people don't like saying.

          You can read whatever assumption you want into the blog post, but it's not there in the words. You're dunking on a straw man.

          • adampunk 2 minutes ago
            >What specifically is different now, such that for some reason it's harder to jump in now than it was before?

            Well, the obvious evidence that something is different now is all around us. The point has been made with painful seriousness by people who thought they were making a different one, namely that LLMs represent an unreliable, poorly understood, and hazardous abstraction layer between coders and the machine. Specifically that this abstraction layer is DIFFERENT than others in the past. There are dozens and dozens of blog posts making this point (some written by machines) on HN. It would be hard to have not come across this point or miss the chorus of engineers agreeing with it. It's supposedly a cardinal reason why assembly -> C was a "good abstraction" and natural language -> slop is a "bad abstraction." If we take that argument seriously, it represents strong evidence that something new is happening, independent of anything I might say.

            Why is it different? C'mon. COME ON. Why can I find a post on the front page of HN when Claude is down for more than 10 minutes? Why can I find out that a new model has been released from the big 5 frontier labs, again on the front page, inside minutes? Why is it different? Did we build trillions of dollars of datacenters for NetBeans or SecondLife or whatever other cartoonish old fad I'm supposed to treat as analogous today? Are we just supposed to imagine that Microsoft, NVIDIA, Facebook, Google, Alibaba are all staffed with idiots, or that they're all caught up in irrational exuberance? Are we supposed to watch generation costs march down and outcomes improve and still think "yeah, this is just like Pets.com"? Are we supposed to yield to vague and suggestive motions toward e.g. the dot com boom as though working with agents were the same thing as investing in a specific internet company ca. 1998? Are we supposed to take from that analogy that an engineer who said "no thanks, I'll wait to see how this internet thing shakes out" in the 1990s was a real smarty to be emulated? Come on.

            It's both categorically different and clearly has meaningful material force behind it.

            This is a whole different interface to the computer, and even if the eventual outcome is that real engineering work happens with tightly constrained and specialized harnesses around agents, understanding the actual interface is critical. Ironically, the meta-claim here is that good engineers will just be able to vibe out correct practice by engineering harder instead of understanding that core interface! Rather, what will be needed is attention and orientation to the concerns that people care about in the space.

            I don't want to dunk on a strawman. I'd much rather not see a whole community of engineers loudly pat each other on the back for not learning about something.

  • LaGrange 2 hours ago
    > Have fun being poor

    Not going to lie, I’d rather be poor. Not destitute - I’ve been poor but not destitute, and I’d rather not get desperate - but poor? As in (because “poor” is very imprecise and can imply anything between utter poverty and “not owning three homes”) having a low-paying job but still enough to pay rent?

    I’d rather be that than do AI assisted software development. Genuinely the only thing stopping me now is that there’s actually way more skill and qualifications in most low-paying jobs than a typical software developer imagines, and acquiring those takes time and money itself. But by now I know multiple people who made the jump even before the latest madness, and they’re all happier. Some still code, but don’t even publish. Some are like “I haven’t used a proper computer in _months_ this is great.” All work hard jobs at odd hours. None regret.

  • homeonthemtn 2 hours ago
    I think this is a carry over from the early 2000s boom and bust mindset. That if you jump in early enough on nearly any technology, you'll become a billionaire. So hop on board our burning platform!

    In general, we as a society have not adjusted to technology. We've gone through too much change to have any stable baselines. So we're going to float in insanity for a while until things finally settle down. Probably 2 wars, a famine, and several periods of resource scarcity away still, but we'll get there one day...

  • dakolli 2 hours ago
    I love the cryptocurrency analogy. The LLM hype monkeys are the same people that were screaming that NFTs/digital art was going to replace all the traditional art in galleries in 10 years. They are literally the same people, and they are all addicted to money and hype. Ignore them.

    Did handmade Swiss watch movements lose all demand when Asia started mass manufacturing watches? No. There is always going to be more demand for quality over slop. It's the same reason that handmade clothes are worth 100x more than clothes at a department store.

    This is all by design too: these billionaires selling thinking machines are trying to make us all dependent on their fountain of tokens. Don't fall for it. Just like maps apps made everyone reliant on Google/Apple for the ability to navigate around their own city, these billionaires want to do the same thing with your ability to think, build, plan, and even learn/read.

    Don't fall for this scam. Unlike other hype cycles like NFTs and crypto, this one will damage more than just your bank account; it will fry your brain if you become over-reliant on it.

    Take a second and consider why these LLM tool companies design their products like slot machines. They put multipliers in their UIs (run this x3, x4, x5 times) so that you inevitably treat the thing like a slot machine. And it is like a slot machine: you have no way to control the results, which are quite random; LLMs just have a better payout percentage, at the cost of making your brain chemically and structurally dependent on their output. They convince people there is some occult art in the formation of a prompt, like a gambler who thinks that pressing the buttons in a certain order will get better results, or any number of other gambling superstitions.

    If you're writing software, please take a moment to breathe and ask yourself if it's really that useful to have piles of code where you have little idea how things work, even if they do. Billionaires will sell you on the idea that this doesn't matter because the LLM, which you conveniently have to pay them to use, will always be able to fix that bug.

    Don't fall for the ruling class's trick. They want you reliant on this thing so they can tell you that your input isn't as valuable, and therefore your salary and skills are not as valuable. We have to stop this now.

  • dakolli 2 hours ago
    The kid who showed his work in detail in math class is doing better in life 9/10 times than the kids who only knew how to use a calculator. Now consider how well the people who think you just need to know how to yell at the calculator are going to do.

    When maps apps came around, people totally lost the brain muscle for navigating on their own. Using LLMs is no different; people over-reliant on these tools are simply ngmi. They are going to be totally reliant on their favorite billionaire being willing to sell them competency via their thinking machines.

    I would caution everyone to consider whether the billionaires who are screaming that you're going to be left behind, laid off, and made redundant if you don't (pay them to) use their brain-nerfing machine really have your best interests at heart.

    You're not going to be left behind.

    https://arxiv.org/abs/2506.08872

    • Hasslequest 39 minutes ago
      Firstly, you can run the LLMs on your own machine. So I find the proprietary/moat narrative weak.

      Secondly, I find that correct usage of LLMs can accelerate learning. My brother used an LLM to generate flash cards for a driver's license test. I use LLMs to digest a ton of text and debug issues that would have been impossible to find (I would have given up). Have it generate, explain, review, and compare code or general writing.

      It is like having access to a wise old man in every field. They may have inferior reasoning capability, and their memory may falter, but they have seen everything in their corpus and are great at pointing you to external references. And you can delegate busywork to them.

    • dakolli 2 hours ago
      Who the hell downvotes this lol
      • SpicyLemonZest 1 hour ago
        I think that framing your observations in terms of "Billionaires who are screaming" about a "brain nerfing machine" doesn't help think about the issues clearly or contribute to a healthy discussion.
  • maxothex 28 minutes ago
    [dead]
  • hideyoshi_th 2 hours ago
    [dead]
  • stefantalpalaru 19 minutes ago
    [dead]
  • jruz 1 hour ago
    Have fun writing code yourself /s
  • j3th9n 1 hour ago
    Tldr; dude makes wrong choices one after the other and copes with it by being ok with being left behind. "I wrote my Msc on The Metaverse", ....
  • null-phnix 1 hour ago
    [dead]
  • butILoveLife 2 hours ago
    This would hit harder if Bitcoin hadn't won and AI coding hadn't completely changed our jobs.

    Why not simply evaluate things instead of ignoring them until it's too late?

    Sure, we don't have infinite time, but the fact that OP mentions these two things means the pattern has shown up enough.

    • techblueberry 2 hours ago
      What is too late? If you want to use cryptocurrency as a medium of exchange, go ahead, it’s right there!

      There’s an irony to the FOMO in crypto, which is that people argue the “sensible” thing (it’s the future of money) to create FOMO for the insensible thing (it’s a lottery ticket). You’re right that it’s too late to buy a lottery ticket, but the vision wasn’t a lottery, it was a medium of exchange!

      AI assisted coding is the same way. I use it every day, but if I decided to stop and wait a year, I could still pick it up, probably more easily when the tools are better.

      In fact, people who wait might do better than me because their mental model won’t be locked into a way of interacting that will be out of date in six months.

      Wouldn’t it be ironic if all the early adopters were the losers because they liked the hacky nature of it? This happened to a lot of early computer adopters, low level programmers, etc.

      • butILoveLife 2 hours ago
        I never said Crypto. I said bitcoin.

        And with AI... I am genuinely afraid my job will be automated. I'm trying to become a manager before we are relegated to minimum wage workers.

    • mehagar 2 hours ago
      You make it seem like AI coding has already "completely changed" our jobs. This is exactly the FOMO the article talks about ("until it's too late"). It hasn't. I'm still using the same workflows without AI tools, and so are most of my teammates.
      • butILoveLife 2 hours ago
        To be fair, I would have talked like you in January 2026. But things have changed since Feb.
    • cryptonym 2 hours ago
      Crypto didn't "win". The technology is there, but people are mostly gambling or doing shady stuff. Shall I mention NFTs? It didn't change the life of the average joe, nor of businesses. It's a niche.

      Many people are still coding without AI and doing perfectly fine. When you design serious things, coding is not where most time is spent anyway. Maybe it'll become unavoidable at some point, by that time the experience will be refined and it'll be easier to learn.

      Point is, it's never too late. If you don't need to be cutting edge on a new tech, it may not make sense to put in the extra effort of the early birds. If you do put that effort in, you'd better not do it for free.

    • fdghrtbrt 2 hours ago
      "if bitcoin didn't win?" It didn't win. It's still useless.
      • Hasslequest 37 minutes ago
        You can use it at steak n shake
      • hparadiz 2 hours ago
        Must be nice to always live under stable currencies.
        • fdghrtbrt 2 hours ago
          Where do you live?
          • hparadiz 2 hours ago
            At one point I lived in the Soviet Union but that's not really relevant to my point.
            • fdghrtbrt 1 hour ago
              I don't understand your point. You said it would be nice to live..... what does that have to do with bitcoin? What's your point? Make a point.
              • hparadiz 1 hour ago
                My point is your opinion is sheltered and lacks life experience.
                • fdghrtbrt 1 hour ago
                  Your opinion on my opinion is not really relevant to bitcoin.
    • randomsolutions 2 hours ago
      He is rejecting the framing of get in now before it's "too late". If it is so useful then we will be able to pick it up when it is more polished rather than learning to use some half polished turd that will be obsolete in 6 months.
    • master-lincoln 2 hours ago
      What did Bitcoin win? Seems to me like it's mainly used as a speculative asset instead of being used to pay for goods and services.
    • eugenekolo 2 hours ago
      I'm not sure Bitcoin won... it just continues being a Ponzi scheme that you can make money in.

      You can also just accept certain things and be happy in life either way. You don't need to chase get-rich-quick schemes. Some are more privileged than others in being able to do this.

    • lpcvoid 2 hours ago
      Bitcoin won? I don't think it did. Its main use is still scams, circumventing sanctions, and grifting. My SEPA instant money transfer does everything Bitcoin promised, without the trash and bad people surrounding Bitcoin.
    • monegator 2 hours ago
      Daily reminder that you have yet to start really paying for using LLMs; the current paid tiers are just to lure you in.

      It also has changed nothing in the way I do stuff. Checked it out, not for me, thanks but no thanks.

      (Sick and tired of hearing how you made a UI in a couple of hours by directing the AI when it still takes me the same couple of hours of coding.)

  • hparadiz 2 hours ago
    When did ignorance become a virtue? Or is it just the contrarian mantra?
    • bdcravens 2 hours ago
      I see you haven't spent much time on Twitter the past 10 years :-)
      • hparadiz 2 hours ago
        <insert the nobody-quits-Twitter Rick and Morty meme>

        This isn't Twitter though.

  • danielbln 2 hours ago
    A great blog article for 2023. In 2026, I think the wait is over.
  • marnett 2 hours ago
    > I didn't use Git when it first came out. Once it was stable and jobs began demanding it, I picked it up.

    What jobs aren’t requiring usage of these tools by now?

    • collinvandyck76 2 hours ago
      Most are, implicitly at least. But at the time, there was definitely a period of transition with many shops continuing to use subversion for a while.
      • pjmlp 2 hours ago
        And TFS, ClearCase, Mercurial, Plastic, Perforce, Fossil, CVS, RCS, ....

        Then there are those still using folders with timestamps.

    • bluGill 2 hours ago
      You should be able to pick up tools and learn them as needed. There are better version control systems than Git - there always have been - but Git won despite being worse. (Git was massively better than what was popular before it won.) If you can't learn Git quickly, then you shouldn't program at all; there are much harder tools you will need to know, and many are company- or project-specific.
      • Xenoamorphous 2 hours ago
        Mercurial?
        • bluGill 1 hour ago
          That is the one I know best, but there are other options that people I tend to trust say are good. There are more options than I have time to evaluate honestly.
    • debugnik 2 hours ago
      Just last year I migrated to Git a major code base that was still stuck in SVN. It's not even a legacy project, just a laggard. For some colleagues, this was their first time using Git on the job.
      • SoftTalker 1 hour ago
        There are large projects still using CVS. Not to say everyone should, but Git is only a tool. It isn't essential, and there are alternate ways to achieve the same ends with different tools.
  • adriancooney 2 hours ago
    There are real productivity gains by using these tools right now. Instead of doing 1x your normal work, you can do 5x while still maintaining quality. This is like an accountant sticking to pen and paper because calculators are big and clunky.
    • nehal3m 2 hours ago
      In your analogy that calculator would only produce a correct answer 80% of the time, and plausible looking but incorrect ones the other 20%.

      If that were the case I’d hire pen guy.

      • adriancooney 2 hours ago
        What's the error rate of the pen guy?

        Also, if your AI has a 20% error rate, you're not holding it right. You need to spend more time keeping it on rails: unit tests, integration tests, e2e tests, local dev + browser use, preview deployments, staging environments, phased rollouts, AI PR reviews, rolling releases. The error rate will be much closer to 0%.

        • davebren 1 hour ago
          How does a phased rollout improve LLM error rates exactly?
      • braebo 2 hours ago
        More like “Producing 80% of the correct answer” and the remaining 20% with some nudging and tweaking. Still extremely valuable.
    • graypegg 2 hours ago
      > I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. [...] If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.

      I think the point the author is making is not that it's all useless, but a push back against the overly simplistic idea that the plot of Amount of AI vs. Productivity in All Situations is a hockey stick chart.

      Being told you should be excited about something when all you're really saying is "it works sometimes, other times not so much; I'll keep checking, and when it's good enough for me I'll get on board" is aggravating.

    • RustyBucket 2 hours ago
      To be honest, I would rather spend 5x the effort doing my normal work, because my salary won't grow either way.
    • Tade0 2 hours ago
      > Instead of doing 1x your normal work, you can do 5x while still maintaining quality.

      That's a gross overestimate. 2x I would maybe believe.

      Someone has to sign off your work and unless it's hard to write but easy to read, this is where the bottleneck currently lies.

      • adriancooney 2 hours ago
        Everything is relative. If your systems aren't adapted to AI development, it will be much lower.
    • coldpie 2 hours ago
      > Instead of doing 1x your normal work, you can do 5x while still maintaining quality.

      Yet my pay stays the same, all my coworkers get fired, and Sam Altman gets all of their paychecks. Hrm.

    • gonzalohm 2 hours ago
      What if the calculator had randomness built into it?