15 comments

  • Aurornis 7 minutes ago
    The ads are in the free tier and the new ad-supported $8/month plan.

    Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.

    • darepublic 0 minutes ago
Wouldn't it require a lot of training to blend ads into the convo without it being too obvious or messing up the results?
  • WD-42 44 minutes ago
Since they are served as distinct events, I would think they should be easy to block.

    Once the ads are injected directly into the main response is when things get interesting.

    • lmbbuchodi 34 minutes ago
You can block these URLs: ||bzrcdn.openai.com^ and ||bzr.openai.com^. It won't blanket-block everything, but it will significantly reduce the telemetry collected.
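For anyone using uBlock Origin or a similar filter-list blocker, a sketch of how those two rules might look as static filters (the `||` prefix anchors the hostname and `^` is a separator; the domains are taken from the parent comment and may change):

```
! Hedged sketch of uBlock Origin static filters, domains per parent comment
||bzrcdn.openai.com^
||bzr.openai.com^
```

A DNS-level blocker like Pi-hole could achieve roughly the same by blacklisting both hostnames.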
  • benleejamin 20 minutes ago
    I'd always thought that ChatGPT ads would be indistinguishable from actual content.
    • irjustin 8 minutes ago
This would be a breach of trust; short term it would work great, but long term it's too detrimental.

The same thing could've been said for search results, so at least that part is still "safe".

      • bix6 6 minutes ago
Oh, you think trust matters? This is capitalism, not trustism.
  • dankwizard 5 minutes ago
    Really well written, technical post. Good read.
  • keyle 44 minutes ago
    Can't wait for "watch this ad for 90s to use xxhigh on your next prompt!"
  • djmips 53 minutes ago
    And it begins.
  • vicchenai 32 minutes ago
Figured this was inevitable once they started the free tier. The attribution loop being a separate event stream is actually kind of clever engineering though -- it means they can A/B test ad formats without touching the core model response.
  • infinite_spin 25 minutes ago
    I see OpenAI making a significantly larger amount from defense contracts than from advertisements pumped into chats. So I wonder whose bright idea it was to create a public perception risk.
    • Larrikin 14 minutes ago
Every single MBA can show that revenue is up for at least one quarter after they introduced ads. They do not care what happens after that if they can plan their career around it.
    • peddling-brink 19 minutes ago
      Maybe the negative press from ads is better than the negative press from powering murderbots?
      • tayo42 4 minutes ago
Bad press from a contract like that happens once and everyone forgets. Ads are in your face every time.
  • avaer 25 minutes ago
    Remember that ads are the "last resort" for OpenAI, and they're doing this despite the fact that it's "uniquely unsettling", according to Sam.

    Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.

    • Aurornis 6 minutes ago
      The ads are only for the free and $8/month plans. They basically added an ad-supported super discount level that you can ignore if you’re paying for the normal plans.
  • singingtoday 55 minutes ago
    I don't like anything about this.
  • BoredPositron 12 minutes ago
I don't get what's wrong with charging for your product. Just get rid of the free tier and make a small tier with an easy-to-serve model for like 5 bucks. Is it still the DAU rage of the 2010s that's driving the money burning?
    • teaearlgraycold 6 minutes ago
      How do you pick up new paying users without letting people use the service for free for a while first? Freemium is popular because it works well.
  • uriahlight 36 minutes ago
    Let the enshittification commence!
  • gxs 56 minutes ago
    This is gross

    It feels like we’ve been in the golden age and the window is coming to a close

Let the enshittification begin, I guess

    • dannyw 32 minutes ago
      How do you expect the spend & COGS for free LLM inference to be funded? For users who don't want to pay, or maybe can't pay?
      • derektank 14 minutes ago
        Perhaps it’s a glib and easy thing to say, but after a teaser period, I would simply not offer free LLM inference. Agreeing to serve ads just completely re-aligns your interests away from providing the best possible user experience to something else entirely.
      • infinite_spin 21 minutes ago
        From things like defense/private contracts

        e.g. colleges pay for institutional subscriptions

        • 2ndorderthought 13 minutes ago
          The average person doesn't benefit from defense contracts ... Like ever.
    • iammrpayments 21 minutes ago
It has been going on ever since they nerfed GPT-4 before releasing 4o
    • 2ndorderthought 52 minutes ago
In the past month local models have been ramping up in a major way, while the name-brand providers have upped prices, gone offline randomly, and started doing slimier and slimier things.

      I really think the future is local compute. Or at least self hosted models.

      • SchemaLoad 45 minutes ago
        The hosted ones still have the advantage of being able to search the internet for live info rather than being limited to a knowledge cut off date.
        • gbear605 44 minutes ago
          I’m not sure why a model needs to be hosted in order to make network calls?
          • hansvm 42 minutes ago
            Is there a library of good tools for LLMs to call? I have to imagine the bot-detection avoidance mechanisms are a major engineering effort and not likely to work out of the box with a simple harness and random local LLM.
            • ossa-ma 34 minutes ago
              Even the hosted ones are blocked from searching certain sites, for example Claude is banned from searching Reddit:

              `Error: "The following domains are not accessible to our user agent: ['reddit.com']."`

            • wyre 29 minutes ago
              Tavily, Exa, Firecrawl, Perplexity, and Linkup are all tools for agents to search the web.

I’ve been building a harness the past few months that supports them all out of the box with an API key.

        • darepublic 44 minutes ago
          Local ones that support tool use can do the same
        • eightysixfour 44 minutes ago
          You can do that locally too!
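To make the point above concrete, here is a hedged sketch of the tool-dispatch half of a local harness. It assumes a local server with an OpenAI-compatible chat API (e.g. llama.cpp or Ollama) emitting tool calls in the standard shape; `web_search` is a stub stand-in for whatever real search backend (Tavily, SearXNG, etc.) you'd wire in:

```python
import json

# Tool schema advertised to the model (OpenAI-compatible "tools" format).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the live web and return top results.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def web_search(query: str) -> str:
    # Stub: a real harness would call an actual search API here.
    return json.dumps([{"title": "stub result", "query": query}])

REGISTRY = {"web_search": web_search}

def dispatch(tool_call: dict) -> dict:
    """Run one model-emitted tool call and package the result as a tool message."""
    fn = tool_call["function"]
    args = json.loads(fn["arguments"])
    result = REGISTRY[fn["name"]](**args)
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": result}

# A tool call shaped like what an OpenAI-compatible local server returns:
call = {"id": "call_0",
        "function": {"name": "web_search",
                     "arguments": json.dumps({"query": "qwen 3.6 release"})}}
msg = dispatch(call)
print(msg["role"])  # → tool
```

The returned tool message gets appended to the conversation and sent back to the local model, so "live info" is a property of the harness, not of where the model is hosted.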
      • CSMastermind 44 minutes ago
        What's the rough equivalent of a local model? Are we talking GPT-4?
        • 2ndorderthought 19 minutes ago
Qwen 3.6, which was released this month, is large but still a comparatively small model. Supposedly it's at about Sonnet level when configured correctly, and it can be run on commodity hardware without purchasing a data center. https://www.reddit.com/r/LocalLLaMA/comments/1so1533/qwen36_...

Then there are mid-size ones, which require multiple GPUs and are comparable to GPT's latest flagships.

Then there is Kimi 2.6, a monster that is beating Opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...

It's basically whatever you can afford. Any trash-heap laptop can run code-autocomplete models locally no problem. The rest require anything from an idle gaming PC up to a serious investment.

        • Terretta 38 minutes ago
          Depends on your VRAM or "unified" memory for how smart it is, and CPU/GPU for how quick it is.

          128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.

          I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).

        • kay_o 37 minutes ago
GLM 5.1 and DeepSeek 4 are acceptable, but between hardware and energy costs, depending on your use case you may as well just purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.
    • rnxrx 41 minutes ago
The arc of the technological universe is short, but it bends toward enshittification.
  • jesse_dot_id 21 minutes ago
    That's cool, I'll never see them.