I’ve personally found that my workflow has become very “opportunistic”—I feel like I can do anything with AI, so I try everything. That might be good…or bad. I’d be curious to see what HN has to say, or whether anyone else has experienced something similar.
Here’s the Reddit post for context: https://www.reddit.com/r/ClaudeAI/comments/1s08r1c/karpathy_says_he_hasnt_written_a_line_of_code/
Anyone else feeling this way? If not psychosis (which may be an exaggeration), then at least more stressed, frazzled, whatever.
I'm a big believer in not doing something just because I can. Could AI build me a personal suite of apps to manage my life in the exact way I want... maybe? Should I spend my time doing that, even if AI is writing 100% of the code? Probably not. Will it be better enough to justify the investment? No. When it breaks or has bugs, who has to deal with it? Me. What about the infrastructure? Another thing to maintain.
You can say AI is writing all the code, but if someone has to be there to babysit and guide it the whole way, it's still work. Less engaging and rewarding work. I mostly find vibe coding to be boring and frustrating, unless it can one-shot it, which it can only do for small stuff.
I use AI, but I use it in the same way I would use a search engine or a hammer. It's a tool to help do what I was already doing. Sure, it grows my capacity to some degree, but pushing that too far ends up being problematic, as I lose my ability to properly oversee it.
However, it tends to fail at the actually interesting, useful things, even when those are small, reasonably sized projects.
Worse still, AI models seem to be optimizing for how many tokens they can make you burn before you give up, rather than for minimizing the turns required to reach a finished product. I say that because each new model that comes out needs more turns of coaxing and prodding to get to a functional state.
The fact is, while it can talk a good game, and has been RLHF'd to high heaven to validate you all the bloody time to keep you engaged and burning tokens, your brain is simply tuned to reward any semblance of progress, and getting a little bit more out of the LLM is in the same damn family of hit you get off coding. The dangerous bit, though, is the inherently probabilistic nature of it: the same prompt, run through the machine on a different crank, can give you a different result.
Just remember to get out from in front of the screen, and try to experience the worldly implementations of the systems you think you're building. Without that real world experience, no one's going to trust a bloody thing you do. You are a world model. It's a language model. It may know how to shoot the lingo; you know, or can reckon out, how to actually do the thing.
Try running a local model yourself on a sufficiently beefy laptop. The lack of instant feedback tends to soften the feedback loop, and gives you a less "ecstasy"-coded position from which to objectively evaluate the efficacy of the thing at converting raw electricity -> thing. You'll find the added friction from the additional constraints (no outsourcing to a datacenter funded by someone else's money) suddenly changes the character of the thing.