LLMs learn what programmers create, not how programmers work

I ran an experiment to see if a CLI really is the most intuitive format for tool calling (as claimed by an ex-Manus AI backend engineer). I gave my model random scenarios and a single tool, "run" - I told it that it worked like a CLI and to guess commands.

It guessed great commands, but it always prefixed them with a colon, like :help :browser :search :curl

It was trained on how terminals look in text, not on what you actually type (you don't type the ":").

I've since updated my agent tool to stop fighting this intuition.
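For what it's worth, a minimal sketch of what "not fighting it" could look like - just normalize away the leading colon before dispatch instead of prompting against it. The function names here are hypothetical, not from my actual code:

```python
import shlex

def normalize_command(raw: str) -> str:
    """Strip the decorative leading ':' the model tends to emit."""
    raw = raw.strip()
    if raw.startswith(":"):
        raw = raw[1:]
    return raw

def run(raw: str) -> list[str]:
    """Parse a model-issued command into argv, colon or not."""
    return shlex.split(normalize_command(raw))

print(run(":curl https://example.com"))  # ['curl', 'https://example.com']
print(run("search cats"))                # ['search', 'cats']
```

Accepting both forms is cheaper than burning system-prompt tokens trying to train the habit out.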

LLMs learn what commands look like in documentation and artifacts, not what the human actually typed on the keyboard.

Seems so obvious in hindsight. This is why you have to test your LLM and see how it naturally behaves, so you don't have to fight it with your system prompt.

This is Kimi K2.5, btw.

4 points | by noemit 2 hours ago
