Types and Neural Networks

(brunogavranovic.com)

33 points | by bgavran 4 hours ago

3 comments

  • Xmd5a 2 hours ago
    Related:

    https://cybercat.institute/2025/05/07/neural-alchemy/

    https://cybercat.institute/2026/02/20/categorical-semantics-...

    https://cybercat.institute/2025/10/16/dependent-optics-ii/

    > The reason I put off starting the series for so long is one of the same reasons blocking the writing of the paper: some of the introductory material is some of the most difficult to write. It has been such a long time that I no longer know how to adequately explain why the problem is so difficult.

    My sympathies to Jules

  • big-chungus4 3 hours ago
    So the model generates code, and let's say it is ill-typed; we then take the well-typed version and compute cross-entropy between them? Is that right? That just sounds like typical training, unless you can somehow take arbitrary code the model generated and automatically find the well-typed version, so you wouldn't need a dataset for it
    • yorwba 3 hours ago
      Rather than letting the model generate arbitrary code and type-checking it afterward, the author wants to pre-restrict the output with templates that are well-typed by construction and only let the model make choices between valid alternatives in that restricted output space.
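      That masking idea can be sketched in a few lines (toy token scores and a hypothetical validity predicate, not the author's or the paper's actual scheme): drop the type-invalid tokens and renormalize, so the model only ever chooses among valid alternatives.

```python
import math

def constrained_distribution(logits, is_valid):
    """Softmax renormalized over only the tokens a type checker allows.

    logits:   dict of candidate token -> raw model score (made-up numbers)
    is_valid: predicate for "keeps the partial program well-typed"
    """
    valid = {tok: s for tok, s in logits.items() if is_valid(tok)}
    if not valid:
        raise ValueError("no well-typed continuation from this prefix")
    z = sum(math.exp(s) for s in valid.values())
    return {tok: math.exp(s) / z for tok, s in valid.items()}

# Toy step: an Int is required, but the model's top score is a string literal.
logits = {'"hi"': 2.0, "42": 1.0, "f x": 0.5}
dist = constrained_distribution(logits, lambda t: t in {"42", "f x"})
# The string literal is masked out; its probability mass shifts to the
# remaining valid choices.
```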
  • woolion 1 hour ago
    I'm not sure what to make of TFA (I don't have time right now to investigate in detail, but the subject is interesting). It starts by saying you can stop generation as soon as you have an output that can't be completed -- and there are already more advanced techniques that do that. If your language is typed, then you can use a "proof tree with a hole" and check whether there's a possible completion of that tree. References are "Type-Constrained Code Generation with Language Models" and "Statically Contextualizing Large Language Models with Typed Holes".
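    The "tree with a hole" check can be sketched with a toy grammar (a crude stand-in for the real type systems in those papers): given a token prefix, brute-force whether any continuation yields a complete parse, and prune generation as soon as none does.

```python
from itertools import product

# Toy grammar: expr := "1" | "(" expr "+" expr ")"
TOKENS = ["1", "(", ")", "+"]

def parses(tokens):
    """True iff tokens form one complete expression in the toy grammar."""
    pos = 0

    def expr():
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == "1":
            pos += 1
            return True
        if pos < len(tokens) and tokens[pos] == "(":
            pos += 1
            if not expr():
                return False
            if pos == len(tokens) or tokens[pos] != "+":
                return False
            pos += 1
            if not expr():
                return False
            if pos == len(tokens) or tokens[pos] != ")":
                return False
            pos += 1
            return True
        return False

    return expr() and pos == len(tokens)

def can_complete(prefix, max_extra=4):
    """The 'hole' check: does ANY short continuation of the prefix parse?"""
    for n in range(max_extra + 1):
        for ext in product(TOKENS, repeat=n):
            if parses(list(prefix) + list(ext)):
                return True
    return False

can_complete(["(", "1", "+"])  # True: e.g. "1" ")" fills the hole
can_complete(["(", ")"])       # False: dead prefix, stop generating
```

    The papers replace the brute-force search with static reasoning over the type system, which is exactly where the undecidability issues mentioned downthread come in.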

    Then it switches to using an encoding that would be more semantic, but I think the argument is a bit flimsy: it compares chess to the plethora of languages that LLMs can spout somewhat-correct code for (which is behind the success of this generally incorrect approach). What I found more dubious is that it brushes off syntactic differences with "yeah, but they're all semantically equivalent". That, it seems to me, is kind of the main problem here: basically any proof is an equivalence of two things, but it can be arbitrarily complicated to see it. If we consider this problem solved, then sure, we can get better things...

    I think that without, e.g., a Haskell PoC showing great results, these methods will have a hard time gaining traction.

    Please correct any inaccuracies or incomprehension in this comment!

    • bgavran 4 minutes ago
      Author here - thanks for engaging.

      On existing techniques: the Type-Constrained Code Generation paper is discussed in the blog post (under Constrained Decoding), and I'd put typed holes in the same bucket.

      The problem with those methods is that they're inference-time: they don't update the weights. Constrained decoding prevents the model from saying certain things without changing what the model wants to say. This becomes especially problematic as your type system gets more complex, without even taking into account that for many of these systems type inference is undecidable.

      Meaning, if I give you a starting string, in the presence of polymorphism and lambdas you might not always be able to tell whether it completes to a term of a particular type.

      On the syntactic difference: I'd gently reframe. The question isn't whether syntactically different programs are semantically equivalent, it's that regardless of which form you pick, the existing methods don't let the model learn the constructor choice.
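      The inference-time vs. training-time contrast can be sketched with a toy masked cross-entropy (made-up AST-constructor names and scores, not the actual training objective from the post): the loss is computed over only the type-valid constructors, so the weight update is about the choice the type system actually leaves open.

```python
import math

def masked_cross_entropy(logits, valid, target):
    """Cross-entropy over only the type-valid constructors, so the gradient
    signal concerns the choice the type system actually leaves open."""
    z = sum(math.exp(logits[c]) for c in valid)
    return math.log(z) - logits[target]

# Made-up constructor scores; suppose the context rules out StringLit.
logits = {"Lambda": 0.2, "App": 1.3, "StringLit": 3.0}
masked = masked_cross_entropy(logits, {"Lambda", "App"}, target="App")
unmasked = masked_cross_entropy(logits, set(logits), target="App")
# masked < unmasked: the invalid constructor no longer dominates the loss,
# whereas pure inference-time masking never produces such a gradient at all.
```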

      That's what the next section is about.