Hacker News | past | comments | ask | show | jobs | submit | ngrislain's comments

Yes, the user has to be cooperative somehow. You could emulate linear/affine-type-like features with indexed monads, though.
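Indexed monads don't translate directly to Python, but the "cooperative use" idea can be sketched with a much weaker, dynamic substitute: a hypothetical one-shot `Linear` wrapper that checks at runtime that a value is consumed at most once (static linear types would reject the second use at compile time instead).

```python
class Linear:
    """Wrap a value so it can be consumed at most once (checked at runtime)."""

    def __init__(self, value):
        self._value = value
        self._used = False

    def consume(self):
        # A real linear type system would make this a compile-time error;
        # here we can only detect the violation when it happens.
        if self._used:
            raise RuntimeError("linear value consumed twice")
        self._used = True
        return self._value

sock = Linear("raw-socket-handle")
print(sock.consume())      # first use is fine
try:
    sock.consume()         # second use is rejected at runtime
except RuntimeError as e:
    print(e)
```

This only enforces the discipline dynamically; the appeal of indexed monads (or true linear/affine types) is getting the same guarantee statically.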


You are right; what I wrote is more of a PoC. It's valid for blocking sockets on the happy path.


Fair point! Updated. I’m definitely coming at this more from a Lean 4/formal methods perspective than a POSIX one.


Just finished


Yes, this year I'm going for Lean 4: https://github.com/ngrislain/lean-adventofcode-2025

It's a great language. Its dependent-types, theorem-proving-oriented type system combined with AI assistants makes it the language of the future, IMO.


Isn't the whole point of AoC to NOT use AI? It even says so in the FAQ.


Yes, I'm doing it without AI to learn the language. Nonetheless, I do think that Lean 4 + AI is a super-powerful combination.


Like with the leaderboard. People do it to score points, not to learn. Hence, cheating.


A good opportunity to learn a new programming language: https://hackernews.hn/item?id=46105849


Advent of Code 2025 in Lean...


Yes, it is true that the model has undergone SFT, RLHF, and other alignment procedures, so the logprobs no longer reflect the next-token probabilities of the pre-training corpus. Nevertheless, in concrete applications such as our main internal use case, structured data extraction from PDF documents, it proved very valuable. When a value was clearly present and well extracted, the logprob was high; when the information was very hard to find or missing, the model would output — or hallucinate — a value with a much lower logprob.


We need to build a syntax tree and be able to map each value (number, boolean, string) to a range of characters, and then to the GPT tokens (for which OpenAI produces logprobs). This is why we use Lark.


Same token usage. OpenAI already returns the logprob of each token, conditional on the previous ones, when called with logprobs=true. This lib simply parses the output JSON string with `lark` into an AST with value nodes. Each value node is mapped back to a range of characters in the JSON string, the characters are mapped to the GPT tokens overlapping that range, and the logprobs of those tokens are summed.
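The last two steps of that pipeline can be sketched as follows. This is a simplified illustration, not the library's actual code: `token_char_ranges` and `value_logprob` are hypothetical names, the tokens/logprobs are made up, and the value's character range is assumed to be known already (in the real library it comes from the Lark parse).

```python
def token_char_ranges(tokens):
    """Map each token to its (start, end) character range in the full string."""
    ranges, pos = [], 0
    for tok in tokens:
        ranges.append((pos, pos + len(tok)))
        pos += len(tok)
    return ranges

def value_logprob(value_range, tokens, logprobs):
    """Sum the logprobs of all tokens overlapping the value's character range."""
    start, end = value_range
    total = 0.0
    for (t_start, t_end), lp in zip(token_char_ranges(tokens), logprobs):
        if t_start < end and t_end > start:  # half-open ranges overlap
            total += lp
    return total

# Example: the model emitted '{"price": 42.5}' as four tokens.
tokens   = ['{"price"', ':', ' 42', '.5}']
logprobs = [-0.01, -0.02, -0.8, -0.3]
# The value 42.5 occupies characters 10..14 of the JSON string, so its
# logprob is the sum over the last two tokens (which overlap that range).
print(value_logprob((10, 14), tokens, logprobs))
```

Because tokens rarely align with value boundaries, overlapping tokens can drag in a few surrounding characters (here the space before `42` and the closing brace), which is the approximation inherent in mapping characters back to tokens.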


That's great to hear, thanks for the explanation! Super excited to try this out.

