I haven't always done this, and the knowledge base used to visibly degrade over time. Reviewing a PR only takes a few minutes, and the benefit compounds.
LLM evaluations are very sensitive to the details of a prompt's structure. This post shows how structured generation reduces both the variance in the results and the shifts in the rankings.
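For anyone unfamiliar with the idea: structured generation constrains what the model is allowed to emit, so answers are always parseable. Here's a toy sketch in plain Python (the function name, scores, and choice set are all illustrative, not from the post):

```python
# Toy sketch of constrained decoding for a multiple-choice eval.
def constrained_pick(token_scores, allowed):
    """Pick the highest-scoring token among the allowed set only,
    so the model can never emit an unparseable answer."""
    valid = {t: s for t, s in token_scores.items() if t in allowed}
    return max(valid, key=valid.get)

# Unconstrained argmax might pick chatter like "Sure"; the
# constrained pick always lands on a well-formed choice.
scores = {"Sure": 2.1, "A": 1.4, "B": 0.9, "C": -0.2, "D": -1.0}
print(constrained_pick(scores, {"A", "B", "C", "D"}))  # -> A
```

Real implementations (e.g., grammar- or regex-constrained decoding) apply the same masking idea at every decoding step, which is what removes a lot of the prompt-format sensitivity.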
That whole structured generation line of work looks promising. I hope someone else takes this and runs evaluations on other benchmarks. Curious to see if the results translate!
Agreed! While these results are very promising, there's still a lot to explore in this space.
In addition to the "prompt consistency" and "thought-control" ideas mentioned in the post, I'm definitely curious how it performs on more complex structured outputs (things like codegen).
Enlighten me please