Show HN: ChainFactory – Run Structured LLM Inference with Easy Parallelism (github.com/pankajgarkoti)
8 points by garkotipankaj 44 days ago
Hi everyone.

How does moving LLM call prompts and output structure definitions out of code and into configuration sound?
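A minimal sketch of the idea in plain Python, using a JSON config string (ChainFactory's actual file format, field names, and API may differ; `render_prompt` and the config keys here are hypothetical):

```python
import json

# Hypothetical config: the prompt template and the expected output
# structure live in configuration, not in application code.
CONFIG = json.loads("""
{
  "prompt": "Summarize the following text in one sentence: {text}",
  "output_schema": {"summary": "str", "keywords": "list[str]"}
}
""")


def render_prompt(cfg: dict, **variables: str) -> str:
    """Fill the prompt template from config with runtime variables.

    The calling code never hard-codes the prompt itself, so editing
    the config changes model behavior without touching code.
    """
    return cfg["prompt"].format(**variables)


if __name__ == "__main__":
    print(render_prompt(CONFIG, text="LLMs are neural networks trained on text."))
```

The same separation applies to the output side: the `output_schema` entry can be used to validate or parse the model's response, so both the instruction and the expected shape of the answer are declared in one place.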

Would you use something like this if it were stable and well documented?

Please don't hold back the criticism. I appreciate all feedback, constructive or otherwise.
