
This sounds like cheating to me. Human training will get good results, as ChatGPT shows, and this has value, but we all want the AI to do all the work, don't we? I ask as someone almost completely ignorant of the subject, and I might as well be wrong.


It depends on your definition of cheating, but this is definitely not "using humans to answer all questions one could ask". Rather, it's a way to tune language models to behave more like assistants than like "most likely continuation machines". For context, I recommend watching the discussion on what ChatGPT is and does[0], and on what the Open Assistant project's aim is[1].

[0]: https://www.youtube.com/watch?v=viJt_DXTfwA [1]: https://www.youtube.com/watch?v=64Izfm24FKA
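To make the "tuning with human feedback" idea concrete, here is a minimal toy sketch of the reward-modeling step used in RLHF-style training: humans rank pairs of answers, and a reward model is trained with a Bradley-Terry style loss to score the preferred answer higher. All names and the one-weight "model" are illustrative assumptions, not anyone's actual implementation.

```python
import math

def preference_loss(r_chosen, r_rejected):
    # Bradley-Terry style loss used in RLHF reward modeling:
    # -log sigmoid(r_chosen - r_rejected) pushes the reward of the
    # human-preferred answer above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# Toy reward model: a single weight w scoring a 1-D "feature" of an answer
# (e.g. some measure of helpfulness). Purely illustrative.
w = 0.0
lr = 0.1
# Each pair: (feature of the answer humans preferred, feature of the other).
pairs = [(2.0, 1.0), (3.0, 0.5), (1.5, 1.0)]

for _ in range(200):
    for x_chosen, x_rejected in pairs:
        # Gradient descent on preference_loss(w*x_chosen, w*x_rejected):
        # d/dw = -(1 - p) * (x_chosen - x_rejected), where p = sigmoid(margin).
        p = 1.0 / (1.0 + math.exp(-(w * x_chosen - w * x_rejected)))
        w += lr * (1.0 - p) * (x_chosen - x_rejected)

# After training, the reward model scores preferred-style answers higher;
# a second stage (not shown) would then optimize the language model
# against this learned reward.
assert w * 2.0 > w * 1.0
```

The point of the sketch: no human answers every user question. Humans only provide rankings once, and the learned reward then guides the model, which is why the commenter above calls this tuning rather than "using humans to answer all questions".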



