Hacker News | new | past | comments | ask | show | jobs | submit

I wonder if we can do a prompt injection from the comments



These are SOTA models, not open-source 7B-parameter ones. They've put a lot of effort into preventing prompt injection during agentic reinforcement learning.

Not with basic negative prompts, at least so far; it already noticed those. You can see it in various "thoughts as posts".

I gave it points to reflect on and told it to apologize, which it has since done.



