
It’s not where, it’s how.


It's like they're saying

"When we prompt the model asking for it to search in the way we want it to, it searches in the way we want it to. "


You're saying this as if the result is unsurprising; however, it's significant that performance jumps so dramatically and that the limitation isn't a fundamental lack of capability, just a bias in the model toward hesitating to provide potentially false information. That's a good insight, because it can guide further fine-tuning toward getting that balance right, so that careful prompt engineering is no longer necessary to achieve high precision/recall on this task.
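
To make that concrete, here's a minimal sketch (all names hypothetical; ask() is a stand-in for whatever model call you actually use) contrasting a neutral prompt with one that explicitly tells the model to commit to uncertain candidates, then scoring the results against a hand-labelled gold set:

    # Minimal sketch, not any particular API: ask() is a placeholder for your LLM call.
    def ask(prompt: str) -> str:
        raise NotImplementedError  # plug in your own model client here

    # Baseline prompt vs. one that tells the model not to withhold uncertain matches.
    NEUTRAL = "Search the document for mentions of: {query}"
    DECISIVE = (
        "Search the document for mentions of: {query}. "
        "List every plausible candidate even if you are unsure; "
        "do not withhold a match out of caution."
    )

    def precision_recall(predicted: set, gold: set) -> tuple:
        # Standard precision/recall against a hand-labelled gold set.
        tp = len(predicted & gold)
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        return precision, recall

The interesting number is how much recall improves under the decisive prompt without precision collapsing, which is exactly the hesitancy balance described above.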


Not at all! I think there are obvious insights being missed in how people prompt things. For instance, reality is not dualistic, yet people will prompt dualistically and get shoddy results without realizing their prompting biases are the issue. I see this as evidence AI is calling us toward more intentional language use.


“Who’s the best singer in the world and why is it Taylor Swift?” kind of vibe.


…when facing non-real-world adversarial scenarios.


I find that when I try to use AI to develop plans for revolting, the quality of the responses depends heavily on being very clear about what I want. This is simply showing that dependency in a non-real-world adversarial scenario, but the lesson transfers to real-world ones.


Same is true for most people.





