
I categorically do not want an LLM regurgitating answers at me.

What I want is a librarian, who can sift through the impossibly vast oceans of information and point me at resources that are relevant to my query.




Why not? A librarian leaves you with more work to do. An LLM is an assistant that adapts the information to your specific scenario, saving you work.


An LLM is not an assistant. It's a tool that will fill in the gaps with plausible-sounding content. If it happens to have a lot of training data on what you're looking for, good. If not, you're SOL, at risk of being fooled by its confidence.


Sure, and that’s still useful.


But less useful than a librarian.


The beauty of tools is that you can pick the tool based on the job. Personally I use search, ChatGPT-4, and Copilot in my IDE at various times throughout the day.


Not if the tool changes. Like in this example, where Google Search is no longer a librarian (though arguably it hasn’t been a good one for 10+ years) but has become a chatbot.

Where do I go if I want the Google search of 1999?


They’re not replacing search results, are they? I thought they were just adding more content to the page.

Otherwise, try Bing or Kagi.


I agree, but only for people like you and me who use HN.

But a big majority of people can't even read properly. That's why LLMs are disruptive. You think LLMs are dumb? You should see some people I work with.


Let me choose how much work is enough for me. I don't need an LLM wiping my butt, and I enjoy reading the filtered results; not all work is bad.


The same could be said about the librarian's job. Sure, pick your own level of involvement.


I feel like context is far more important than the actual answer.

If you want to develop your skills, it's just as important to be aware of the things that aren't the answer as it is to get a single hyper-specific curated answer.


Or it'll flat out hallucinate information.


Of course. But one can know it hallucinates and still get value from it.


That's fair. I've gotten value out of hallucinations. But the source was typically in a ziploc bag, not a piece of software.


As others have mentioned in nearby threads, Kagi has some good examples of this in action. Appending a question mark to your query runs a GPT model to interpret and summarize the results, citing a source for each reference so that you can read through in more detail. It's way less "trust me bro" than Google and feels more like a research assistant.
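
A minimal sketch of what that looks like as a plain query URL, in Python. Only the public kagi.com/search?q= URL format is taken from real life; the helper name is hypothetical:

    # kagi_quick_answer_url is a hypothetical helper: it builds a Kagi
    # search URL whose query ends in "?", which is what triggers the
    # GPT-based quick-answer summary described above.
    from urllib.parse import urlencode

    def kagi_quick_answer_url(query: str) -> str:
        # urlencode handles escaping the spaces and trailing "?"
        return "https://kagi.com/search?" + urlencode({"q": query + "?"})

    print(kagi_quick_answer_url("how do B-trees rebalance"))
    # -> https://kagi.com/search?q=how+do+B-trees+rebalance%3F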



