An LLM is not an assistant. It's a tool that fills in the gaps with plausible-sounding content. If it happens to have a lot of data on what you're looking for in its training set, good. If not, you're SOL, and risk being fooled by its confidence.
The beauty of tools is that you can pick the tool based on the job. Personally, I use search, ChatGPT-4, and Copilot in my IDE at various times throughout the day.
Not if the tool changes. Like in this example, where Google Search is no longer a librarian (though arguably it hasn't been a good one for 10+ years) but has become a chatbot.
I feel like context is far more important than the actual answer.
If you want to develop your skills, it's just as important to be aware of the things that aren't the answer as it is to get a single hyper-specific curated answer.
As others have mentioned in nearby threads, Kagi has some good examples of this in action. Appending a question mark to your query runs a GPT model to interpret and summarize the results, providing sources for each reference so that you can read through them in more detail. It's way less "trust me bro" than Google and feels more like a research assistant.
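Kagi doesn't publish the internals of that feature, but the general "summarize search results with citations" pattern is easy to sketch. Below is a minimal, hypothetical Python example: the stubbed results, the prompt format, and the [n] citation scheme are all assumptions for illustration, not Kagi's actual pipeline.

```python
# Hypothetical sketch of a search-then-summarize-with-citations flow.
# Nothing here reflects Kagi's real internals: the results are stubbed,
# and the prompt is meant for whatever chat model backend you'd use.

# Stubbed search results -- in a real system these would come from a search API.
results = [
    {"title": "Rust ownership explained", "url": "https://example.com/ownership",
     "snippet": "Ownership is Rust's mechanism for memory safety without GC..."},
    {"title": "The Rust Book: References", "url": "https://example.com/refs",
     "snippet": "References let you refer to a value without taking ownership..."},
]

def build_prompt(query: str, results: list[dict]) -> str:
    """Number each source so the model can cite them as [1], [2], ..."""
    sources = "\n".join(
        f"[{i}] {r['title']} ({r['url']})\n    {r['snippet']}"
        for i, r in enumerate(results, start=1)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite each claim with its source number, e.g. [1].\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {query}\nAnswer:"
    )

prompt = build_prompt("How does Rust manage memory without a garbage collector?", results)
print(prompt)  # feed this to any chat-completion model
```

The design point is that the model is asked to answer from retrieved pages rather than from its weights alone, and every claim stays tied to a source the reader can click through and verify.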
What I want is a librarian who can sift through the impossibly vast oceans of information and point me at resources that are relevant to my query.