Hi HN,
Today we're launching phind.com, a developer-focused search engine that uses generative AI to browse the web and answer technical questions, complete with code examples and detailed explanations. It's version 1.0 of what was previously known as Hello (beta.sayhello.so) and has been completely reworked to be more accurate and reliable.
Because it's connected to the internet, Phind is always up-to-date and has access to docs, issues, and bugs that ChatGPT hasn't seen. Like ChatGPT, you can ask follow-up questions: Phind is smart enough to perform a new search and join it with the existing conversation context. We're merging the best of ChatGPT with the best of Google.
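As a rough illustration of what joining a new search with conversation context looks like, here's a minimal Python sketch; condense_to_query, web_search, and the turn format are hypothetical stand-ins, not our actual pipeline:

    def condense_to_query(history, followup):
        # Hypothetical: in production an LLM would rewrite the follow-up
        # into a standalone search query; here we just splice in the last turn.
        last_turn = history[-1] if history else ""
        return f"{last_turn} {followup}".strip()

    def web_search(query):
        # Hypothetical stand-in for a real search backend.
        return [f"<result for: {query}>"]

    def answer_followup(history, followup):
        # A follow-up triggers a fresh search, and the answer is generated
        # from both the new results and the running conversation, not the
        # new results alone.
        query = condense_to_query(history, followup)
        return {"query": query, "context": history + [followup], "results": web_search(query)}

    print(answer_followup(["example of a c++ semaphore"], "how do I make it fair?"))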
You're probably wondering how it's different from the new Bing. First, we don't dumb down a user's query the way the new Bing does: we feed your question into the model exactly as it was asked, and we're laser-focused on giving developers the most detailed and comprehensive explanations for code-related questions. Second, we've focused the model on providing answers instead of chatbot small talk. This is one of the major improvements we've made since exiting beta.
Phind has the creative ability to generate code, write essays, and even compose some poems and raps, but it isn't interested in having a conversation for conversation's sake. It refuses to state its own opinion, instead providing a comprehensive summary of what it found online, and when it isn't sure, it's designed to say so. It's not perfect yet and misinterprets answers ~5% of the time. An example of Phind's adversarial question answering ability is https://phind.com/search?q=why+is+replacing+NaCL+with+NaCN+i....
ChatGPT became useful by learning to generate answers it thinks humans will find helpful, via a technique called Reinforcement Learning from Human Feedback (RLHF). In RLHF, a model generates multiple candidate answers for a given question and a human rates which one is better. That comparison data then trains a reward model whose signal is fed back into the generator through an algorithm such as PPO. To improve answer quality, we're deploying RLAIF, an improvement over RLHF in which the AI itself generates the comparison data instead of humans. Generative LLMs have already reached the point where they can review the quality of their own answers as well as or better than an average human rater tasked with annotating data for RLHF.
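To make the RLAIF step concrete, here's a minimal Python sketch of the comparison-data collection; generate_candidates and judge_preference are hypothetical stand-ins for calls to a generative model, not our production pipeline:

    import random

    def generate_candidates(prompt, n=2):
        # Hypothetical stand-in: sample n candidate answers from the policy model.
        return [f"<candidate answer {i} to: {prompt}>" for i in range(n)]

    def judge_preference(prompt, answer_a, answer_b):
        # Hypothetical stand-in: in RLAIF, an LLM (rather than a human rater)
        # is asked which of the two answers is better; returns 0 or 1.
        return random.choice([0, 1])

    def collect_comparisons(prompts):
        # Build (prompt, chosen, rejected) triples. These train a reward model,
        # whose scores then update the policy with an algorithm such as PPO.
        data = []
        for prompt in prompts:
            a, b = generate_candidates(prompt)
            winner = judge_preference(prompt, a, b)
            chosen, rejected = (a, b) if winner == 0 else (b, a)
            data.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
        return data

    comparisons = collect_comparisons(["example of a c++ semaphore"])

The only structural difference from RLHF here is judge_preference: swap the model call back to a human rater and you recover the classic pipeline.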
We still have a long way to go, but Phind is state-of-the-art at answering complex technical questions and writing intricate guides, all while citing its sources. We'd love to hear your feedback.
Examples:
https://phind.com/search?q=How+to+set+up+a+CI%2FCD+pipeline+...
https://phind.com/search?q=how+to+debug+pthread+race+conditi...
https://phind.com/search?q=example+of+a+c%2B%2B+semaphore
https://phind.com/search?q=What+is+the+best+way+to+deploy+a+...
https://phind.com/search?q=show+me+when+to+use+defaultdicts+...
Discord: https://discord.gg/qHj8pwYCNg
    Theorem equiv_pq_qp : forall (p q : Prop), (p -> q) <-> (q -> p).
    Proof.
      intros p q.
      split.
      - intros p_imp_q q_imp_p.
        apply q_imp_p. apply p_imp_q. assumption.
      - intros q_imp_p p_imp_q.
        apply p_imp_q. apply q_imp_p. assumption.
    Qed.
... together with a lengthy and convincing explanation in natural language. (The claimed equivalence is false: take p := False and q := True, and the left side holds while the right does not, so no such proof can actually check.)
Sophists would be delighted by these mechanized post-truth AI systems.