Hacker News
Understanding ChatGPT prompt engineering through plug-ins manifests (pretzelbox.cc)
1 point by Sai_ on May 25, 2023 | 4 comments


> This means that about half the plugin developers are not taking their description_for_model string seriously.

That is a rather bold statement without proper evidence. You are assuming a longer description is better, but is that really the case? Is it universal, or does the optimal length depend on the plugin's complexity?
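One way to sanity-check the underlying claim is to pull the description_for_model field out of each plugin manifest and compare lengths. A minimal sketch in Python; the field names follow OpenAI's ai-plugin.json manifest format, but the two manifests here are made up:

```python
import json
import statistics

# Hypothetical ai-plugin.json manifests (invented plugins, real field names).
manifests = [
    '{"name_for_model": "todo", '
    '"description_for_model": "Manage a to-do list."}',
    '{"name_for_model": "weather", '
    '"description_for_model": "Get current weather and forecasts for any city. '
    'Use when the user asks about temperature, rain, wind, or what to wear."}',
]

def description_length(raw_manifest: str) -> int:
    """Length in characters of the description_for_model string."""
    return len(json.loads(raw_manifest)["description_for_model"])

lengths = [description_length(m) for m in manifests]
print(lengths, statistics.median(lengths))
```

Run over the real plugin store, a histogram of these lengths would show whether "about half" of developers really are writing one-liners.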

> Personally, I think ChatGPT's strategy of opening up their language model to plugins makes them a far more compelling product than Google or Bing Search.

Both are evolving. For example, Bard's upcoming features (https://beebom.com/google-bard-ai-best-features/) include a plugin-like feature, plus many that ChatGPT does not have, or cannot have, because they would require building an email and office suite like Google's and Microsoft's.


> That is a rather bold statement without proper evidence.

I based this on a couple of insights:

1. OpenAI itself encourages adding examples to prompts where necessary. All else being equal, a prompt with examples will elicit more specific results than one without.

If the goal of a plugin is to be used by ChatGPT, then adding examples and exhaustively covering the different conversational tracks in which that plugin can be invoked can only help.
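To make this concrete, here is a sketch of what a description_for_model with worked examples might look like. The plugin and its example utterances are hypothetical; only the manifest field names come from the ai-plugin.json format:

```python
# Hypothetical manifest fragment: the description_for_model carries
# few-shot-style hints so ChatGPT knows when to invoke the plugin.
manifest = {
    "schema_version": "v1",
    "name_for_model": "recipe_finder",
    "description_for_human": "Find recipes by ingredient.",
    "description_for_model": (
        "Use this plugin whenever the user wants to cook something. "
        "Examples of conversational tracks that should trigger it: "
        "'What can I make with eggs and spinach?', "
        "'I need a quick vegetarian dinner', "
        "'Give me a dessert that uses no oven.' "
        "Do NOT use it for nutrition or calorie questions."
    ),
}

print(len(manifest["description_for_model"]))
```

The negative example ("Do NOT use it for...") is as useful as the positive ones: it narrows the conversational tracks the model will match against.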

> Is it universal or does the optimal length depend on the plugin complexity?

If the plugin is so trivial that it can be captured by a short description, then a longer description will not be necessary. It remains to be tested what ChatGPT will do if it is asked to choose between two identical plugins with short and long descriptions.

This gets to the heart of why optimizing for AI is so fascinating to me. You're working to attract a non-human intelligence to your door. Of course, I realize that Google's search algorithms are also AI-infused, so we have been doing this for years now, but this is the first time I am actively thinking about how an AI will pick my content over someone else's.


Might the difference be explained by the number of endpoints the plugin exposes, with more endpoints implying more prompt text and more examples?
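That hypothesis is mechanically testable: count the operations in each plugin's OpenAPI spec and compare against its description length. A rough sketch, with a made-up spec (the "paths" structure follows the OpenAPI 3.x layout):

```python
# Hypothetical OpenAPI spec for a plugin with three operations.
openapi_spec = {
    "openapi": "3.0.1",
    "paths": {
        "/todos": {"get": {}, "post": {}},
        "/todos/{id}": {"delete": {}},
    },
}

def endpoint_count(spec: dict) -> int:
    """Number of (path, HTTP method) operations the plugin exposes."""
    return sum(len(methods) for methods in spec["paths"].values())

print(endpoint_count(openapi_spec))  # 3 operations across 2 paths
```

Plotting endpoint_count against description_for_model length across the plugin store would show whether short descriptions simply belong to simple plugins.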

I would check out langchain and gpt-index; much better than the OpenAI stuff.


Yes, LangChain and llama-index are both amazing technologies.

They are separate from plug-ins though.

Plug-ins are ChatGPT-specific (afaik) and are driven by free-form content written by plug-in developers for ChatGPT.

LangChain chains start with human input, then go off to do their own thing, without necessarily involving free-form content written specifically for the model.

Llama-index starts with free-form content written for people and makes it easier for the underlying model to read and infer from it.

The way I see it, there used to be SEO to please PageRank. Soon, there will be GEO to please the generative engine.



