I am finding there is value in having a tight loop between human & machine (LLM). Trying to design LLM "agents" that take a single human command and then run off into the dark forest for a billion tokens is not the right path.
We should be designing "agents" that aggressively involve the user at every possible opportunity. In every context of use, you should have something like an AskUserQuestion() function that can be called in parallel each turn. The user then serves the AI by answering these questions before getting what they originally asked for. Effectively, this walks the user through an information loop they could have short-circuited with a more complete initial prompt, but guiding a user who isn't 100% focused helps a lot.
I thought this was what we were aiming for the entire time, but the goldilocks zone turns out to be more complicated than fly-by-night AI vendors would prefer. Building this requires a deep understanding of the "happy paths" through the business. How can you expose functions to the LLM if you don't know what they are, or if your team disagrees about the process around them?
The ultimate conclusion I wind up with is that very few teams & products are organized enough to plug directly into an LLM agentic cybernetic whatever-you-want-to-call-it. If you can't get the team & customers on the same page about what the product is, I don't know how the AI is going to help. Once you do get everything standardized enough, you may find you no longer care about the purported benefits of AI (you've already realized them).