Seems like it isn't all the protocol? A lot of it is the current tooling. If you have a hardcore dev who knows auth, they can implement to spec and be safe/simple-ish.
MCP Client builders have been asking SDK builders to enforce Structured Output schemas on MCP Servers. The Python SDK has agreed.
Here I assert this is another example of an old paradigm misunderstanding a new one.
Specifically, it is traditional builders and "agentic workflow" builders misunderstanding the various roles in an MCP flow.
Namely, an MCP Client's role is to thinly connect an LLM, a user, and an API... and then get out of the way.
The LLM doesn't gain from wrapping tool output in additional metadata and boilerplate. That just makes the output harder to parse while also making it less similar to the patterns the LLM trained on.
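As a toy illustration of that boilerplate cost (the payload and the envelope field names here are hypothetical, not from any real SDK):

```python
import json

# Hypothetical weather tool result, as the API naturally returns it.
plain = json.dumps({"city": "Oslo", "temp_c": -3, "conditions": "snow"})

# The same result wrapped in the kind of envelope a strict output
# schema tends to impose (illustrative field names only).
wrapped = json.dumps({
    "structuredContent": {
        "type": "object",
        "result": {"city": "Oslo", "temp_c": -3, "conditions": "snow"},
    },
    "isError": False,
    "meta": {"schemaVersion": "1.0", "toolCallId": "call_123"},
})

# The envelope adds tokens without adding information the LLM can use.
print(len(plain), len(wrapped))
```

Every byte of that wrapper is something the model has to parse around, and none of it looks like the raw API responses in its training data.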
The Client doesn't need the structure to help it orchestrate/route tools... it is the LLM's job to orchestrate.
If you want the Client to orchestrate, that's totally fine. Build with Langchain, not MCP.
The reason agentic workflows need strict adherence to structured input/output is because they are so rigid. Each interaction between one step and another is highly coupled and basically "one-shot".
To get "flexibility" across tasks, you have to layer many of these "one-shot" and coupled flows on top of each other. This is what makes agentic workflows like RPA.
Implicitly, we often treat LLMs as one-shot too. If we ask one to do a coding task, it might make up a method name and we say it hallucinated. But that's raising the bar far above what a human faces. A human gets to google the method, or sees the IDE warning that the method doesn't exist. Our experience with flaky LLM coding is largely caused by this "one-shot" assumption.
MCP flows are the opposite of one-shot. The LLM can use a tool, make a mistake with the data structure, read the error, fix it, and move on. There is much less gain in optimizing away that specific mistake, because the LLM still got to the right answer and, with infinite Servers/Tools, it may never run that exact flow again.
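The loop is trivial to sketch. Below is a toy simulation (no real LLM or MCP code; the tool and the "model's" retry are both stand-ins) of a tool returning a readable error that the model fixes on the next call:

```python
def lookup_user(query: dict) -> str:
    """Stand-in tool: expects {'user_id': int}; returns data or a readable error."""
    if "user_id" not in query:
        return "Error: missing required field 'user_id' (int)."
    return f"user {query['user_id']}: active"

# First attempt uses the wrong field name -- the exact mistake a
# strict schema would try to prevent up front.
first = lookup_user({"id": 42})
assert first.startswith("Error")

# The model reads the error text and simply retries with the fix.
second = lookup_user({"user_id": 42})
print(second)
```

No schema enforcement needed: the error message itself is the correction signal, and the conversation keeps moving.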
MCP affords a much more self-correcting and flexible system. To the extent there is an art to improving the LLM-Tool interaction, it lies in the Server builder being thoughtful about tool names, parameter names, and docstrings.
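Concretely, that thoughtfulness looks something like this (plain-function sketch with invented names; in a real server these would be registered as tools, but the point is just the naming and docstrings):

```python
def get(q):
    # Opaque: the model has to guess the name, the argument shape,
    # and what comes back.
    ...

def search_invoices(customer_email: str, status: str = "unpaid") -> list:
    """Return invoices for a customer, filtered by status.

    customer_email: the billing email on the account.
    status: one of 'paid', 'unpaid', 'void'.
    """
    # (stub) A descriptive name, typed parameters, and a docstring are
    # most of the interface the LLM actually sees.
    return []
```

The second signature tells the model nearly everything it needs without any enforced output schema at all.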
Enforcing schemas from the Client makes the least important player (the Client) slightly better off by hamstringing the stars of the show (the LLM and the tool). The real answer: build your Clients differently.
You nailed it. I've been serving banks who have this exact problem. They have to be so careful with their data they can only buy over-engineered solutions that barely meet their actual need.
I once leaked an API key and instantly got an email from GitGuardian informing me of the leak. It was
a) super helpful, as I wouldn't have known otherwise, and
b) seemingly a clever way to spread their name/biz.
thanks! built it solo - nextjs + fastify + mongodb mostly. scanner runs separate n just keeps scanning public github commits in real-time.
yea gitguardian's alerts are super useful, got one myself once too
api radar’s kinda diff tho - more like public dashboard vibes. anyone can just browse and see what kind of keys are leaking n where. useful if ur into redteaming, trends, or just curious lol
Def true when I'm using Gemini with 400k tokens in context... I toggle to Youtube while I wait and have usually started working for a new company by the time I remember to go back to what I was prompting.