Hacker News | dend's comments

One of the MCP Core Maintainers here, so take this with a boulder of salt if you're skeptical of my biases.

The debate around "MCP vs. CLI" is somewhat pointless to me personally. Use whatever gets the job done. MCP is much more than just tool calling - it also happens to provide a set of consistent rails for an agent to follow. Besides, we as developers often forget that the things we build are also consumed by non-technical folks - I have no desire to teach my parents to install random CLIs to get things done instead of plugging a URI to a hosted MCP server with a well-defined impact radius. The entire security posture of "Install this CLI with access to everything on your box" terrifies me.

The context window argument is also an agent harness challenge more than anything else - modern MCP clients do smart tool search that obviates the entire "I am sending the full list of tools back and forth" mode of operation. At this point it's just a trope repeated from blog post to blog post. This blog post alludes to it too and talks about the need for infrastructure to make it work, but that just isn't the case. It's a pattern that's being adopted broadly as we speak.


> modern MCP clients do smart tool search that obviates the entire "I am sending the full list of tools back and forth" mode of operation

This has always surprised me, since it comes up in every MCP discussion. To me, it just seems like a matter of updating the protocol to not have that context-hungry behaviour. Doesn't seem like an insurmountable problem technically.

Glad you say it has already been addressed. Was the protocol itself updated to reflect that? Or are you just referring to off-spec implementations?


Anthropic solved this problem like 3 AI years ago: https://www.anthropic.com/engineering/code-execution-with-mc...

How come there isn't an mcp://add?url=https://username:password@mcpserver.com/path URL so my browser can open my client to auto-install the MCP yet? I shouldn't have to mess with config files to install an MCP, I should be able to just click a button on the site and have my client pop up and ask if I want to install it.
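The `mcp://add` scheme above is hypothetical (nothing like it is in the spec today), but a client could parse such a link with nothing but the standard library. A minimal sketch, assuming the made-up `mcp://add?url=...` shape from the comment:

```python
from urllib.parse import urlparse, parse_qs

def parse_install_link(link: str) -> dict:
    """Parse a hypothetical mcp://add?url=... one-click install link
    into the pieces a client would need to register the server."""
    parsed = urlparse(link)
    if parsed.scheme != "mcp" or parsed.netloc != "add":
        raise ValueError("not an mcp install link")
    params = parse_qs(parsed.query)
    server = urlparse(params["url"][0])
    return {
        # Server URL with credentials stripped out.
        "server": f"{server.scheme}://{server.hostname}{server.path}",
        "username": server.username,
        "password": server.password,
    }
```

A browser registering `mcp://` as a protocol handler could hand the link to the client, which would then prompt the user before persisting anything to its config.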

> modern MCP clients do smart tool search that obviates the entire "I am sending the full list of tools back and forth" mode of operation

How, "Dynamic Tool Discovery"? Has this been codified anywhere? I've only seen somewhat hacky implementations of this idea.

https://github.com/modelcontextprotocol/modelcontextprotocol...

Or are you talking about the pressure being on the client/harnesses as in,

https://platform.claude.com/docs/en/agents-and-tools/tool-us...


More of the latter than the former. The protocol itself is constrained to a set of well-defined primitives, but clients can do a bunch of pre-processing before invoking any of them.
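That client-side pre-processing can be as simple as ranking the registered tools against the user's request and only surfacing the top matches to the model. A deliberately naive sketch (real clients use embedding or BM25-style search; the keyword scoring here is just illustrative):

```python
def select_tools(tools: list[dict], query: str, limit: int = 5) -> list[dict]:
    """Naive client-side tool search: score each tool by keyword
    overlap with the query and keep only the top matches, instead of
    sending the full tool list to the model on every turn."""
    terms = set(query.lower().split())

    def score(tool: dict) -> int:
        text = f"{tool['name']} {tool['description']}".lower()
        return sum(1 for term in terms if term in text)

    ranked = sorted(tools, key=score, reverse=True)
    return [t for t in ranked if score(t) > 0][:limit]
```

The protocol is untouched: the server still answers `tools/list` with everything it has; the client just decides how much of that makes it into the context window.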

Fully agree.

If you don't change your approach but just use CLI "instead of" MCP, you'll end up with a new spin on the same problems. The guardrails MCP provides (identity, entitlement, multi-principal trust boundaries) still need to exist somewhere.

https://forestmars.substack.com/p/twilight-of-the-mcp-idols


Screw MCP. It’s not even a protocol. It’s a strongly worded suggestion at best.

The post isn't MCP vs CLI. It covers where MCP wins.

> The entire security posture of "Install this CLI with access to everything on your box" terrifies me

This is fair for hosted MCPs. However, I'm not claiming the CLI is universally more secure; users need to know what they're doing.

Honestly though, after 20 years of this, the whole thread is debating the wrong layer. A well-designed API works through CLI, MCP, whatever. A bad one won't be saved by typed schemas.

> At this point it's just a trope that is repeated from blog post to blog post

Well, "Use whatever gets the job done" and "it's just a trope" can't both be true. If the CLI gets the job done for some use cases, it's not a trope. It's an option. And I'd argue what's happening is the opposite of a trope. Nobody's hyping CLIs because they're exciting. There's no protocol foundation, no spec committee, no ecosystem to sell into. CLIs are 40-year-old boring technology. When multiple teams independently reach for the boring tool, that's a signal, not a meme.

> This blog post too alludes to this and talks about the need for infrastructure to make it work

When tool search is baked into Claude Code, that's Anthropic building and maintaining the infrastructure for you. The search index, ranking, retrieval pipeline, caching. It didn't disappear. It moved.

And it only works in clients that support it. Try using tool search from a custom Python agent, a bash script, or a CI/CD pipeline. You're back to loading everything.

A CLI doesn't need the client to do anything special. `--help` works everywhere. That's the difference between infrastructure that's been abstracted away for some users and infrastructure that's genuinely not needed.
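The `--help` discovery path really is that generic: any agent harness can shell out for a tool's self-description without the tool or the harness supporting anything special. A small sketch (the helper name is mine; using the Python interpreter as a stand-in binary):

```python
import subprocess
import sys

def describe_cli(command: str) -> str:
    """Ask a CLI to describe itself via --help -- the discovery
    mechanism every shell tool already ships with."""
    result = subprocess.run(
        [command, "--help"], capture_output=True, text=True, timeout=10
    )
    # Some tools print their help text to stderr, so fall back to it.
    return result.stdout or result.stderr

# Any binary works; here the current Python interpreter is the stand-in.
help_text = describe_cli(sys.executable)
```

The agent can then drop `help_text` into its context on demand, paying the token cost only for the tools it actually considers using.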


Core contributor behind this project. If folks have any feedback, we're all ears!


Do you plan on supporting OpenAI Codex or Cursor CLI?


I've installed it, but not yet tried it. Will report back soon :).

Mainly tried Kiro so curious how it compares.


I am curious about how you think about memory.


One of the MCP Core Maintainers here. I want to emphasize that "If you see something, say something" very much works with the MCP community - we've recently standardized on the Spec Enhancement Proposal (SEP) process, and are also actively (and regularly) reviewing the community proposals with other Core Maintainers and Maintainers. If there is a gap - open an issue or join the MCP Contributor Discord server (open for aspiring and established contributors, by the way), where a lot of contributors hang out and discuss on-deck items.


I am just glad that we now have a simple path to authorized MCP servers. Massive shout-out to the MCP community and folks at Anthropic for corralling all the changes here.


Completely agree - and yet auth is still surprisingly difficult to set up, so I built a library to simplify the setup. In this repo, there is:

- a TypeScript library to set up ChatGPT-MCP compatible auth

- Source code for an Express & NextJS project implementing the library

- a URL for the demo of the deployed NextJS app that logs ChatGPT tool calls

Should be helpful for folks trying to set up and distribute custom connectors.

https://github.com/mcpauth/mcpauth


What is the point of a MCP server? If you want to make an RPC from an agent, why not... just use an RPC?


It is easier to communicate and sell that we have this MCP server that you can just plug and play vs some custom RPC stuff.


But MCP deliberately doesn’t define endpoints, or arguments, or return types… it is the definition of custom RPC stuff.

How does it differ from providing a non MCP REST API?


The main alternative one would have for having a plug-and-play (just configure a single URL) non-MCP REST API would be to use OpenAPI definitions and ingesting them accordingly.

However, as someone that has tried to use OpenAPI for that in the past (both via OpenAI's "Custom GPT"s and auto-converting OpenAPI specifications to a list of tools), in my experience almost every existing OpenAPI spec out there is insufficient as a basis for tool calling in one way or another:

- Largely insufficient documentation on the endpoints themselves

- REST is too open to interpretation, and without operationIds (which almost nobody in the wild defines), there is usually context missing on what "action" is being triggered by POST/PUT/DELETE endpoints (e.g. many APIs do a delete of a resource via a POST or PUT, and some APIs use DELETE to archive resources)

- baseUrls are often wrong/broken and assumed to be replaced by the API client

- underdocumented AuthZ/AuthN mechanisms (usually only present in the general description comment on the API, and missing on the individual endpoints)

In practice you often have to remedy that by patching the officially distributed OpenAPI specs to make them good enough for a basis of tool calling, making it not-very-plug-and-play.
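The gaps listed above show up concretely the moment you mechanically map an OpenAPI operation to a tool definition. A sketch of that conversion (field names on the output side mirror common tool-calling schemas; the name-synthesis fallback covers the missing-operationId case):

```python
def operation_to_tool(path: str, method: str, op: dict, base_url: str) -> dict:
    """Convert one OpenAPI operation into a tool-calling definition.
    Synthesizes a tool name when operationId is missing -- which, as
    noted above, is the common case for specs in the wild."""
    name = op.get("operationId") or (
        f"{method.lower()}_"
        + path.strip("/").replace("/", "_").replace("{", "").replace("}", "")
    )
    return {
        "name": name,
        # Fall back through description -> summary -> raw endpoint,
        # degrading in usefulness at each step.
        "description": op.get("description")
        or op.get("summary")
        or f"{method.upper()} {base_url}{path}",
        "parameters": {
            "type": "object",
            "properties": {
                p["name"]: p.get("schema", {"type": "string"})
                for p in op.get("parameters", [])
            },
        },
    }
```

When the spec only offers a one-line summary and no operationId, the model is left guessing what `post_widgets_id` actually does - which is exactly the patching work described above.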

I think the biggest upside MCP brings, all "content"/"functionality" being equal, is that using it instead of plain REST acts as a badge that says "we had AI usage in mind when building this".

On top of that, MCP also standardizes mechanisms like e.g. elicitation that with traditional REST APIs are completely up to the client to implement.


I can’t help but notice that so many of the things mentioned are not at all due to flaws in the protocol, but developers specifying protocol endpoints incorrectly. We’re one step away from developers putting everything as a tool call, which would put us in the same situation with MCP that we’re in with OpenAPI. You can get that badge with a literal badge; for a protocol, I’d hope for something at least novel over HATEOAS.


REST for all the use cases: We have successfully agreed on what words to use! We just disagree on what they mean.


Standardising tool use, I suppose.

Not sure why people treat MCP like it's much more than smashing tool descriptions together and concatenating to the prompt, but here we are.

It is nice to have a standard definition of tools that models can be trained/fine tuned for, though.


Also nice to have a standard(ish) for evolution purposes. I.e. +15 years from now.


Agreed. Without standards, we wouldn’t have the rich web-based ecosystem we have now.

As an example, anyone who's coded email templates will tell you: it's hard. While the major browsers adopted the W3C specs, email clients (i.e. email renderers) never adopted them, or such a W3C email HTML spec never existed. So something that renders correctly in Gmail looks broken in Yahoo Mail, in Safari on iOS, etc.


Standards are very important, especially extensible ones where proposals are adopted when they make sense - this means companies can still innovate but users get the benefit of everything just working.

But browsers/the web ecosystem are still a bad example, as we had decades of browsers supporting their own particular features/extensions. This has converged somewhat, pretty much because everything now uses Chromium underneath (bar Safari and Firefox).

But even so...if I write an extension while using Firefox, why can't I install that extension in Chrome? And vice-versa? Even bookmarks are stored in slightly different formats.

It is a massive pain to align technology like this, but the benefits are huge. Like boxing developers in with a good library (to stop them from doing arbitrary custom per-project BS), I think all software needs to be boxed into standards with provisions for extension/innovation, rather than this pick-and-choose BS because muh lock-in.


Standardization. You spin up a server that conforms to MCP, every LLM instantly knows how to use it.
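Concretely, the standardization means every client speaks the same JSON-RPC 2.0 messages to every server. A sketch of the `tools/list` exchange as the spec defines it (the `get_weather` tool is a made-up example):

```python
import json

# The JSON-RPC 2.0 request any MCP client sends to enumerate a
# server's tools -- identical regardless of who wrote the server.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The shape of the server's response: each tool carries a name, a
# description, and a JSON Schema for its input.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",  # hypothetical example tool
                "description": "Get the forecast for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            }
        ]
    },
}

wire_bytes = json.dumps(list_request)  # what actually goes over the transport
```

Because both shapes are fixed by the spec, a client written against one server works against all of them.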


MCP is an RPC protocol.


Not everyone can code, and not everyone who can code is allowed to write code against the resources I have.


You have to write code for an MCP server, and code to consume it too. It's just that the LLM vendors decided to build the consuming side in, which people question, since they could just as well have done the same for OpenAPI, gRPC, and what not, instead of a completely new thing.


The analogy that was used a lot is that it's essentially USB-C for your data being connected to LLMs. You don't need to fight 4,532,529 standards - there is one (yes, I am familiar with the XKCD comic). As long as your client is MCP-compatible, it can work with _any_ MCP server.


The whole USB C comparison they make is eyeroll inducing, imo. All they needed to state was that it was a specification for function calling.

My gripe is that they had the opportunity to spec out tool use in models and they did not. The client->llm implementation is up to the implementor and many models differ with different tags like <|python_call|> etc.


Clearly they need to try explaining it in easy terms. The number of people that cannot or will not understand the protocol is mind boggling.

I'm with you on an Agent -> LLM industry standard spec need. The APIs are all over the place and it's frustrating. If there was a spec for that, then agent development becomes simply focused on the business logic and the LLM and the Tools/Resource are just standardized components you plug together like Lego. I've basically done that for our internal agent development. I have a Universal LLM API that everything uses. It's helped a lot.


The comparison to USB C is valid, given the variety of unmarked support from cable to cable and port to port.

It has the physical plug, but what can it actually do?

It would be nice to see a standard aiming for better UX than USB C. (Imho they should have used colored micro dots on device and cable connector to physically declare capabilities)


Certainly valid in that, just like various USB-C cables supporting slightly different data rates or power capacities, MCP doesn't deal with my aforementioned issue of the glue between the MCP client and the model you've chosen; that exercise is still left up to us.


My gripe with USB C isn't really on the nature, but on the UX and modality of capability discovery.

If I am looking at a device/cable, with my eyes, in the physical world, and ask the question "What does this support?", there's no way to tell.

I have to consult documentation and specifications, which may not exist anymore.

So in the case of standards like MCP, I think it's important to come up with answers to discovery questions, lest we all just accept that nothing can be done and the clusterfuck in +10 years was inevitable.

A good analogy might be imagining how the web would have evolved if we'd had TCP but no HTTP.


100% agree but with private enterprise this is a problem that can never be solved; everyone wants their lock-in and slice of the cake.

I would say for all the technology we have in 2025, this has certainly been one of the core issues for decades & decades. Nothing talks to each other properly, nothing works with another thing properly. Immense effort has to be expended for each thing to talk to or work with the other thing.

I got a MacBook Air for light dev as a personal laptop. It can't access an Android phone's filesystem with the phone plugged in. Windows can do it. I know Apple's corporate reasoning, but it's just an example of purposeful incompatibility.

As you say, all these companies use standards like TCP/HTTP/Wifi/Bluetooth/USB/etc and they would be nowhere without them - but literally every chance they get they try to shaft us on it. Perhaps AI will assist in the future - tell it you want x to work with y and the model will hack on it until the fucking thing works.


Just to add one piece of clarification - the comment around authorization is a bit out-of-date. We've worked closely with Anthropic and the broader security community to update that part of MCP and implement a proper separation between resource server (RS) and authorization server (AS) when it comes to roles. You can see this spec in draft[1] (it will be there until a new protocol version is ratified).

[1]: https://modelcontextprotocol.io/specification/draft/basic/au...


What percentage of the MCP spec is (was?) LLM output?

It's setting off all kinds of alarm bells for me, and I'm wondering if I'm on to something or if my LLM-detector alarms are miscalibrated.


Can only speak for the authorization spec, where I am actively participating - zero. The entire spec was written, reviewed, re-written, and edited by real people, with real security backgrounds, without leaning into LLM-based generation.


Idk, I'm kind of agnostic and ended up throwing it in there.

Regurgitating the OAuth draft doesn't seem that useful imho, and why am I forced into it if I'm using HTTP? Seems like there are plenty of use cases where an unattended thing would like to interact over HTTP, where we usually use things other than OAuth.

It all probably could have been replaced by:

- The Client shall implement OAuth2

- The Server may implement OAuth2


For local servers this doesn't matter as much. For remote servers - you won't really have any serious MCP servers without auth, and you want to have some level setting done between client and servers. OAuth 2.1 is a good middle ground.

That's also where, with the new spec, you don't actually need to implement anything from scratch. Server issues a 401 with WWW-Authenticate, pointing to metadata for authorization server locations. Client takes that and does discovery, followed by OAuth flow (clients can use many libraries for that). You don't need to implement your own OAuth server.
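The first step of that flow - pulling the authorization server location out of the 401 - is a one-liner for the client. A sketch, assuming the `resource_metadata` challenge parameter the draft spec uses for protected-resource metadata discovery (helper name is mine):

```python
import re

def resource_metadata_url(www_authenticate: str) -> str:
    """Extract the protected-resource metadata URL from the
    WWW-Authenticate header of a server's 401 response. The client
    then fetches that document to discover the authorization server
    and kick off the standard OAuth flow."""
    match = re.search(r'resource_metadata="([^"]+)"', www_authenticate)
    if not match:
        raise ValueError("401 challenge carries no resource_metadata parameter")
    return match.group(1)
```

Everything after this point - metadata fetch, authorization, token exchange - is plain OAuth that off-the-shelf client libraries already handle.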


Bearer tokens work elsewhere and imho are drastically simpler than oauth


But where would you get bearer tokens? How would you manage consent and scopes? What about revocation? OAuth is essentially the "engine" that gives you the bearer tokens you need for authorization.


I know it's not auth-related, but the main MCP "spec" says that it was inspired by LSP (language server protocol). Wouldn't something like HATEOAS be more apt?


Author of the article here (thank you mpweiher for the submission). Pi-Hole has been, hands-down, the best infrastructure investment in our household. At this point I have 2MM+ domains blocked and the performance has been great.


Coordinator of the authorization RFC linked in this post[1].

The protocol is in very, very early stages and there are a lot of things that still need to be figured out. That being said, I can commend Anthropic on being very open to listening to the community and acting on the feedback. The authorization spec RFC, for example, is a coordinated effort between security experts at Microsoft (my employer), Arcade, Hellō, Auth0/Okta, Stytch, Descope, and quite a few others. The folks at Anthropic set the foundation and welcomed others to help build on it. It will mature and get better.

[1]: https://github.com/modelcontextprotocol/modelcontextprotocol...


A nice, comprehensive yet accessible blog post about it can be found here[1], got submitted earlier[2] but didn't gain traction.

[1]: https://aaronparecki.com/2025/04/03/15/oauth-for-model-conte...

[2]: https://hackernews.hn/item?id=43620496


Great news - Aaron has been a core reviewer and contributor to the aforementioned RFC.


Yeah figured he had to be involved and saw his name on the pull request after I posted.

Really enjoyed the article he wrote, just wanted to promote it some more. I learned of several things that will be useful to me beyond MCP.


Impressive to see this level of cross-org coordination on something that appears to be maturing at pace (compared to other consortium-style specs/protocol I've seen attempted)

Congrats to everyone.


Awesome! Thanks for your work on this.


Can't take any credit - it's a massive effort across many folks much smarter than me.


This reminds me of something Adam Smith said in The Wealth of Nations:

"People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."

Ymmv, but I cannot imagine that this "innovation" will result in a better outcome for the general public.


I am working on a reverse-engineered SDK for Stream Deck devices, called DeckSurf:

https://deck.surf

The SDK is open-source and on GitHub:

https://github.com/dend/decksurf-sdk

It's a hobby project, but one that I love working on because it opens up some _really_ great hardware to do anything I want, rather than being constrained by out-of-the-box client software that asks me to sign in with an account just to install an extension.


Shockingly, they do! Quite a few folks that I've talked to recently expressed that they are subscribed to more than one email newsletter and read them fairly consistently.


Yeah, unfortunately, c'est la vie. Renting a domain is still better than not having it.

