Hacker News | resonious's comments

This seems more like a KDE thing than a Wayland thing. At least for me on GNOME, Wayland is strictly better. And the newer Wayland-only desktops like Niri are arguably better than that.

This is the first time I've seen MCP's push capabilities come in handy. I'm not much of an MCP nerd, though, so I don't know much. But when I read the spec, it looked extremely over-engineered, partly because of its two-way nature.

Unfortunately, we're all stuck moving at the speed of the model labs because of the subscription models that they've provided.

The rest of us were able to implement things like push a long time ago, but because Claude Code and Codex stubbed those capabilities out, they weren't really usable for most agent users.

In fairness to OpenAI, they have been generous in allowing, for example, OpenCode to sign in with your ChatGPT subscription – so you _could_ build a more powerful agent (which OpenCode is... not) – but unfortunately GPT's instruction following just isn't up to snuff yet. Hopefully they pre-train something amazing this year!


Completely agreed. The thought that "people emailing each other" is a problem that should be "automated" away is delusional.

Same here but with human children.


How did you get human children?


Hospital has plenty. The key is coming in after hours.


Morgue too, for both assertions above.

(okay, that joke was tasteless, but admit it - you probably giggled before you remembered to be horrified)


Talking to the right women.


I think it's an old stereotype. When Rust started gaining popularity, I did see comments like that. Even felt compelled to post them sometimes. But now that we have real production Rust experience, we're a bit more nuanced in our views.


I'm also having a really good time having LLMs write code in Rust. In TypeScript they tend to abuse `any` or just cast stuff around, bypassing the type system at every corner. In Rust they seem to take compiler errors to heart, and things tend to work well.


You might also have success asking your agent to run `eslint` at the end of every subtask and instructing it to always address lint errors, even if they are preexisting. I agree with your diagnosis: there's "implicit prompting" in Rust being more strongly typed and the agent "knowing" that going in, but we can compensate with explicit prompting. (You do also have to tell it to actually fix the lints, I find, or it will conclude they aren't relevant.)
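As a rough sketch, such an instruction might live in the agent's project config file (e.g. an AGENTS.md or CLAUDE.md; the exact wording, and using eslint's `--max-warnings 0` flag to make any finding a hard failure, are just one way to do it):

```
After completing each subtask:
- Run `npx eslint . --max-warnings 0`.
- Fix every reported problem, including preexisting ones.
- Do not dismiss lint errors as out of scope or irrelevant.
```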


I'll add that agents (CC/Codex) very often screw up escaping/quoting in their bash scripts and waste tokens figuring out what happened. It's worse when it's a script they save and reuse, because it's often a code injection vulnerability.
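The failure mode is easy to reproduce. A minimal sketch (the filename and commands are illustrative, not from any particular agent script): when a command is built by string interpolation, the shell re-parses the data as code; passing the value as a positional argument avoids that.

```shell
#!/bin/sh
# Sketch of the injection: a saved helper interpolates a filename into a
# command string, so a crafted name gets executed as shell code.
file='notes.txt; echo INJECTED'   # attacker-controlled value (illustrative)

# Unsafe: the whole string is re-parsed, so ';' splits it into two commands.
sh -c "cat $file" 2>/dev/null     # prints INJECTED

# Safer: pass the value as a positional parameter; it is never re-parsed.
sh -c 'cat "$1"' _ "$file" 2>/dev/null   # prints nothing; cat just fails
```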


I want them to be better at it, but given how hard it is for me as a human to get it right (which is to say, I get it wrong a lot, especially handling newlines in filenames, or filenames that start with `--`), I find it hard to fault them too much.
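For reference, the two traps mentioned above have standard fixes (`--` to terminate option parsing, and NUL-terminated pipelines for arbitrary filenames). A generic sketch, nothing agent-specific:

```shell
#!/bin/sh
# Demonstrates the two classic filename traps and their standard fixes.
dir=$(mktemp -d) && cd "$dir"
touch -- '--rf'                      # a filename that looks like flags
printf 'x' > "$(printf 'a\nb')"      # a filename containing a newline

# Trap 1: a leading "--" is parsed as options unless parsing is terminated.
rm '--rf' 2>/dev/null || echo "rm treated the name as flags"
rm -- '--rf'                         # "--" ends option parsing; this removes it

# Trap 2: newlines break line-oriented pipelines; NUL terminators are safe.
find . -type f -print0 | xargs -0 wc -c >/dev/null
```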


I agree with a lot of the siblings that it's probably not the same people. But for the overlap that probably does exist, I don't think "because it's AI" is their reasoning. If I were to guess, I'd say it's something closer to "exploring the potential of this new thing is worth the risk to me".


A lot of us are being forced to deploy AI, and have concluded that the built-in security issues are essentially unsolved. So we're stuck.


You're not being forced to deploy OpenClaw, are you? That would be quite concerning!


I think there's some value in pure vibe coding. To your point though, the best way to extract that value is to know intimately at which point the agents tend to break down in quality, which you can only do if you read a lot of their output. But once you reach that level, you can gain a lot by "one-shotting" tasks that you know are within their capacity, and isolating the result from the rest of your project.


How often do the AIs devolve at some task? Or does switching models make those assumptions inaccurate?


It does work well logically but performance is pretty bad. I had a nontrivial Rust project running on Cloudflare Workers, and CPU time very often clocked 10-60ms per request. This is >50x what the equivalent JS worker probably would've clocked. And in that environment you pay for CPU time...


The Rust-JS layer can be slow. But the actual Rust code is much faster than the equivalent JS in my experience. My project would not be technically possible with JavaScript levels of performance.


That's fair and makes sense. In my case it was just a regular web app where the only reason for it being in Rust was that I like the language.


did you profile what made it so slow specifically? sounds waaaaay worse than I would expect


I did. I don't remember the specifics too well, but a lot of it was cold starts. So just parsing and compiling the massive wasm binary was a big part of it. Otherwise it was the matchit library and JS interop marshalling taking the rest of the time.

edit: and it cold started quite often. Even with sustained traffic from the same source it would cold start every few requests.


the JS layer is slow, indeed, but it shouldn't be so much slower that it meaningfully impacts frontend apps

A demonstration of that by the creator of Leptos:

https://www.youtube.com/watch?v=4KtotxNAwME


Fascinating video, thanks for sharing.

