Hacker News | past | comments | ask | show | jobs | submit | miningape's comments

I prefer the old name, it was much more memorable because it's funny.

"Yeah I'm playing with my Coq to try and get it working again"


standup> My coq experiments are promising, but it feels a little harder than it should be, and I'm worried it'll be too slow to deliver without some training and long practice sessions.

This is probably the most insane take I've read all year. As though LLMs don't have an increased chance of borking code when they have to write it multiple times for different platforms. Even LLM users benefit from the existence of libraries that handle cross-platform, low-level implementation details and expose high-level APIs.

A while loop, some prompts basically amounting to "this is how you format a system call" and "make no mistakes", plus a regex + executor for detecting and executing system calls.
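A loop like that could be sketched roughly as below; `call_llm` and `run_syscall` are hypothetical stand-ins for a model API and a command executor, and the call format is invented for illustration:

```python
import re

# Hypothetical prompt: tells the model how to format a call. Not a real API.
SYSTEM_PROMPT = (
    'This is how you format a system call: <call>command args</call>. '
    'Make no mistakes.'
)

# Regex that detects calls the model emits in its reply.
CALL_RE = re.compile(r"<call>(.*?)</call>", re.DOTALL)

def agent_loop(call_llm, run_syscall, task, max_turns=10):
    """Minimal while-loop agent: prompt, detect calls via regex, execute,
    feed results back, stop when the model emits no further calls."""
    history = [SYSTEM_PROMPT, task]
    for _ in range(max_turns):
        reply = call_llm("\n".join(history))
        history.append(reply)
        calls = CALL_RE.findall(reply)
        if not calls:          # no call requested: treat reply as the answer
            return reply
        for call in calls:     # execute each detected call, append the result
            history.append("result: " + run_syscall(call))
    return history[-1]
```

The point of the thread stands either way: the loop itself is trivial, and everything hard lives in the helpers it delegates to.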

You forgot the memory model, which is an absolutely essential and hard-to-design part of the agent.

and occasionally, UI prompts with QA.

Memory model? I would not want an agent to remember previous conversations.

Ever? You would not want to accumulate any useful context at all?

Maybe in the future but with the current models I found the constantly accessible memories to be an impediment. I don't want models to record and repeat mistakes or suboptimal strategies.

Gemini (just in the browser) has been really bad about conflating a bunch of similar projects. It remembers "oh, you have a home server that does XYZ", so my new home server that's doing ZYX instead must be the same system.

Look at the JWT standard: a token usually contains claims such as roles, user IDs, etc.
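For the curious, the payload segment of a JWT is just base64url-encoded JSON; this sketch decodes one *without* verification (real code must check the signature first with a proper JWT library), and the token and claim values here are made up:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (middle) segment of a JWT without verifying it.
    For real use, verify the signature first; this only shows the contents."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token carrying typical claims: subject, role, expiry.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
claims = {"sub": "user-42", "role": "admin", "exp": 1900000000}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = header + "." + payload + "."

assert decode_jwt_payload(token)["role"] == "admin"
```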

If a consultant made the same mistakes, I'd expect the consultant to be held accountable, not the client business that hired the consultancy. The client knew they didn't have the requisite skills and so outsourced to an "expert" (and therefore can't be blamed for not knowing how to secure their software, since they did everything they reasonably could).

In this case the "client" is fully liable for the security issues.


> Wanna get back to hunting for that metal song?

Absolutely flawless save chat


Well, I have something to tell you about the last decade+ of EU immigration policy


This is my take too - LLMs aren't an excuse to lower our standards, they're the reason to raise them.

Put simply, LLMs perform better on better code.


I think what matters more than the abstract-class-vs-if-statement dichotomy is how well something maps to the problem domain, its data structures, and its flows.

Sure, maybe it's fast to write that simple if statement, but if it doesn't capture the deeper problem you'll just keep running headfirst into edge cases. Whereas if you're modelling the problem well, new behaviour comes as a natural extension/interaction in the code with very little tweaking _and_ it covers all edge cases in a clean way.
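A toy contrast of the two styles, with an invented shipping-cost example: instead of an if-chain that grows a new branch per edge case, the weight bands are modelled as explicit rule data, so a new case is a data change rather than a code change.

```python
from dataclasses import dataclass

@dataclass
class Rate:
    max_weight_kg: float  # rule applies to parcels up to this weight
    price: float

# The domain model: an ordered list of weight bands (illustrative numbers).
RATES = [Rate(1, 5.0), Rate(5, 9.0), Rate(float("inf"), 20.0)]

def shipping_cost(weight_kg: float) -> float:
    """Pick the first rate whose weight band covers the parcel."""
    return next(r.price for r in RATES if weight_kg <= r.max_weight_kg)
```

Adding an express tier or a new band means appending a `Rate`, not threading another `elif` through the logic.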


Seems to have been removed; I'm getting a 404.


A brief glance at the other posts makes me feel that this is perhaps not a bad thing. That blog seems quite high on rant and low on substance.


Neopeasant uprising suppressed.


But still getting upvotes...


Who needs to open the link when the author is well known and the title is catchy?


It’s up now.


Different version back up:

> So this isn’t the original post I had here. The original post was AI slop

