Hacker News | past | comments | ask | show | jobs | submit | barrkel's comments

Christy Kinahan is still at large, in or around Dubai.

A false representation of user interest

This, for me, is a current design flaw of Postgres. You expect your database to trade off memory against spilling to disk within the resources you give it and the load it's under. Databases are much like operating systems: filesystems, scheduling, resource management, all the same things an operating system does, a database server needs to implement.

Work_mem is a symptom of punting. It gives the DBA an imprecise tool, and then implicitly offloads the responsibility: to set it safely you'd need global knowledge of how many memory-hungry steps will appear in physical query plans, which depends on statistics, available concurrency, and so on.

The database ought to be monitoring global resource use for the query and partitioning it into stages that free up memory or spill to disk as necessary.

This all fundamentally goes back to the Volcano iterator pull design. Switching to pulling batches instead of rows should already improve performance, and it would leave open the option of a supervisor scheduling execution (query stages as fibers or similar continuation-friendly control flow instead of recursive calls), with potential restarts or dynamic replanning when things like global memory limits are approached. Batches leave more performance margin for heavier calling mechanisms, and also open the door to more vectorized strategies for operators.
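To make the Volcano point concrete, here is a toy sketch (illustrative only, nothing like Postgres internals) contrasting row-at-a-time pull with batch pull through the same scan-then-filter pipeline; the operator names and batch size are made up:

```python
# Row-at-a-time Volcano style: one call into the pipeline per tuple,
# so per-call overhead is paid for every row.
def scan_rows(table):
    for row in table:
        yield row

def filter_rows(child, pred):
    for row in child:
        if pred(row):
            yield row

# Batch style: one call per batch amortizes the calling overhead and
# lets operators work on arrays, which suits vectorized execution.
def scan_batches(table, batch_size=3):
    for i in range(0, len(table), batch_size):
        yield table[i:i + batch_size]

def filter_batches(child, pred):
    for batch in child:
        out = [row for row in batch if pred(row)]
        if out:
            yield out

table = list(range(10))
rows = list(filter_rows(scan_rows(table), lambda r: r % 2 == 0))
batched = [r for b in filter_batches(scan_batches(table), lambda r: r % 2 == 0)
           for r in b]
assert rows == batched == [0, 2, 4, 6, 8]
```

Both pipelines produce the same rows; the difference is how many times control crosses operator boundaries, which is where the performance margin for heavier (supervised, restartable) calling mechanisms would come from.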


I agree. I've known how it works for years, and I think the current setting is a cop-out.

In TFA it's set to a measly 2MiB, yet the query tried to allocate 2TiB. Note that the PG default is double that, at 4MiB.

What the setting does is offload the responsibility of a "working" implementation onto you (or the DBA). If it were just using the 4MiB default as a hardcoded value, one could argue it's a bug and bikeshed forever on what a "good" value is. As there is no safe or good value, the approach would need to be reevaluated.

The core issue is that there is no overall memory management strategy in Postgres, just the implementation.

Which is fine for an initial version: just add a few settings for all the constants in the code and boom, you have some knobs to turn. Unfortunately there is no correct value to set them to; the database may still try to use an unbounded amount of memory.
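The unbounded worst case falls out of simple arithmetic, because work_mem applies per sort/hash operation per process, not per query or per server. A back-of-envelope sketch, with every number below an assumption rather than a measurement:

```python
# Back-of-envelope worst case for work_mem. The limit is per
# sort/hash node, per backend process, so totals multiply.
work_mem_mib = 4          # the PostgreSQL default
sort_hash_nodes = 5       # assumed memory-hungry nodes in one plan
parallel_workers = 4      # assumed processes running that plan
concurrent_queries = 100  # assumed active queries on the server

per_query = work_mem_mib * sort_hash_nodes * parallel_workers
server_wide = per_query * concurrent_queries
print(per_query)    # 80: MiB one query can legitimately claim
print(server_wide)  # 8000: MiB server-wide, from a "4 MiB" knob
```

And that's before hash operations, which can exceed work_mem by a configurable multiplier; the point is that no single value of the knob bounds total memory use.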

The documentation is very transparent about this; just from reading it you can tell they know it's a bad design, or at least an unsolved design issue. It describes the implementation accurately, yet offers nothing further in terms of actual useful guidance on what the value should be.

This is not a criticism of the docs btw, I love the technically accurate docs in Postgres. But it's not the only setting in Postgres which is basically just an exposed internal knob. Which I totally get as a software engineer.

However from a product point of view, internal knobs are rarely all that useful. At this point of maturity, Postgres should probably aim to do a bit better on this front.


There's essential complexity and accidental complexity.

A sufficiently detailed spec need only concern itself with essential complexity.

Applications are chock-full of accidental complexity.


I can taste the difference between a $100 wine and a $400 wine, but it's maybe 20% better, if it's even possible to flatten extra layers of flavour onto a linear scale. The difference is easier to appreciate between quality levels from the same producer. My example is Casanova di Neri Tenuta Nuova vs Cerretalto: basically the same style, the Cerretalto just has extra.

Across different grapes and regions it's apples and oranges. Sometimes I want a savory Burgundy, sometimes I want a Coke. If you don't know what wine from a terroir tastes like, and aren't hankering after it, don't spend extra on it.

I'd generalize that to cheese. You can't beat a good aged Comté (a raw milk cheese), but it's not an everyday cheese.


(I mean, not every day, but every 2-3 days... I'm more of a Saint-Nectaire guy myself)

Claude munches through Ruby just fine, all day long.

As the owner of a 96-core 9995wx: nobody is buying one for desktop PC software, much less laptop-level software.

To justify the investment you need to have tasks that scale out, or loads of heterogeneous tasks to support concurrently.


What tasks are you running on your 96 core 9995wx?


LLVM developer compiling the full LLVM stack every 10 minutes.


Make -j97 presumably. Or MPI jobs.


Right, this is a car-priced CPU and the only rational reason to have one is that you can exploit it for profit. One pretty great reason would be giving it to your expensive software developers so they don't sit there waiting on compilers.


I’ll push back and say there are people who buy it for the desktop, but primarily for workstation-like uses such as simulations.

A ton of my FX artist friends have specced out their home rigs with one or something in its orbit.


I always tell Claude, choose your own stack but no node_modules.

What's missing is another LLM dialog between you and Claude. One that figures out your priorities, your non-functional requirements, and instructs Claude appropriately.

We'll get there.


This perhaps reflects the general divide in viewpoints on “vibe-coding”: do you let go of everything (including understanding) and let it rip, or do you require control and standards to some degree? Current coding agents seem to promote the former; with their approach, the only recourse is to provide them with constraints.

> What's missing is another LLM dialog between you and Claude. One that figures out your priorities, your non-functional requirements, and instructs Claude appropriately.

There are already spec frameworks that do precisely this. I've been using BMAD for planning and speccing out something fairly elaborate, and it's been a blast.


Apart from dexterity, bipedal machines are unstable and require dynamic adjustment to stay upright, as I understand it.

The mechanisms humans use to stay upright after an unexpected loss of balance (flailing, etc.) would not be safe to be around when a robot employs them.
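A toy way to see the "constant dynamic adjustment" point: model the upright robot as a linearized inverted pendulum. With no feedback torque the tilt diverges; with a simple PD controller it settles back to vertical. All constants and gains below are made up for illustration, not taken from any real robot:

```python
# Toy inverted-pendulum balance loop (illustrative, not a real
# controller). Linearized dynamics: theta_ddot = (g/L)*theta + u,
# where u is the commanded corrective acceleration.
g, L, dt = 9.81, 1.0, 0.01   # gravity, leg length, timestep (assumed)
kp, kd = 40.0, 10.0          # made-up PD gains

def simulate(controlled, steps=500):
    theta, omega = 0.1, 0.0  # start slightly off vertical
    for _ in range(steps):
        u = -(kp * theta + kd * omega) if controlled else 0.0
        alpha = (g / L) * theta + u   # angular acceleration
        omega += alpha * dt           # simple Euler integration
        theta += omega * dt
    return abs(theta)

print(simulate(False))  # large: tilt grows without correction
print(simulate(True))   # near zero: active feedback keeps it upright
```

The instability is inherent: the uncontrolled system has a positive eigenvalue, so staying upright is an active control problem, not a passive one, and the corrective motions get more violent the later the disturbance is caught.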


World War 1 was not the kind of war that delivered freedom. It was more the kind that elites entered into without full regard for the costs.


An opinion that formed after the war, but not actually anchored in reality. None of the elites really wanted a war; some levels of the military did. Nicholas II raged against his generals that he did not want to mobilize and send men to their deaths. The German leadership didn't want a war; they thought it was inevitable but that they'd lose. The Austro-Hungarians definitely didn't want a war with Russia, but did want to give the Serbs a black eye for the assassination in Sarajevo, and made a number of bad decisions. The British tried to stop the war, and a number of politicians there wrote about the potential consequences before it happened.

In a way it's sadder than other conflicts: none of the participants entered the war for power or control; they all thought they were defending themselves. Plenty of people knew the human cost would be high. Events and fear and the lack of fast communication just took over. And it set up the conditions for WW2 and probably the Cold War.

