Hacker News | past | comments | ask | show | jobs | submit | the-grump's comments

Rick Roderick on Habermas.

The series "Self Under Siege" is one of my favorite things on YouTube. Highly recommend watching all 8 in order.

https://youtu.be/aXkmmfaZhEg


For the benefit of those like me who haven't heard of Roderick, some further digging leads to this: https://rickroderick.org/max-roderick/

Major, major rabbit hole warning. You think you're about to read something about a philosophy professor, and what you get is an Alice Munro/Larry McMurtry mashup. His son seems to be a pretty amazing writer in his own right.


+1 These are incredible lectures and there's another series from Rick Roderick on the history of western philosophy that I also love: https://youtube.com/playlist?list=PLxPmwaGMOAvsZp9vavFkyxYFZ...

Stealing this brilliant idea. Thank you for sharing!


I wish I could say I came up with it, but it's just a small variation on something I saw here on HN!


For big tasks you can run the plan.md’s TODOs through 5.2 pro and tell it to write out a prompt for xyz model. It’ll usually greatly expand the input. Presumably it knows all the tricks that’ve been written for prompting various models.


That's the first thing that came to mind when I saw this website. The Bay Area is famous for its numerous superfund sites (among many other things, thankfully).


Isn't that literally every modern OS, always, unless you tell it to act differently?


Yes - I didn't mean to imply it was only one of the OSes. Further up the comments people were talking about how memory efficiency is now more important but I was trying to make the point that with compression and virtual memory it still doesn't matter all that much even if memory is double the price.


If running low on memory seems to matter less now than it did a couple of decades ago, I'd say that's mostly because fast SSDs make swapping a lot faster. Even though virtual memory and swapping have been available on PCs since Windows 3.x or so, running out of memory could still make multitasking slow as molasses due to thrashing and the lack of memory for disk cache. The performance hit from swapping is far less noticeable now.

Of course compression being now computationally cheap also helps.


Same reaction. I can't even say the author is correct/wrong because I couldn't get through it.


[flagged]


[flagged]


A PhD-caliber thesis with devastating epistemics about failures that have conservatively claimed a dozen lives is going gray on an 18-year-old account, but this shit is fine.

@dang, I'm now threatening to buy a massive oppo campaign with immaculate data and OpenAI's fundraise hanging by a thread.

Fix it or I'll fix it.


[flagged]


Would you please stop posting like this?


No, I will continue to raise the alarm until YC-affiliated companies and executives stop getting sued for manslaughter or its moral equivalent.

Profanity is not ugly; ugly is ugly, and you back Instacart slave-labor practices with bipartisan objections of disgust.


Dan's offline now. I'm the other moderator here. We can't allow commenting like this to continue without taking any action. It has nothing to do with the targets of your attacks, and everything to do with keeping HN healthy.

HN is not the venue for campaigns against specific individuals or organizations. The people you're referring to are not on HN, and haven't been for a long time. The people here are Dan and me and ordinary HN users. These ongoing outbursts have no effect on the companies and people you're talking about, and serve only to make HN worse for the people who are here.

You're welcome to take whatever action you're legally able to against the people and companies you've mentioned. But we can't have you continually venting this stuff in unrelated HN threads.


Be precise in such serious accusations.

Be precise about the harm done to the community, which I've been part of for longer than its newer members have been alive. A community in which my accurate forecasting of "risk of ruin" type outcomes has error bars between 60-90%.

Be precise about what a healthy HN means, because that's not written down anywhere. The guidelines, such as they are? A masterclass in selective enforcement of blank-check norms for money.

You've got the same dataset I do, and exactly the same access to legitimate authority as opposed to self-arrogated police powers on behalf of public benefit corporations which have neither benefited the public nor a shareholder.

I was here long before you or Dan, and if you ban me, it will be the wedge I need to move this conversation somewhere else.

Let's dance.

edit:

And one more thing: quote a primary source once in a while.

I have better citations ranting than you do larping as an adult:

https://www.wired.com/story/instacart-delivery-workers-still...


If you read this and mount a credible objection that can't be addressed by tweaks to methodology, then I will leave the site forever.

But the asymmetry of the power of selective participation is tyrannical: you engage when you like, your silence is a moral victory by default, and I'm the senior community member by a lot.

https://github.com/b7r6/cassandra-dissertation

6 kids dead, not counting Suchir.

Engage constructively, substantially, and in public, or deal with my press releases.

The data shows black holes in comments and submissions that correlate with Altman. I ran it on myself to not fix anyone. There are other search parameters that are worse; it's open source, proven in Lean 4 to a growing degree, and you win by making an argument, not by being an unelected apparatchik.


This time silence looks guilty, because this time I brought the corpus and the math.

The burden is on you now to show you're not a parrot for goons.


Ok, this is not good for anyone, so I've banned the account until we have some reason to believe that things have stabilized.

I know it may be hard to believe right now, but we appreciate you and your contributions. We can't have users going on tilt on Hacker News though. As I said, experience has taught us that it's not good for anyone.


It's good to see just how far we can push the mods here until we finally face consequences!

Thank you for keeping this place healthy day after day after day.


I feel so weird not being the grumpy one for once.

Can't relate to GP's experience of one-shotting. I need to try a couple of times and really home in on the right plan and constraints.

But I am getting so much done. My todo list used to grow every year. Now it shrinks every month.

And this is not mindless "vibe coding". I insist on what I deploy being quality, and I use every tool I can that can help me achieve that (languages with strong types, TDD with tests that specify system behaviour, E2E tests where possible).


I regret using the term "one-shot", because my reality isn't really that. It's more that the first shot gets the code 80-90% of the way there, usually, and it short-circuits a ton of the "code archaeology" I would normally have to do to get to that point.

Some bugs really can be one-shotted, but that's with the benefit of a lot of scaffolding our company has built and the prompting process. It's not as simple as Claude Code being able to do this out of the box.


I'm on my 5th draft of an essentially vibe-coded project. Maybe it's because I'm using not-frontier models to do the coding, but I have to take two or three tries to get the shape of a thing just right. Drafting like this is something I do when I code by hand, as well. I have to implement a thing a few times before I begin to understand the domain I'm working in. Once I begin to understand the domain, the separation of concerns follows naturally, and so do the component APIs (and how those APIs hook together).


My suggestions:

- like the sister comment says, use the best model available. For me that has been opus but YMMV. Some of my colleagues prefer the OAI models.

- iterate on the plan until it looks solid. This is where you should invest your time.

- Watch the model closely and make sure it writes tests first, checks that they fail, and only then proceeds to implementation

- the model should add pieces one by one, ensuring each step works before proceeding. Commit each step so you can easily retry if you need to. Each addition will involve a new plan that you go back and forth on until you're happy with it. The planning usually gets easier as the project moves along.

- this is sometimes controversial, but use the best language you can target. That can be Rust, Haskell, Erlang depending on the context. Strong types will make a big difference. They catch silly mistakes models are liable to make.
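To make the test-first step concrete, here's a minimal sketch of the loop I ask the model to follow (the function and its behaviour are made up purely for illustration):

```python
import re

# Step 1: write the test before any implementation exists, and run it.
# It should fail (here, with a NameError), which proves the test actually
# exercises new behaviour rather than passing vacuously.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2: only then write the minimal implementation that makes it pass.
def slugify(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse runs of non-alphanumerics
    return text.strip("-")                    # drop leading/trailing dashes

test_slugify()
```

The point isn't the function; it's watching the model confirm the failure first, then commit once the test goes green.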

Cursor is great for trying out the different models. If Opus is what you like, I've found Claude Code to be better value, and personally I prefer the CLI to the VS Code UI Cursor builds on. It's not a panacea though; the CLI has its own issues, like occasionally slowing to a crawl. It still gets the work done.


My options are 1) pay about a dollar per query from a frontier model, or 2) pay a fraction of that for a not-so-great model that makes my token spend last days/weeks instead of hours.

I spend a lot of time on plans, but unfortunately the gotchas are in the weeds, especially when it comes to complex systems. I don't trust these models with even marginally complex, non-standard architectures (my projects center around statecharts right now, and the semantics around those can get hairy).

I git commit after each feature/bugfix, so we're on the same page here. If a feature is too big, or is made up of more than one "big" change, I chunk up the work and commit in small batches until the feature is complete.

I'm running golang for my projects right now. I can try a more strongly typed language, but that means learning a whole new language and its gotchas and architectural constraints.

Right now I use claude-code-router and Claude Code on top of openrouter, so swapping models is trivial. I use mostly Grok-4.1 Fast or Kimi 2.5. Both of these choke less than Anthropic's own Sonnet (which is still more expensive than the two alternatives).


> and personally I prefer the CLI to the vscode UI cursor builds on

So do I, but I also quite like Cursor's harness/approach to things.

Which is why their `agent` CLI is so handy! You can use Cursor in any IDE/system now, exactly like Claude Code or the Codex CLI.


I tried it when it first came out and it was lacking then. Perhaps it's better now--will give it a shot when I sign up for cursor again.

Thank you for sharing that!


When you say “iterate on the plan” are you suggesting to do that with the AI or on your own? For the former, have any tips/patterns to suggest?


With the AI. I read the whole thing and correct the model where it makes mistakes, fill the gaps where I find them.

I also always check that it explicitly states my rules (some from the global rules, some from the session up until that moment) so they're followed at implementation time.

In my experience opus is great at understanding what you want and putting it in a plan, and it's also great at sticking to the plan. So just read through the entire thing and make sure it's a plan that you feel confident about.

There will be some trial and error before you notice the kind of things the model gets wrong, and that will guide what you look for in the plan that it spits out.


> Maybe its because I'm using not-frontier models to do the coding

IMO it’s probably that. The difference between where this was a year ago and now is night and day, and not using frontier models is roughly like stepping back in time 6-12 months.


What has worked for me is having multiple agents do different tasks (usually in different projects) and doing something myself that I haven't automated yet.

e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.

The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.


You missed the entire point of the strong static typing.


I don’t think I did. I am one of the very few people who have had paying jobs doing Scala, Haskell, and F#. I have also had paying jobs doing Clojure and Erlang: dynamic languages commonly used for distributed apps.

I like HM type systems a lot. I’ve given talks on type systems, and in grad school I was working on extending type systems to deal with these particular problems. This isn’t meant to be a statement on types in general; I am arguing that most type systems don’t encode much of the uncertainty you find when going over the network.


You're conflating types with the encoding/decoding problem. Maybe your paying jobs didn't provide you with enough room to distinguish between these two problems. Types can be encoded optimally with a minimally-required bits representation (for instance: https://hackage.haskell.org/package/flat), or they can be encoded redundantly with all default/recovery/omission information, and what you actually do with that encoding on the wire in a distributed system with or without versioning is up to you and it doesn't depend on the specific type system of your language, but the strong type system offers you unmatched precision both at program boundaries where encoding happens, and in business logic. Once you've got that `Maybe a` you can (<$>) in exactly one place at the program's boundary, and then proceed as if your data has always been provided without omission. And then you can combine (<$>) with `Alternative f` to deal with your distributed systems' silly payloads in a versioned manner. What's your dynamic language's null-checking equivalent for it?
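For readers outside Haskell, here's a rough Python analogue of the "handle the `Maybe` once at the boundary" idea (names and payload shape are illustrative, not any particular library's API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str
    age: int

def decode_user(raw: dict) -> Optional[User]:
    """Boundary decoder: the only place that null-checks fields."""
    name = raw.get("name")
    age = raw.get("age", 0)  # versioned fallback for older payloads
    if not isinstance(name, str) or not isinstance(age, int):
        return None  # reject the whole payload; don't patch it downstream
    return User(name=name, age=age)

# Downstream code never re-checks for missing fields:
user = decode_user({"name": "Ada"})
assert user is not None and user.age == 0
```

The Haskell version gets this shape via `<$>` and `Alternative` over the decoded `Maybe`; the point is only where the checks live, not which language you write them in.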


With all due respect, you can use all of those languages and their type systems without recognizing their value.

For ensuring bits don't get lost, you use protocols like TCP. For ensuring they don't silently flip on you, you use ECC.

Complaining that static types don't guard you against lost packets and bit flips is missing the point.


With all due respect, you really do not understand these protocols if you think “just use TCP and ECC” addresses my complaints.

Again, it’s not that I have an issue with static types “not protecting you”, I am saying that you have to encode for this uncertainty regardless of the language you use. The way you typically encode for that uncertainty is to use an algebraic data type like Maybe or Optional. Checking against a Maybe for every field ends up being the same checks you would be doing with a dynamic language.

I don’t really feel the need to list out my full resume, but I do think it is very likely that I understand type systems better than you do.


Fair enough, though I feel so entirely differently that your position baffles me.

Gleam is still new to me, but my experience writing parsers in Haskell and handling error cases succinctly through functors was such a pleasant departure from my experiences in languages that lack typeclasses, higher-kinded types, and the abstractions they allow.

The program flowed happily through my Eithers until it encountered an error, at which point that was raised with a nice summary.

Part of that was GHC extensions though they could easily be translated into boilerplate, and that only had to be done once per class.

Gleam will likely never live up to that level of programmer joy; what excites me is that it’s trying to bring some of it to the BEAM.

It’s more than likely your knowledge of type systems far exceeds mine—I’m frankly not the theory type. My love for them comes from having written code both ways, in C, Python, Lisp, and Haskell. Haskell’s types were such a boon, and it’s not the HM inference at all.


> ends up being the same checks you would be doing with a dynamic language

Sure thing. Unless dev forgets to do (some of) these checks, or some code downstream changes and upstream checks become gibberish or insufficient.


I know everyone says that this is a huge issue, and I am sure you can point to an example, but I haven’t found that types prevented a lot of issues like this any better than something like Erlang’s assertion-based system.


When you say "any better than", are you referring to the runtime vs. compile-time difference?


Yes they can. The size of many codebases is much larger and LLMs can handle those.

Consider also that they can generate summaries and tackle the novel piecemeal, just like a human would.

Re: movies. Get YouTube premium and ask YouTube to summarize a 2hr video for you.


A novel is different from a codebase. In code you have relationships between files, and most files can be ignored depending on what you're doing. But a novel is sequential: in most cases A leads to B, B leads to C, and so on.

> Re: movies. Get YouTube premium and ask YouTube to summarize a 2hr video for you.

This is different from watching a movie. Can it tell what suit the actor was wearing? Can it tell what the actor's face looked like? Summarising and watching are two different things.


Yes, it is possible to do those things and there are benchmarks for testing multimodal models on their ability to do so. Context length is the major limitation but longer videos can be processed in small chunks whose descriptions can be composed into larger scenes.

https://github.com/JUNJIE99/MLVU

https://huggingface.co/datasets/OpenGVLab/MVBench

Ovis and Qwen3-VL are examples of models that can work with multiple frames from a video at once to produce both visual and temporal understanding:

https://huggingface.co/AIDC-AI/Ovis2.5-9B

https://github.com/QwenLM/Qwen3-VL


You’re moving the goalposts. Gary Marcus’ proposal was being able to ask: Who are the characters? What are their conflicts and motivations? etc.

Which is a relatively trivial task for a current LLM.


The Gary Marcus proposal you refer to was about a novel, and not a codebase. I think GP's point is that motivations require analysis outside of the given (or derived) context window, which LLMs are essentially incapable of doing.


People don't want to face the music but the way we're fishing is completely unsustainable.

The way we live on land is unsustainable too, of course.


There's been a massive reduction in the song of blue whales. Almost halved. They are presumably starving.

That something so ginormous can be so elegant, beautiful, and sleek is hard to conceive until one meets a blue whale. Let's let them thrive on the blue planet.


The Blue Whale population has actually increased since the 70s. When they were critically endangered, their population numbered roughly 1,000-2,000 but population estimates for today put the number at roughly tenfold that. The 1966 worldwide moratorium on whaling has been incredibly successful and we’ve also seen recoveries in Humpback and Grey Whales.


Compared to numbers at peak whaling, you are correct. I was commenting on a more recent phenomenon.

https://www.nationalgeographic.com/animals/article/ocean-hea...

http://archive.today/2025.09.03-030523/https://www.nationalg...


We keep talking about “sustainability” but sustainability is a secondary issue here.

The primary issue is that we are taking individuals and basically torturing and/or killing them, rarely for good reasons.

It won’t even take decades before our descendants look back in horror at how we treat them, not unlike how we can’t even imagine how our ancestors thought it was OK to have human slaves.

The major difference will be that the horrors of human chattel slavery (even the name clearly links it to how we treat non human animals) have largely only been recorded via text. The horrors of our actions will be available in text, images, videos for all to see in perpetuity by just looking at an Instagram archive.


So we need to extract resources from space ASAP, now that the planet can't sustain the entire human race.


Adding more resources doesn't solve the problem that they aren't being managed sustainably. We can't exhaust all the resources in space, but we could definitely exhaust all of the resources accessible to us in space. Like how we can't exhaust all of the oil or all of the gold on this planet, but we could exhaust all of the resource which can be mined economically.

This was once explained to me with a metaphor of a bacteria colony in a jar. The colony doubles every 24 hours. So they quickly exhaust the space in the jar. No problem, you give them another jar. 24 hours later, their population doubles, and they have filled both jars.
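To put numbers on the metaphor (just a quick sketch): each added jar buys exactly one more doubling period, no matter how many jars you add.

```python
import math

def days_until_full(jars: int, doublings_per_day: int = 1) -> float:
    """Days for a colony that fills 1 jar on day 0 to fill `jars` jars."""
    return math.log2(jars) / doublings_per_day

# A second jar buys a single day; a thousand jars buy only ten.
assert days_until_full(2) == 1.0
assert days_until_full(1024) == 10.0
```

That's the trouble with exponential consumption: added capacity is swallowed logarithmically slowly, so "more jars" never keeps up.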


Yes, it does


Would you care to elaborate?


Resource intensity of GDP has been falling for decades, most quickly in developed economies. Space-based resource extraction isn’t going to be radically cheaper (if it is ever cheaper) than terrestrial sources with known propulsion, so that balance is unlikely to shift. Ergo, replacing terrestrial extraction with moderately cheaper space-based extraction would reduce harm to our ecosystem without changing our economies to turbo-consume materials and thereby accelerate terrestrial extraction.


I agree it may reduce harm (depending on how the actual costs shake out), but the calculus remains that if you have access to finite resources but your needs are expanding exponentially, and you are not recycling them in some way, you will run out of resources no matter how many you have.

I'm not opposed to exploiting resources in space, I think we should pursue the goal of being an "interplanetary species", but I think it's important to understand that it isn't a silver bullet or a free lunch. We still have to change our economy to be more sustainable.

Not to mention that it is not clear that exploiting space resources or becoming interplanetary is possible. I presume that it is. But we shouldn't bank our future on something unproven. We don't know if we're a decade away from mining our first asteroid or a century. We should assume that our future is here on Earth with the resources currently available to us, until proven otherwise.


> if you have access to finite resources but your needs are expanding exponentially

Our material needs in many categories are not expanding exponentially. On a per-capita basis, in advanced economies, it's been flat in several categories.

If anything, the constraints of spacefaring seem perfect for nudging a culture and economy towards conservation and recycling. Building lunar and Martian colonies requires short-term sustainability in a way that does not have clean parallels on Earth.

> we shouldn't bank our future on something unproven

Nobody is banking on space-based resource extraction.

> We should assume that our future is here on Earth with the resources currently available to us, until proven otherwise

Bit of a paradox to this. On one hand, sure. On the other hand, given two civilisations, one which assumes space-based resource extraction and one which does not, which do you think is going to get there first?


> On a per-capita basis, in advanced economies, it's been flat in several categories.

Right, but our population is, at this time, growing exponentially. That may change but hasn't yet.

> If anything, the constraints of spacefaring seem perfect for nudging a culture and economy towards conservation and recycling.

Quite possibly! I agree. But what I was saying is that getting access to resources does not solve sustainability. If anything this is an argument that sustainability is a prerequisite for space travel and not the other way around.

> Nobody is banking on space-based resource extraction.

I understand this is not your position, and I appreciate that your position is reasonable and informed. But it is what was being discussed when you joined the conversation. And it is something I hear people say all the time.

Specifically, this is what I was responding to: https://hackernews.hn/item?id=46563421

> [Which] do you think is going to get there first?

Are these hypothetical civilizations on the brink of unlocking space travel? Or are they 100 years away? The civilization hell-bent on space is likely to burn itself out and replace its leadership with people with more grounded ideas if unlocking space travel isn't a realistic possibility for it. If space travel is right around the corner, then my expectation would be that the grounded civilization freaks out about national security and joins the space race in earnest. I think in either scenario, all else equal, it's a coin flip. The tortoise and the hare both have viable strategies given the right conditions.

This is kinda sorta what happened in the space race. The USSR pursued rockets aggressively and took a massive early lead, believing that ICBMs were the answer to the USA's dominance in bomber aircraft. But they couldn't sustain that pace. If I recall correctly, by the time we landed on the moon they hadn't launched a mission in years. The USA more or less gave up on manned space travel and space colonization shortly thereafter. Obviously both continued to explore space, and the tide is beginning to change, but I think that's a natural experiment which roughly addresses this question. (Not to the exclusion of future attempts with better technology going better.)


> our population is, at this time, growing exponentially

Not in advanced (i.e. materially intensive) economies. And global population models are currently all aiming towards stabilization.

> this is an argument that sustainability is a prerequisite for space travel and not the other way around

How so? Without space travel, there is no near-term incentive to develop those technologies. (The terrestrial incentives are all long term.)

> Are these hypothetical civilizations on the brink of unlocking space travel? Or are they 100 years away?

China and America are technologically within a decade of establishing Moon and Mars bases. Not permanent, independent settlements. But settlements that need to be as self-sustaining as possible nevertheless on account of launch costs and travel time.

> that's a natural experiment which roughly addresses this question

I see a different reading. We got a lot of sustainability-progressing technology out of the space race.

Alignment with the goal of human colonization wasn’t yet there. But there are reasons to be optimistic with modern materials, bioengineering and computational methods. Methods that could very easily also yield literal fruits that make our economies more sustainable at home.


> Without space travel, there is no near-term incentive to develop those technologies.

Of course there is. Our climate is getting less hospitable, right now, in our lifetime. Storms are stronger, wildfires are more frequent and severe, we're beginning to strain our fresh water aquifers, etc. We are seeing really alarming rates of decline of flying insect biomass and other signs of an ecosystem in distress, and that ecosystem provides us with trillions of dollars of value. There is no human industry without our ecosystem to support us.

Solar, wind, etc. are also getting more and more competitive with fossil fuels, providing a purely monetary incentive.

And if we disregard all long term incentives, who cares about space? Even if we use very optimistic figures we're not going to be exploiting extraterrestrial resources for a few decades. And if we encounter significant setbacks (which I have to imagine we will) that take quite a long time.

> China and America are technologically within a decade of establishing Moon and Mars bases.

I'll believe it when I see it. But if this is true, then wouldn't you say, by your logic, that this is a near term incentive for developing sustainable technologies?

> But there are reasons to be optimistic...

I agree. I don't think we really disagree in principle on any of this. I think we have different values and different levels of skepticism (or perhaps are skeptical of different things) but broadly/directionally agree.


> if we disregard all long term incentives, who cares about space?

The short-term incentives string together into a long-term plan.

> Our climate is getting less hospitable, right now, in our lifetime. Storms are stronger, wildfires are more frequent and severe, we're beginning to strain our fresh water aquifers, etc. We are seeing really alarming rates of decline of flying insect biomass and other signs of an ecosystem in distress, and that ecosystem provides us with trillions of dollars of value

Which has been enough urgency to do what exactly?

> Solar, wind, etc. are also getting more and more competitive with fossil fuels

Great example of folks pursuing short-term profit incentives making progress towards a long-term goal.

> if this is true, then wouldn't you say, by your logic, that this is a near term incentive for developing sustainable technologies?

If we try. Yes. If we gut those programmes, no. (For the technology benefits we just have to try.)

> we have different values and different levels of skepticism (or perhaps are skeptical of different things) but broadly/directionally agree

I think so too.

I think some people are motivated by stewardship and others by exploration. Focusing one one at the expense of the other is a false economy. And pursuing both doesn’t necessarily mean a long-term trade-off.


> The short-term incentives string together into a long-term plan.

Sustainability isn't different in this regard. Eg, algae farming is a promising way to produce protein and fix carbon. But the economics aren't there yet, so commercial algae farms are pursuing higher margin markets like supplements and inputs to cosmetics rather than food (with the notable exception of feed in aquaculture like salmon).

When solar was still very expensive it was deployed in weird environments like satellites and oil fields where the grid wasn't available. But that proved to be stepping stones to a much larger solar industry.

> Which has been enough urgency to do what exactly?

Lots of stuff, for instance the €5.5B MOSE seawall built by Venice and various projects along the Colorado River to secure the water supply of the southwest USA (to pick just one, Las Vegas' Third Straw at around $1.3B). When disasters happen we obviously spend hundreds of millions to a billion+ on cleanup, rescue, etc. It's also been urgent enough to drive a huge amount of research, advocacy, etc.

Granted, those are fairly low numbers in the scheme of things. But the inaction is driven by interests who stand to lose money, not by economic rationality. I don't think even those monied interests are acting rationally in the long run. They're protecting their interests in the short term but greatly jeopardizing them in the long term. That's a similar false economy/market failure.


now this guy's just a straight shooter with upper management written all over him.


[flagged]


Poor countries actually have less impact on our planet than wealthy ones.

There are many ways to handle population control, not only controlling natality. It wouldn't be popular, but you could imagine mandatory euthanasia at 55 or 60, for example.


"Poor countries have actually less impact on our planet than wealthy ones."

In the near future, richer countries will wage war against "poor" countries to take their water sources.

This is what's going to happen; not if, but when.

With ever-growing technological achievement, billions more people to come, and the desire to consume finite resources, what do you think is going to happen?


No, we need to reduce dramatically our own population.


That sort of thinking needs to first and almost-entirely be directed at China, India and Africa, then we can talk about sustainability and what the West can do.


The West wastes far more resources and contributes more to global warming than these three continents/countries combined.

