pennomi's comments

Not at orbital speeds you can’t. You’re being deliberately obtuse.

The Blue Origin hate is mostly about how opaque the program is compared to SpaceX.

Surely they could put a traditional upper stage on Super Heavy and just go directly to the moon, no? I’m not sure what the obsession with second stage reuse is, because you lose almost all your margin.

I'm not sure what the obsession with airplane reuse is. Why not just build a new one for each flight?

You don’t gain additional margin throwing away an airplane. Reuse is a lovely idea but the rocket equation is a harsh mistress.
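The margin point can be made concrete with the Tsiolkovsky rocket equation. A minimal sketch, assuming made-up upper-stage numbers (Isp, masses) that are illustrative only and not any real vehicle's:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2


def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)


# Illustrative (made-up) upper-stage numbers:
isp = 350.0          # vacuum Isp, seconds
prop = 100_000.0     # propellant, kg
dry = 10_000.0       # dry mass, kg
payload = 15_000.0   # payload, kg

# Expendable: burn everything.
dv_expendable = delta_v(isp, prop + dry + payload, dry + payload)

# Reusable: carry 5 t of recovery hardware (heat shield, legs) and
# reserve 10 t of propellant for landing. Both come straight out of
# performance, because the final mass in the log ratio goes up.
reuse_hw = 5_000.0
landing_prop = 10_000.0
dv_reusable = delta_v(
    isp,
    prop + dry + reuse_hw + payload,
    dry + reuse_hw + landing_prop + payload,
)

print(f"expendable: {dv_expendable:.0f} m/s")
print(f"reusable:   {dv_reusable:.0f} m/s")
```

With these toy numbers the reusable configuration gives up well over a kilometer per second of delta-v, which is the "you lose almost all your margin" effect: reuse mass is charged at the worst possible place, the top of the log ratio.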

What citizens are asking for age verification? I have met exactly ZERO people who want this. It’s only the authorities who want it.

I think you’ll have to do multi-shot generation to correct this; each diffusion pass is going to represent a single “thought”.

Though with the speed it’s running that’s not necessarily a deal breaker. I suspect diffusion models will need different harnesses to be effective.


I have been saying this for a while now. We have barely scratched the surface on both algorithmic and hardware optimizations for AI. I suspect we will get many orders of magnitude of speedup on high quality AI.

The real question is if it ends up “smart enough” or we take that extra compute budget and push the boundary further. Right now it seems making the models larger really only works up to a certain point.


The big problem with AI has been that it has always been so energy intensive compared to biological intelligence. However, once you bake the models into ASICs, suddenly the power consumption goes way down, and moreover the inference WILL be ~250X faster than it currently is (which is already on par with the speed of a human thinking).

That's a very scary inflection point. Imagine, in 24 months, an Opus 4.6 level diffusion-based model etched directly onto silicon using the latest TSMC process node.

At that point knowledge work will be incredibly commoditized.

I have Opus 4.6 one-shotting recreations of 90s videogames for less than the inflation adjusted cost of buying those original games when they were released! Now cut that cost down by 250X!


The “a token is a token” effect makes LLMs really bad at some things humans are great at, and really good at some things humans are terrible at.

For example, I quickly get bored looking through long logfiles for anomalies but an LLM can highlight those super quickly.
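A minimal sketch of that log-scanning workflow: chunk a long logfile and hand each chunk to a model for anomaly flagging. Here `flag_anomalies` is a hypothetical stand-in for a real LLM call, stubbed with a trivial keyword heuristic so the example runs; the chunking loop is the part that carries over to a real harness.

```python
# Chunk a long logfile and flag anomalous lines per chunk.
# `flag_anomalies` is a hypothetical placeholder for an LLM call,
# stubbed here with a keyword heuristic so the sketch is runnable.
def flag_anomalies(chunk: list[str]) -> list[str]:
    return [ln for ln in chunk if "ERROR" in ln or "Traceback" in ln]


def scan_log(lines: list[str], chunk_size: int = 200) -> list[str]:
    """Scan `lines` in fixed-size chunks, collecting flagged lines."""
    hits: list[str] = []
    for i in range(0, len(lines), chunk_size):
        hits.extend(flag_anomalies(lines[i:i + chunk_size]))
    return hits


# One buried anomaly in ~1000 lines of routine output.
log = ["INFO ok"] * 500 + ["ERROR disk full"] + ["INFO ok"] * 500
print(scan_log(log))
```

The chunk size would be tuned to the model's context window in a real setup; the stub exists only so the control flow can be demonstrated end to end.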


We’d better solve the energy usage and cooling first, otherwise that will be a very spicy body mod.

Yeah, feeding that speed into a reasoning loop or a coding harness is going to revolutionize AI.

When humans fail a task, it’s obvious there is no actual intelligence or understanding.

Intelligence is not as cool as you think it is.


It can still be cool, but maybe it's just not as rare.

I assure you, intelligence is very cool.

