adammarples's comments

What's the point of training a neural net on outputs from the original 3 rules so it can effectively just relearn them?

Kinda agree. Training the network with RL instead, penalizing collisions and rewarding collecting something like food, would be interesting.

As long as the birds can't change direction too quickly (e.g. output acceleration, not velocity), I'd guess you get flocking.
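
A minimal sketch of what that reward could look like (all names and constants here are hypothetical, assuming the policy outputs an acceleration vector and the environment exposes neighbour and food positions):

    import numpy as np

    def reward(pos, accel, neighbor_pos, food_pos,
               collision_radius=1.0, food_radius=2.0, accel_penalty=0.01):
        # Penalize near-collisions, reward reaching food, and lightly
        # penalize large accelerations so turns stay smooth.
        dists = np.linalg.norm(neighbor_pos - pos, axis=1)
        r = -float(np.sum(dists < collision_radius))       # collision penalty
        if np.linalg.norm(food_pos - pos) < food_radius:
            r += 10.0                                      # food reward
        r -= accel_penalty * float(np.linalg.norm(accel))  # smoothness term
        return r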


I agree that this would be a more interesting approach; I think you might need more incentives to create a flock though (aerodynamic benefits, predator protection, etc.)

Awesome idea!

Based on the article, the noids approach has better performance and is able to run on a GPU, while the Sterling implementation must run on a CPU.
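
I haven't seen their code, but the classic three rules vectorize into all-pairs array ops, which is roughly why a GPU implementation wins. A toy NumPy sketch (swap in CuPy or JAX arrays for the GPU version; this is illustrative, not the article's actual code):

    import numpy as np

    def boids_step(pos, vel, dt=0.1, radius=5.0, w_sep=1.5, w_ali=1.0, w_coh=1.0):
        # pos, vel: (N, 2) arrays. The whole update is a few batched
        # all-pairs operations, which is the shape of work GPUs excel at.
        diff = pos[None, :, :] - pos[:, None, :]         # diff[i, j] = pos[j] - pos[i]
        dist = np.linalg.norm(diff, axis=-1)             # (N, N) pairwise distances
        near = (dist < radius) & (dist > 0)              # neighbour mask
        n = near.sum(axis=1, keepdims=True).clip(min=1)  # neighbour counts
        sep = -(diff * near[..., None]).sum(axis=1) / n  # steer away from neighbours
        ali = (vel[None, :, :] * near[..., None]).sum(axis=1) / n - vel  # match velocity
        coh = (pos[None, :, :] * near[..., None]).sum(axis=1) / n - pos  # move to centroid
        vel = vel + dt * (w_sep * sep + w_ali * ali + w_coh * coh)
        return pos + dt * vel, vel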

What happens when you want to simulate millions of rules?

What if they weren't noids?


Yeah, this is well-presented slop.

Thanks!

The entire point of a race to the bottom is that your competitors keep reducing their prices until those billions disappear.

Well done for figuring it out

Presidents don't. That's because presidents don't unilaterally dictate trade policy, or start wars, or micromanage immigration policy. Congress is supposed to do the work of running the country. However, this president has sidestepped all the usual ways that the US runs, under the guise of "emergency" powers.

It's hilarious that the about page doesn't tell you anything about the project

In what way is a brush a program? How do I program it? I've spent 5 minutes and clicked on every single thing I can find, but I give up looking.

If you hover next to the brush in Edit mode as well, there's a tiny pencil icon there that opens that brush's code. Sorry it was hard to discover!

Click "Editor", then click "Add a brush".

It's way cooler on desktop than mobile.

Constantly reported on in the FT

This happened because you and so many others switched this weekend.

If the prompt is the compass and represents a point in space, why walk there? Why not just go to that point in image space directly? What would be there? Why does the random seed matter if you're aiming at the same point anyway? Don't you end up there? Does the prompt vector not exist in the image manifold, or is there some local sampling done to pick images that are better represented in the training data?

So I’m not an expert (this post was just based on my understanding), but as I understand it: the prompt embedding space and the latent image space are different “spaces”, so there is no single “point” in the latent image space that represents a given prompt. There are regions that are more or less consistent with the prompt, and due to cross-attention between the text embedding vector and the latent image vector, the model is able to guide the diffusion process in a suitable direction.

So different seeds lead to slightly different endpoints, because you’re just moving closer to the “consistent region” at each step, but approaching from a different angle.


One way of thinking about diffusion is that you're learning a velocity field from unlikely to likely images in the latent space, and that field changes depending on your conditioning prompt. You start from a known starting point (a noise distribution), and then take small steps following the velocity field, eventually ending up at a stable endpoint (which corresponds to the final image). Because your starting point is a random sample from a noise distribution, if you pick a slightly different starting point (seed), you'll end up at a slightly different endpoint.

You can't jump to the endpoint because you don't know where it is - all you can compute is 'from where I am, which direction should my next step be.' This is also why the results for few-step diffusion are so poor - if you take big jumps over the velocity field you're only going in approximately the right direction, so you won't end up at a properly stable point which corresponds to a "likely" image.
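
To make the "can't jump" point concrete: the sampling loop is really just Euler integration of that learned field. A toy sketch, assuming some trained model velocity(x, t, prompt_emb) exists (the name and setup are hypothetical):

    import numpy as np

    def sample(velocity, prompt_emb, shape, steps=50, seed=0):
        # The seed picks the random starting point in latent space,
        # which is why different seeds land at different endpoints.
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(shape)
        dt = 1.0 / steps
        for i in range(steps):
            # Each model call only gives the local direction, so we have
            # to walk; fewer, bigger steps means cruder integration.
            x = x + dt * velocity(x, i * dt, prompt_emb)
        return x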


and?

The cost to a company like Amazon or Google of losing its piece of that $1T annual budget is greater than its exposure to the failure of Anthropic.

Not according to published financials.

Also, $1T is dishonest. The DoD spends less than 0.1% of that on cloud services.


Source?

Half of that budget gets contracted out to Lockheed, Raytheon, Northrop, Boeing, General Dynamics, etc. Those companies absolutely do spend money on the hyperscalers.


Great. So you've gone down from $1T to "half of that budget".

If you're honest with yourself, you'll find the true number.


Obviously, I was never suggesting that the DoD spends $961b a year on cloud computing.

Look, it’s a very simple question: Amazon has invested $8b into Anthropic. Do you think that if the DoD disappeared tomorrow, Amazon would lose more than $8b in revenue over the next 5 years?

I think you underestimate how large the DoD budget is and how many times that money changes hands in the pursuit of fulfilling contracts. $20b-$25b in revenue per year across all hyperscalers is a totally reasonable estimate.
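
Back-of-envelope version of that, to make the assumptions explicit (the pass-through rate is purely an illustrative guess, not data):

    budget = 961e9             # DoD topline figure from this thread
    contracted = 0.5 * budget  # ~half flows out to prime contractors
    pass_through = 0.045       # ASSUMED share ending up as hyperscaler revenue
    print(f"${contracted * pass_through / 1e9:.0f}b/yr")  # -> $22b/yr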


Why on earth would you compare $8 billion of equity investment in another company (which is likely worth far more now) to $8b of revenue?
