Simulating fluids, fire, and smoke in real-time (andrewkchan.dev)
784 points by ibobev on Dec 19, 2023 | 169 comments


As a person who did a PhD in CFD, I must admit I never encountered the vorticity confinement method and curl-noise turbulence. I guess you learn something new every day!

Also, in industrial CFD, where the Reynolds numbers are higher, you'd never want something like counteracting the numerical method's artificial dissipation by applying noise. In fact, quite often people want artificial dissipation to stabilize high-Re simulations! Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.
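For anyone curious what vorticity confinement actually looks like in code, here's a minimal 2D sketch in NumPy of the usual Fedkiw-style formulation: compute the scalar vorticity, form a unit vector N pointing toward regions of high vorticity magnitude, and add a force along N × ω. The function name and parameters are mine, not from the article.

```python
import numpy as np

def vorticity_confinement(u, v, eps, h):
    """Vorticity-confinement body force on a collocated 2D grid.
    u, v: velocity components; eps: confinement strength; h: grid spacing."""
    # Scalar vorticity in 2D: w = dv/dx - du/dy (axis 0 = y, axis 1 = x)
    w = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)
    mag = np.abs(w)
    # N points toward regions of higher vorticity magnitude
    gy, gx = np.gradient(mag, h)
    norm = np.sqrt(gx**2 + gy**2) + 1e-12  # avoid divide-by-zero
    Nx, Ny = gx / norm, gy / norm
    # f = eps * h * (N x w): in-plane N crossed with out-of-plane vorticity
    fx = eps * h * Ny * w
    fy = -eps * h * Nx * w
    return fx, fy
```

The force is added to the velocity field each step; it re-injects the small swirls that the advection scheme smears out, which is exactly the "counteracting artificial dissipation" trick described above.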


> Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.

The first rule of real-time computer graphics has essentially always been "Cheat as much as you can get away with (and usually, even if you can't)." Also, it doesn't even have to look right, it just has to look cool! =)


“Everything is smoke and mirrors in computer graphics - especially the smoke and mirrors!”


This was the big realization for me when I got into graphics - everything on the screen is a lie, and the gestalt is an even bigger lie. It feels similar to how I would imagine it feels to be a well-informed illusionist - the fun isn’t spoiled for me when seeing how the sausage is made - I just appreciate it on more levels.


My favorite example of the lie-concealed-within-the-lie is the ‘Half-Life Alyx Bottles’ thing.

Like, watch this video: https://www.youtube.com/watch?v=9XWxsJKpYYI

This is a whole story about how the liquid in the bottles ‘isn’t really there’ and how it’s not a ‘real physics simulation’ - all while completely ignoring that none of it is real.

There is a sense in which the bottles in Half-Life Alyx are ‘fake’ - that they sort of have a magic painting on the outside of them that makes them look like they’re full of liquid and transparent. But there’s also a sense in which the bottles are real and the world outside them is fake. And another sense in which it’s all just tricks to decide what pixels should be what color 90 times a second.


I want to see that shader. How is sloshing implemented? Is the volume of the bottle computed on every frame?

Clearly, there's some sort of a physics simulation going on there, preserving the volume, some momentum, and taking gravity into account. That the result is being rendered over the shader pipeline rather than the triangle one doesn't make it any more or less "real" than the rest of the game. It's a lie only if the entire game is a lie.


Is it really doing any sloshing though? Isn't it "just" using a plane as the surface of the liquid? And then adding a bunch of other effects, like bubbles, to give the impression of sloshing?
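Valve hasn't published the full shader as far as I know, so this is just a toy sketch of the approach people usually describe: the liquid surface is a plane whose normal chases the bottle's object-space gravity direction via a damped spring (that's the "sloshing"), and the shader treats a fragment as liquid if it lies below that plane. All names and constants here are illustrative, not Valve's.

```python
import numpy as np

def update_liquid_plane(n, n_vel, gravity_obj, dt, stiffness=40.0, damping=6.0):
    """Advance the liquid-plane normal `n` toward the object-space 'up'
    direction with a damped spring, giving a cheap wobble when the
    bottle is tilted or shaken. Returns the new (n, n_vel)."""
    target = -gravity_obj / np.linalg.norm(gravity_obj)  # surface normal opposes gravity
    accel = stiffness * (target - n) - damping * n_vel
    n_vel = n_vel + accel * dt          # symplectic Euler: velocity first,
    n = n + n_vel * dt                  # then position
    return n / np.linalg.norm(n), n_vel

def is_liquid(p, n, fill_height):
    """Shader-side test: point p (object space) is liquid if below the plane."""
    return float(np.dot(p, n)) < fill_height
```

The extra effects mentioned above (bubbles, foam at the surface line) would then be layered on top of this plane test to sell the impression of real sloshing.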


This is the perfect example of what I meant. So many great quotes in this about the tricks being stupid and also Math. Also the acknowledgment that it’s not about getting it right it’s about getting you to believe it.


At some point every render-engine builder goes through the exercise of imagining purely physically-modeled photonic simulation. How soon one gives up on this computationally intractable task with limited marginal return on investment is a signifier of wisdom/exhaustion.

And, yes, I've gone way too far down this road in the past.


I have heard the hope expressed that quantum computers might solve that one day, but I'll believe it when I see it.

Till then, I have some hope that native support for raytracing on the GPU will allow for more possibilities.


Not being a graphics person, is this what hardware ray tracing is, or is that something different?


Raytracing doesn't simulate light; it simulates a very primitive idea of light. There's no diffraction, no interference patterns. You can't simulate the double-slit experiment in a game engine, unless you explicitly program it.

Our universe has a surprising amount of detail. We can't even fully simulate the simplest molecular interactions. Even a collision of two hydrogen atoms is too hard - the required time and space resolution is insanely high, if not infinite.


And what’s more, it simulates a very primitive idea of matter! All geometry is mathematically precise, still made of flat triangles that only approximate curved surfaces; every edge and corner is infinitely sharp; and everything related to materials and their interaction with light (diffuse shading, specular highlights, surface roughness, bumpiness, transparency/translucency, and so on) is still the simplest possible model that gives a somewhat plausible look, raytraced or not.
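For a sense of just how simple "simplest possible models" means: classic Lambert diffuse plus Blinn-Phong specular is a few lines, with all the surface microstructure collapsed into a single shininess exponent. A hedged sketch (vectors assumed unit-length; names and the scalar-intensity simplification are mine):

```python
import numpy as np

def shade(n, l, v, albedo, shininess):
    """Lambert diffuse + Blinn-Phong specular, as a single scalar intensity.
    n, l, v: unit surface normal, light direction, view direction."""
    diff = albedo * max(float(np.dot(n, l)), 0.0)   # Lambert: cos of incidence angle
    h = l + v                                       # half-vector between light and eye
    hn = np.linalg.norm(h)
    # Specular lobe: higher shininess = tighter highlight = "smoother" surface
    spec = max(float(np.dot(n, h / hn)), 0.0) ** shininess if hn > 1e-9 else 0.0
    return diff + spec
```

Everything a real surface does - subsurface scattering, Fresnel, anisotropy - is either bolted on as another term or simply ignored.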


Btw there is some movement towards including wave-optics into graphics instead of traditional ray-optics: https://ssteinberg.xyz/2023/03/27/rtplt/


This is very cool. Thx for sharing


Raytracing simulates geometrical optics. That it doesn't take interference patterns into account is therefore a limitation inherent to the model - true, of course, but irrelevant for most applications.

There are other effects (most notably volume scattering) which could be significant for the rendered image and which can be simulated with raytracing, but are usually neglected for various reasons, often because they are computationally expensive.
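To make "geometrical optics" concrete: per bounce, a ray tracer does little more than intersect a straight line with geometry and apply the mirror-reflection law - direction only, no phase, so interference simply has nowhere to live. A minimal sketch (unit direction vectors assumed; names are mine):

```python
import numpy as np

def ray_sphere(o, d, c, r):
    """Nearest positive hit t of ray o + t*d against sphere (center c, radius r),
    or None if the ray misses. d must be unit-length."""
    oc = o - c
    b = float(np.dot(oc, d))
    disc = b * b - (float(np.dot(oc, oc)) - r * r)
    if disc < 0.0:
        return None                      # ray misses the sphere entirely
    t = -b - np.sqrt(disc)               # nearer of the two intersections
    return t if t > 1e-6 else None       # reject hits behind the origin

def reflect(d, n):
    """Mirror reflection about unit normal n: the whole 'physics' of a bounce."""
    return d - 2.0 * float(np.dot(d, n)) * n
```

A path tracer is this plus importance-sampled scattering directions and a running throughput, repeated per bounce - still rays, still no wave behavior.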


> You can't simulate the double-slit experiment in a game engine, unless you explicitly program it.

Afaik you can't even simulate the double-slit experiment in high-end offline VFX renderers without custom programming.


I wanted to make a dry joke about renderers supporting the double slit experiment, but in a sense you beat me to it.


There sure has been a lot of slits simulated in Blender.


Even ray tracing / path tracing is half-fake these days, because it's faster to upscale and interpolate frames with neural nets. But yeah, in theory you can simulate light realistically.


It’s still a model at the end of the day. Material properties like roughness are approximated with numerical values instead of being physical features.

Also light is REALLY complicated when you get close to a surface. A light simulation that properly handles refraction, diffraction, elastic and inelastic scattering, and anisotropic material properties would be very difficult to build and run. It’s much easier to use material values found from experimental results.


If I understood Feynman’s QED at all, light gets quite simple once you get close enough to the surface. ;) Wasn't the idea that everything’s a mirror? It sounds like all the complexity comes entirely from surface variation - a cracked or ground-up mirror is still a mirror at a smaller scale but has complex aggregate behavior at a larger scale. Brian Greene’s string theory talks send more or less the same message.


Sure, light gets quite simple as long as you can evaluate path integrals over the literally infinite number of possible paths that each contributing photon could take!

Also, light may be simple but light interaction with electrons (ie. matter) is a very different story!


Don't the different phases from all the random paths more or less cancel out, and significant additions of phase only come from paths near the "classical" path? I wonder if this reduction would still be tractable on GPUs to simulate diffraction.


That’s what I remember from QED, the integrals all collapse to something that looks like a small finite-width Dirac impulse around the mirror direction. So the derivation is interesting and would be hard to simulate, but we can approximate the outcome computationally with extremely simple shortcuts. (Of course, with a long list of simplifying assumptions… some materials, some of the known effects, visible light, reasonable image resolutions, usually 3-channel colors, etc. etc.)
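That collapse is easy to see numerically. Here's a toy stationary-phase demo (my own setup, not from any of the papers): sum phasors exp(ikL(x)) over mirror points x between a source and a detector. For a large wavenumber, a small window around the specular point x = 0 reproduces nearly the whole integral, because the far paths cancel each other out:

```python
import numpy as np

# Source and detector above a mirror lying along y = 0.
src, det = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
k = 200.0  # wavenumber; larger k = shorter wavelength = sharper cancellation

x = np.linspace(-5.0, 5.0, 20001)   # candidate mirror points
dx = x[1] - x[0]
# Path length source -> (x, 0) -> detector for each mirror point:
L = np.hypot(x - src[0], src[1]) + np.hypot(x - det[0], det[1])
phasors = np.exp(1j * k * L)

total = (phasors * dx).sum()        # contribution of *all* paths
win = np.abs(x) < 0.5               # small window around the specular point
central = (phasors[win] * dx).sum() # near-classical paths only
# `central` lands close to `total`: the off-specular phasors spin rapidly
# and cancel, leaving the stationary-phase region to dominate.
```

Which is exactly the "small finite-width impulse around the mirror direction" described above - and why a renderer can get away with a reflection vector instead of a path integral.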

I just googled and there’s a ton of different papers on doing diffraction in the last 20 years, more than I expected. I watched a talk on this particular one last year: https://ssteinberg.xyz/2023/03/27/rtplt/