The point is that if you’re not doing perf analysis and logging/graphing but just want to display FPS to the player, last-frame FPS is pretty useless because the number changes too fast.
Didn't seem AI-generated to me. Just the short, three-word-sentence pithy style that's become really popular these days and that LLMs have learned to ape. But IMO it actually works well here; it reminds me of Peter Watts's very human style (cf. e.g. https://www.rifters.com/crawl/?p=11546).
And SUB is also always a single cycle on any practically useful architecture since the 70s. Theoretical archs where SUB might be slower than XOR don't matter.
You misunderstood the GP - they were criticizing the way some programmers use "code should be self-documenting" as an excuse when they actually mean "I'm too lazy to write comments even when I really should". Just like "premature optimization is bad" may in fact mean something like "I never bothered to learn how to measure and reason about performance".
I would expect pausing to bring a game's CPU/GPU usage down to near-zero, which won't happen if the game keeps redundantly rendering the exact same frame. A game engine can optimize this by special-casing time scale zero to simply render a single textured quad under the pause menu (which is probably what one of the commenters in TFA referred to).
You would expect it to do that, and I'd say that's a desirable behaviour, but it's not really that simple and you certainly don't get that for free.
Typically, none of the common modern engines with a "time scale" variable like that are optimising anything in that way.
It's likely that the physics engine won't be stepped with a zero delta time, which will reduce the time spent on physics, but that's more of a byproduct of how physics engines work[0] than an optimisation.
You would have to go out of your way to capture the scene and display it "under the pause menu" in that way.
Not saying nobody does that, just that it's not something the engine is giving you for free nor is it related to the time scale variable.
Further, doing that won't necessarily reduce resource usage.
For example, if there isn't some sleep time inserted in the main loop when in a menu, or v-sync[1] to limit the framerate, the simplified scene (just the menu and the quad with the captured scene) renders at an extremely high framerate, which may or may not cook the hardware more than the in-game load.
[0] Typical rigidbody physics engines are only (what I'll call) predictably stable with a constant delta time (same dt every tick). And a common way to manage this is with a time accumulator main loop that only steps physics for whole units of dt.
[1] And v-sync isn't a silver bullet. Consider refresh rates, different hardware, different drivers, driver overrides.
You can set x87 to round each operation result to 32-bit or 64-bit.
With this setting it operates internally on exactly those sizes.
Operating internally on 80 bits is just the default setting, because it is the best for naive users, who are otherwise prone to computing erroneous results.
This is the same reason why the C language has made "double" the default precision in constants and intermediate values.
Unless you do graphics or ML/AI, single-precision computations are really only for experts who can analyze the algorithm and guarantee that it is correct.