Hacker News | past | comments | ask | show | jobs | submit | fweimer's comments

What language is this article talking about where compilers don't optimize multiplication and division by powers of two? Even for division of signed integers, current compilers emit inline code that handles positive and negative values separately, still avoiding the division instruction (unless optimizing for size, of course).

Well, Sawyer started writing Transport Tycoon in 1992, when free or affordable C compilers were not as widely available. Turbo C was never known for optimizations. GCC 1.40 was good enough for Linus, but I guess Chris was already a good assembly programmer.

That's what I would have thought as well, but it looks like on x86, both clang and gcc use variations of LEA. But if they're doing it this way, I'm pretty sure it must be faster, because even if you change the ×4 to a <<2, it will still generate a LEA.

https://godbolt.org/z/EKj58dx9T


Not only is LEA more flexible, I believe it's preferred to SHL even for simple operations because it doesn't modify the flags register, which can make it easier to schedule.

It's more about the non-destructive destination part, which can avoid a move. Compilers tend to prefer SHL/SAL over LEA because its encoding is shorter: https://godbolt.org/z/9Tsq3hKnY

shlx doesn't alter the flags register.

SHLX does not support an immediate operand. Non-destructive shifts with immediate operands only arrive with APX, where they are among the most commonly used instructions (besides paired pushes/pops).

They use LEA for multiplying by small constants up to 9 (not only powers of two, but also 3, 5, and 9; even more values could be reached with two LEAs, but it may not be worthwhile).

For multiplying by powers of two greater than or equal to 16, they use a shift left, because LEA can no longer be used.


Using an lea is better when you want to put the result in a different register than the source and/or you don't want to modify the flags register. shlx also avoids modifying flags, but you can't shift by an immediate, so you need to load the shift count into a register beforehand. In terms of speed, all these options are basically equivalent, although with very slightly different costs to the instruction caches and to register renaming in the scheduler. In terms of execution, a shift is always 1 cycle on modern hardware.

It was written in assembly, so it goes through an assembler instead of a compiler.

I assume GP is talking about the bit in the article that goes

> RCT does this trick all the time, and even in its OpenRCT2 version, this syntax hasn’t been changed, since compilers won’t do this optimization for you.


That makes more sense, and I second their sentiment: modern compilers will do this. I guess the trick is knowing to use numbers that have these options.

There was a recent article on HN about which compiler optimizations do and don't occur, and it was surprising in two ways: the compiler would make some optimizations you might not expect, and it would not make others that you would, because in some obscure calling path the optimization didn't apply. Fixing that path would usually get the expected optimization.

It allows review of the way the merge conflict has been resolved (assuming those changes are tracked and presented in a useful way). This can be quite helpful when backporting select fixes to older branches.

To make this more confusing, there is a fast mode of Opus 4.6, which (as far as I understand it) is supposed to deliver the same results as the standard mode. It's much more expensive, but advertised as more interactive.

But the real scenario is going to be different in two ways: the market capitalization of the new company will be only a small fraction of the index total, even after it's been inflated as indicated. And not all investors in companies on the index are index funds, which brings down the number of shares needed to align a fund.

Maybe they propose the rule change because it adjusts for some other problematic effect of the existing index rules? The discontinuity might seem acceptable because it is unlikely to be reached according to their simulations.


COBOL certainly had a lasting impact, but only in some application domains. The rest didn't seem to be particularly successful or impactful. Maybe RAD counts if you include office application macros and end-user report generation in it. (Spreadsheets extended programming to non-programmers and had a long-lasting impact, but I wouldn't call them RAD.)

That's sorta the point... at one time or another, those were all projected to be 'the end of programming as we know it'...

Hell, COBOL's origins were in IBM wanting to make programming an 'entry-level' occupation.

Oddly enough, spreadsheets had a huge impact (and still run a lot of companies behind the scenes :-P ). But I can't remember anyone claiming they would 'end programming'?


What's the one successful one? Visicalc?

I would say the one that definitely changed programming was moving on from the punch card era. A lot of these others that people are mentioning I don't think really changed programming; they just looked like they were going to.

The last time I used a check was close to thirty years ago. I assume ahartmetz's experience is similar.

Many countries have functioning giro systems. The U.S. is just an outlier.


I've never written a check, but I have had to deposit occasional checks. In the last six years, the only checks I've received were first paychecks at a new job (before direct deposit was set up) and my COVID stimulus checks.

The x86-64 build runs about 50% more linker tests than the i686 build.


If you do not properly MIME-decode email, you end up with at least some base64-encoded conversations.


Mature Software Product Support without Sustaining Engineering through at least 31-Dec-2028

Apparently, it's out of support the same way RHEL 6 is out of support.

