Hacker News

Checking the index in every iteration of a loop will effectively double the number of operations performed per iteration in this scenario. That is, O(2n) versus O(n).
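For concreteness, the pair of loops being compared might look something like this (a sketch; the assumption here is that the "index check" means one extra comparison per pass, which may not match the article's actual code):

```javascript
// Sketch (assumption: the "index check" is an extra per-iteration
// comparison against a bound; the article's actual code may differ).
function sumPlain(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];                 // one "real" op per iteration
  }
  return total;
}

function sumWithCheck(arr, limit) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    if (i < limit) {                 // the extra per-iteration check
      total += arr[i];
    }
  }
  return total;
}
```

The "double the operations" claim is about the second version doing roughly one comparison plus one addition per pass instead of just the addition.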


Not really; things like branch prediction and JIT optimizations kick in pretty fast. This is why these benchmarks on jsperf suck: the code gets a different 'fresh' version every time, and JITs (like Crankshaft in V8) don't get a chance to work.

Effectively, you're checking 'the performance of interpreted JavaScript' most of the time.


I bet my bottom dollar that branch prediction is not going to magic away that index dependent if statement in a language like JS, even with Crankshaft in full effect.


Here, it's within 20% of the performance. Definitely not n vs 2n http://jsperf.com/oh-look-branches


The actual percentage depends on the CPU. On mobile the cost is likely higher (e.g. on my phone it's around 33%), and on older CPUs it can be higher still, but people don't usually care about those these days.


Hi! Hope you liked the plugs here. Write more!


But the code is actually doing a whole bunch of things, let's say 20, with each loop. So it's O(21n) vs. O(20n).

2n implies looping twice or something, not a single extra operation.

(and 20 is probably a gross underestimate if we're comparing console.log to a mere ==)


I made an assumption, and you're making an assumption too. I'd agree that in any case where it actually matters, you'd be doing a small number of ops in the if statement. The complexity change depends on how much work the loop body does relative to a single if statement; if the body were a large number of ops, then yes, the difference would be more minor.


My point is that "O(2n)" is an extremely specific and misleading way to characterize 'extra conditional per iteration'. It doesn't matter what the actual percentage difference is.


I agree, now what is the correct notation so we all can learn? :-)


It's completely pointless to use such a notation here in lieu of description or benchmarks, but if you insist then use something like O(kn) vs. O((k+1)n)
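Spelled out, the reason the notation is pointless here is that big-O is defined only up to constant factors, so for any fixed per-iteration op count k the extra comparison vanishes:

```latex
(k+1)\,n \;\le\; 2k\,n \quad \text{for } k \ge 1,
\qquad\Rightarrow\qquad
O\big((k+1)\,n\big) \;=\; O(k\,n) \;=\; O(n).
```

Both versions of the loop are O(n); only the hidden constant differs, which is exactly what a benchmark (not asymptotic notation) measures.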


I'd expected the JS interpreter to optimize that out. Turns out I was wrong, and the version from the article is even 40% slower for a loop that iterates over 5000 elements and does a trivial amount of work:

http://jsperf.com/for-loop-vs-for-in-loop/18


You're not giving the JIT a chance to warm up at all. Also, you're degrading into 'slow mode' because your array has different types. Basically, it looks like you're doing a lot of bad things to the JIT on purpose :P

Both the different types and the delete (array with holes) are _very_ harmful for performance in v8.
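For anyone unfamiliar with the two triggers being described, here's a sketch of each (assumed shapes for illustration; this isn't the code from the jsperf link):

```javascript
// 1. Mixed element types: V8 tracks an array's element kind and has to
//    generalize to a slower, generic representation when types are mixed.
const mixed = [1, 2.5, 'three', {}];  // forces the generic element kind

// 2. delete leaves a hole, turning a packed array into a holey one, so
//    every subsequent read needs an extra hole check.
const holey = [1, 2, 3, 4];
delete holey[1];                      // index 1 is now a hole
console.log(1 in holey);              // false: the element is gone...
console.log(holey.length);            // 4: ...but the length is unchanged
```

Note that `delete` never shrinks the array; if you actually want to remove an element without creating a hole, `splice` keeps the array packed.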


Arrays with holes are called sparse arrays. I googled 'sparse array v8 performance' and didn't find much, but I've previously seen a Google presentation on a slide-sharing site that mentioned how poorly sparse arrays perform compared to normal contiguous arrays.
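To illustrate the terminology (a minimal sketch, not a performance demo):

```javascript
// A contiguous array: every index from 0 to length-1 holds a value.
const packed = [0, 0, 0];

// A sparse array: length says 1001, but only one element actually exists,
// so the engine can't use a simple contiguous backing store.
const sparse = [];
sparse[1000] = 1;
console.log(sparse.length);               // 1001
console.log(Object.keys(sparse).length);  // 1
```

Iterating `sparse` with a plain for loop still visits all 1001 indices, reading `undefined` (with a hole check) for the 1000 missing ones, which is part of why the benchmarks above degrade.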


That's not how big O notation works. You need to eliminate constant multipliers because you can't make any assumptions about the amount of work each iteration involves. It certainly doesn't correlate directly to machine instructions.



