
Is there a reason you couldn't use whole numbers in place of decimals?


The problem still exists. It's less a floating-point issue specifically and more a product of the limited number of states any fixed-width data type can hold. Integers and floating-point numbers are both still 64-bit on most modern computers, unless you use big-int structures, which are much more computationally expensive.
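
For example, a quick Python sketch of both limits:

    # 64-bit floats have a 53-bit mantissa, so integers above 2^53
    # can no longer be represented exactly.
    print(float(2**53) == float(2**53) + 1.0)  # True -- 2^53 + 1 rounds back down

    # Python ints are arbitrary precision, so emulate a fixed 64-bit
    # unsigned integer by masking: it silently wraps on overflow.
    MASK = (1 << 64) - 1
    x = (1 << 64) - 1            # largest 64-bit unsigned value
    print((x + 1) & MASK)        # 0 -- wrapped around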


Granted, I'm no mathematician, but...

If we have a limit of 2^64 for $x, can't we set $y to represent multiples of $x's upper limit?

$y = 5, meaning 5 * 2^64

What's the reason for not being able to sub-divide numbers and operations into smaller, more manageable forms?
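
Something like this, sketched in Python? (A toy illustration of storing a number as base-2^64 "limbs", not how any real big-int library lays it out.)

    BASE = 1 << 64  # each limb is one base-2^64 "digit"

    def to_limbs(n):
        """Split a non-negative int into little-endian base-2^64 limbs."""
        limbs = []
        while n:
            limbs.append(n % BASE)
            n //= BASE
        return limbs or [0]

    def add_limbs(a, b):
        """Schoolbook addition, one limb at a time, carrying between limbs."""
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            out.append(s % BASE)
            carry = s // BASE
        if carry:
            out.append(carry)
        return out

Though I gather the cost is what the parent mentions: a number spread across k limbs needs O(k) work just to add, and O(k^2) for schoolbook multiplication, versus a single instruction for a native 64-bit value.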


Eh, wasn't that rule circumvented with Conway? You can compress more data into a given numeric format by moving the complexity into rule systems, aka deterministic games: like a big number of chess games, or a group of possible games growing from just a sequence of letters and numbers.

The tradeoff for this limited space, though, is a complex ruleset, and recreating the data takes lots of computation.
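
A toy version of that idea in Python (hypothetical, just to show the shape of the tradeoff): a long pseudo-random byte string "stored" as nothing but a seed and a length. The description is tiny, but recreating the data costs work proportional to its size.

    import random

    def expand(seed, n):
        """Deterministically regenerate n bytes from a tiny description."""
        rng = random.Random(seed)
        return bytes(rng.getrandbits(8) for _ in range(n))

    # The "compressed" form is just (seed, length): a few bytes standing
    # in for a megabyte, at the price of a megabyte's worth of computation.
    data = expand(42, 1_000_000)
    print(len(data))  # 1000000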



