My answer to that is that there's no way to know in advance whether 64 bits (or any other number) is enough. Tech history is full of examples of arbitrary limits defined on the assumption that nobody could possibly need to exceed them, only for everyone to need to exceed them later on.
Is 64 bits enough? Maybe. Maybe not. But if you're wanting to future-proof, then setting an arbitrary limit is not the way to go.
I'd agree, but the historical computing restrictions were usually about hitting ceilings of 8, 12, or 16 bits, back when those bits were very expensive.
32 bits is just about large enough (in an order-of-magnitude sense) if the space were used more efficiently.
128 bits I've heard described as "every atom in the universe" big. If so, then 64 is probably enough for every atom on Earth.
Now I've just thought of another angle, similar to UUIDs. They are used because they can be assigned randomly without worry of collision. But I don't think IPv6 addresses are being assigned randomly, hmm.
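The "random assignment without collisions" argument can be made concrete with the standard birthday-bound approximation p ≈ n²/2N (the function name and the specific n here are illustrative, not from the thread):

```python
# Birthday-bound sketch: the probability that n IDs drawn uniformly at
# random from a space of 2**bits values collide is roughly n^2 / (2 * 2**bits)
# when that probability is small.
def collision_probability(n: int, bits: int) -> float:
    return n * n / (2.0 * 2**bits)

# A billion random UUIDv4 values (122 random bits): collisions are
# vanishingly unlikely.
print(collision_probability(10**9, 122))  # on the order of 1e-19

# A billion random 64-bit IDs: collisions become a realistic concern.
print(collision_probability(10**9, 64))   # roughly a few percent
```

This is why 128-bit-class identifiers work for uncoordinated random assignment while 64-bit ones start to get risky at global scale.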
> 32 bits is just about large enough (in an order-of-magnitude sense) if the space were used more efficiently.
Well, not really. Between just the populations of the US, Europe and China (places with high levels of internet connectivity), you have over 2 billion people (this site claims over 5 billion internet users: https://www.statista.com/topics/1145/internet-usage-worldwid...).
> But I don't think IPv6 addresses are being assigned randomly, hmm
That is, in fact, exactly how IPv6 addresses are assigned using SLAAC with Prefix Delegation. Your ISP assigns you a prefix, and your computer randomly picks an address within it. You can also self-assign a non-routable prefix (analogous to 10.0.0.0/8 in IPv4) from the fc00::/7 ULA block by randomly filling in the remaining 57 bits to form a /64 subnet.
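A minimal sketch of that ULA self-assignment, using Python's standard `ipaddress` module. (This is illustrative: RFC 4193 actually recommends the fd00::/8 half of the block and derives the random bits from a hash of a timestamp and MAC address rather than a plain random draw.)

```python
import ipaddress
import secrets

def random_ula_subnet() -> ipaddress.IPv6Network:
    """Pick a random /64 subnet inside the fd00::/8 half of fc00::/7."""
    # fd00::/8 leaves 56 bits to randomize before the 64-bit host part.
    random_bits = secrets.randbits(56)
    prefix_int = (0xFD << 120) | (random_bits << 64)
    return ipaddress.IPv6Network((prefix_int, 64))

subnet = random_ula_subnet()
print(subnet)  # e.g. something like fd12:3456:789a:bcde::/64
```

Hosts on the subnet then fill in the remaining 64 host bits themselves, which is the SLAAC part of the story.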
> Between just the populations of the US, Europe and China (places with high levels of internet connectivity), you have over 2 billion people
You have to consider the context: back then, multi-user computers were common. Each user didn't have their own computer; instead, they had a terminal to connect to a central computer. So a single computer would serve tens or hundreds of people, and as computers became more powerful, you could expect each computer to be able to serve even more people.
Not really. For a world population under ten billion, it would be plenty if used efficiently: a significant fraction of addresses we want to be private, and not directly routable, anyway.
Sure, if you want your refrigerator, oven, and dishwasher publicly addressable on the internet it isn't enough, but you don't actually want that.
Further, 64 bits is many orders of magnitude overkill already. So what does 128 bring to the party, besides making addresses harder to type?
Those estimates typically confuse the ~10^80 upper bound on the number of atoms in the observable universe with 2^80. The 2^128 addresses in IPv6 are clearly more than the latter, but dwarfed by the former. That works out to roughly 10^40 or so atoms in the universe per IPv6 address; by mass, that's approximately one address for each combined biomass of Earth.
Earth mass divided by 2^64 is roughly 357 tons. There are roughly 2^33 humans on Earth, so 2^64 is "only" a billion or two addresses per person; that's far fewer addresses than the number of human cells out there.
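The arithmetic above checks out as a back-of-envelope computation (the Earth-mass and atom-count constants below are standard rough estimates, not figures from this thread):

```python
# Back-of-envelope check of the figures above.
EARTH_MASS_KG = 5.97e24        # rough standard estimate
ATOMS_IN_UNIVERSE = 1e80       # common order-of-magnitude upper bound
WORLD_POPULATION = 2**33       # ~8.6 billion, as used above

mass_per_address = EARTH_MASS_KG / 2**64        # ~3.2e5 kg, i.e. a few hundred tons
atoms_per_ipv6 = ATOMS_IN_UNIVERSE / 2**128     # ~3e41, "10^40 or so"
addresses_per_person = 2**64 / WORLD_POPULATION  # 2^31, "a billion or two"

print(mass_per_address, atoms_per_ipv6, addresses_per_person)
```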