With IANA recently allocating its last 2 /8s from the IPv4 free pool to APNIC, and about to announce automatic allocation of each of the last 5 /8s to the RIRs, the end of IPv4 is truly nigh. The RIRs’ pools will run out over the course of the next year, and that’ll be it – no more v4 addresses left.
However, why can’t we just reclaim unused and reserved addresses? Surely there are a few legacy /8s assigned to organisations that could be clawed back? Couldn’t we check whether assignments are being used and reclaim the ones that aren’t? What about the large, former Class E space? Couldn’t one or more of these measures buy a good bit of time for IPv4?
This post examines these questions and largely concludes “no”. The problem is fundamental to IPv4’s (fixed) size: the address space is simply too small to accommodate the growth of the current internet, and reclamation is unlikely to buy any useful time.
NAT could buy some amount of time, but even the NAT space seems like it may be too small for developed-world levels of technology to be deployed globally.
“Unused” address space
If you were to “ping” or otherwise probe all the assigned address space, you might find significant chunks of it are not reachable from the public, global internet. E.g. the US military has large assignments which are not advertised, as do other organisations. So why don’t we reclaim those assignments, let the organisations switch to NAT, and make them available?
Well, just because address space is not globally reachable does not mean it is not being used for inter-networking. The criterion for IPv4 assignment has always been a need for globally unique address space, not a need for global reachability. Many organisations need private inter-networking with other organisations (financial organisations notably), which is hard to do in a scalable way with NAT. So such use is justified, and we can’t really reclaim it.
Former Class-E Space
What about the 16 /8s that comprise the former Class E space? Even ignoring 255/8, which likely will never be useable, that’s still a fairly big chunk of address space – more than 5% of the total 32-bit address space. Why not re-use that? Surely it would make a big difference?
Unfortunately there are major challenges to using this address space. It has long been classed as “Reserved – for future use”, and remains so. A lot of networking code that deals with forwarding or routing packets checks that addresses are not reserved. This means anyone assigned addresses from this range would find themselves unable to communicate with much of the rest of the internet. Even if most of the internet upgraded their software to fix this, the poor user would still find some sites unreachable. The sites with the old software might never notice there was a problem, and might not even have an economic incentive to upgrade (“you want me to risk causing problems for my network with an upgrade, to fix problems only you and a few others have?”)!
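As a small illustration of how baked-in this classing is, even Python’s standard ipaddress module (just one example of stock software) still flags the old Class E range as reserved:

```python
# Python's stdlib classifies the former Class E range (240.0.0.0/4) as
# IETF-reserved; many forwarding/routing implementations make the same
# check and will drop or refuse packets with such addresses.
import ipaddress

print(ipaddress.ip_address("240.0.0.1").is_reserved)  # True - old Class E
print(ipaddress.ip_address("8.8.8.8").is_reserved)    # False - ordinary space
```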
If we are forced to assign from the former-Class-E space, it will be a sign that the IPv6 rollout is perhaps in serious trouble.
The core of the problem: The size of the IPv4 address space
The nub of the problem is that IPv4 is simply too small.
IPv4 has 32-bit addresses, giving 4.29G addresses, roughly divided into 256 pieces, called /8s, for top-level assignments. Of this space, 18 /8s are reserved in their entirety for special purposes and will never be useful for general assignment; 1 /8 is reserved for private networking; 16 /8s are tied up in the former Class E space and likely not useful, as above. There are other reservations across the address space, but in quantities small enough to ignore here. That still leaves 221 /8s = 3.71G addresses – 86% of the total address space – available for global assignment (and the private 10/8 takes some pressure off that global space). This is equivalent to a 31.788-bit address space.
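For concreteness, here’s that accounting as a quick Python sketch (the /8 counts are the ones quoted above):

```python
# Rough accounting of the IPv4 address space, per the /8 counts above.
from math import log2

SLASH8 = 2 ** 24                             # addresses in one /8
total = 256 * SLASH8                         # 2^32, ~4.29G addresses

assignable = (256 - 18 - 1 - 16) * SLASH8    # minus reserved, private, Class E
print(f"{assignable / 1e9:.2f}G addresses")           # ~3.71G
print(f"{assignable / total:.0%} of total space")     # ~86%
print(f"~{log2(assignable):.3f}-bit address space")   # ~31.788
```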
Now, assignment rates ran above 10 /8s per year through 2010, and approached an annualised 15 /8s towards the end of the year. This means any reclamation effort has to recover at least 15 /8s per year just to break even on 2010’s growth. That’s 5.9% of the total IPv4 address space, or 6.8% of the assignable address space. Is it feasible to reclaim that much address space? Even if there were low-hanging fruit to cover the first year of new demand, what about thereafter? Worse, demand for address space has been growing super-linearly, particularly in Asia and Latin America. So it seems highly unlikely that any reclamation project can buy anything more than a year’s worth of time (and reclamation itself takes time).
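The break-even arithmetic, as a sketch:

```python
# Reclamation break-even against 2010's assignment rate of ~15 /8s/year.
rate = 15                                          # /8s assigned per year
print(f"{rate / 256:.1%} of the total space")      # ~5.9%
print(f"{rate / 221:.1%} of the assignable space") # ~6.8%
```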
Seen another way, there are approaching 7G people in the world – 6.9G in 2010 – giving 1 address for every 1.86 people (in 2010). Even if we reclaimed old Class E, IPv4 still only provides 3.98G = 2^31.89 addresses, or 1 address for every 1.73 people.
Worse, we cannot use the address space with perfect efficiency. Because of the need for hierarchical assignment, some space will be wasted – effectively some bits of addresses are lost to overheads such as routing. Empirical data suggests an HD-ratio of 0.86 is the best achievable assignment density. This means that of the 3.98G assignable addresses with Class E reclaimed, only 3.98G^0.86 = 2^(31.89 × 0.86) = 2^27.43 ≈ 181M will actually be useable as end-host addresses, giving 1 IPv4 address for every 38 people (in 2010)!
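The HD-ratio calculation, sketched out (the 0.86 density and the 2010 population are the figures used above):

```python
# HD-ratio model: an address space of size S yields only S**HD usable
# end-host addresses once hierarchical-assignment overheads are paid.
HD = 0.86
space = 2 ** 31.89       # ~3.98G: assignable space with Class E reclaimed
usable = space ** HD     # = 2^(31.89 * 0.86) = 2^27.43

print(f"{usable / 1e6:.0f}M usable addresses")    # ~180M (the ~181M above)
print(f"{6.9e9 / usable:.0f} people per address") # ~38, at 2010 population
```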
Yet, people in the developed world today surely use multiple IP addresses per person. They have personal computers at work and home, eBook readers, mobile phones, etc. All of which depend on numerous servers which require further IP addresses. The people in the developing world surely aspire to similar standards of technology. If we assume the density of IP/person is heavily skewed towards the reported 20% of the world population who manage to earn more than $10/day, then that means that today each IP address is being used by around 7 people. If the skew is heavily biased towards just 10% of the world population, the figure would be around 4 people per address. It’d be interesting to get precise figures for this.
Can NAT save us?
Many organisations are likely to try to buy time with NAT. But how much time? NAT gives us only 16 extra bits. Assuming they were free bits, that would give us a 2^(27.43+16) = 2^43.43 ≈ 11,850G address space. On the face of it, this seems like it would do for quite a while. It’d allow 1 connection at a time between every pair of hosts, which is still sufficient to allow all processes to communicate with each other if a higher-level multiplexer protocol is agreed on (it’d be HTTP-based, given current trends).
Unfortunately though, this won’t work with TCP as it is. When TCP closes a connection it goes into a TIME_WAIT state, in which it will not allow new connections on the same (source, destination) address and port 4-tuple. TCP remains in this state for 1 to 2 minutes on most implementations. This means you need at least 60 ports if you want to be able to open connections to the same host at an average of 1/s (you probably don’t want to generally, but think of bursts); at one connection every 0.5s, you need 120 ports.
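A sketch of that port arithmetic (the 60s TIME_WAIT duration is an assumption – a common default, though implementations vary):

```python
# Ports tied up by TIME_WAIT: sustaining a given rate of connections to
# one destination holds (rate * TIME_WAIT) source ports out of use.
def ports_needed(conns_per_sec: float, time_wait_s: float = 60.0) -> float:
    return conns_per_sec * time_wait_s

print(ports_needed(1))   # 60.0  ports for 1 connection/s to the same host
print(ports_needed(2))   # 120.0 ports for one connection every 0.5s
```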
In practical terms, this means probably at least 8 bits of the port space need to be reserved for TCP on each host, leaving 8 bits to extend the address space with. This gives 2^(27.43+8) = 2^35.43 ≈ 46G addresses = 6.7 addresses/2010-person (NB: addresses/person instead of the people/address used above) = 0.15 people/address.
This though assumes the HD-ratio assignment-density model applies only over the scale of the IP addresses, and that the borrowed port space will be allocated with near-perfect efficiency. If instead the extra port space were also subject to the HD-ratio model, then the numbers become (2^(31.89+8))^0.86 = 2^((31.89+8) × 0.86) = 2^34.31 ≈ 21.3G addresses, i.e. 3 addresses/2010-person = 0.32 people/address.
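Both scenarios worked through as a sketch (same figures as above; “optimistic” applies the HD-ratio to the IP space only, “pessimistic” to the combined IP-plus-port space):

```python
# NAT address-space extension: 8 borrowed port bits, with the HD-ratio
# applied two different ways (figures from the post).
HD, POP_2010 = 0.86, 6.9e9
IP_BITS = 31.89                           # assignable space with Class E back

optimistic  = 2 ** (IP_BITS * HD + 8)     # HD-ratio on IPs only:      ~46G
pessimistic = 2 ** ((IP_BITS + 8) * HD)   # HD-ratio on combined space: ~21G

for name, n in (("optimistic", optimistic), ("pessimistic", pessimistic)):
    print(f"{name}: {n / 1e9:.1f}G addresses, "
          f"{n / POP_2010:.1f} addr/person, {POP_2010 / n:.2f} people/addr")
```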
Is that enough? Certainly for a time. It doesn’t seem a comfortable margin though, particularly as it may require some further changes to end-hosts to be truly workable.
Errata
This blog post almost certainly has mistakes, possibly significant. Those noted so far, and any other significant edits:
- Missing “not”: those assigned old Class E addresses would not be able to communicate with much of the rest of the internet.
- Added people/address numbers in last section, for ease of comparison with previous figures.
Stephen Strowes said
For interest, RIR run-out projections, assuming current allocation growth patterns, and no inter-RIR trading: http://www.potaroo.net/tools/ipv4/rir.jpg
David said
Interesting that the poorer continents’ projected run-out dates are 2+ years after the richer continents’.
“We just got this infrastructure, it’ll be good for years”? Or will it take that long before their user bases’ needs pick up?
Thomas Bridge said
Paul,
While it doesn’t affect the main thrust of your post, I’m not sure the use of multiple devices is driving the take-up, given that NAT does actually work there.
I’m not familiar with the details of how 3G works (i.e. whether my iPad currently has exclusive use of a public IP), but apart from that everything works behind NAT – this includes PCs, printers, Wiis and even the TV.
The real issue IMO is the huge rates of takeup in parts of the world that had relatively low internet usage.
Paul Jakma said
Yes, depletion is being driven by growth in the developing world.
The reason I was looking at the number of devices was to try to work out the current usage density in the richer segment of the world, to see whether NAT would be able to support such densities if they were deployed globally. Sorry if that was unclear!
Thomas Bridge said
Paul,
My gut feel says that the number of devices per person/household is probably only a small part of the uptake.
The way I look at it is this: a household might start off buying a single computer, which would require a single, public IP address. Often, that computer these days would be stuck behind some kind of telco broadband router. When the household decides to turn the internet on on the Wii, connect a network printer, the work laptop, the wife’s laptop and the brand-new iPad, they don’t increase their demand for public IPs. Pretty much all the internet things normal people do now work reasonably well behind NAT.
Outside the home, most wireless hotspots use NAT. With regards to 3G, I could be wrong, but I think my internet traffic on a 3G network goes through my provider’s gateway, which uses NAT (and a quick, non-comprehensive search of the routing DBs implies they have far fewer IPs than they do 3G customers).
This would indicate to me that increasing numbers of IP enabled devices per person is not the major driver in IP address consumption.
Having said all that, I’d prefer a world without NAT.
Paul Jakma said
To be clear, the driving force for uptake is expansion in Latin America and, most particularly, Asia. This is evident from the trends in RIR allocations.
The reason why current density (i.e. how many addresses people use today) in the developed world is interesting is that it lets us extrapolate whether or not NAT can work once internet usage globally reaches levels similar to usage amongst rich people today (the 20%, or even just 10%, of the world who are rich enough to own computers, phones, etc. and can afford broadband and 3G-type access). The question I’m interested in answering is “Can NAT allow IPv4 to support the global population?”. Current usage density already incorporates fairly significant deployments of NAT, just as you say.
The answer seems to be “yes”, NAT does seem to offer a significant amount of breathing room.
I didn’t quite do an apples-to-apples comparison though. The 181M IPv4 addresses (useable according to the HD-ratio model) are not being used by all the 2010-people, but only by about 10% to 20% of the current population. So that’s 3.8 to 7.6 people/address (rather than 38 people/address). With maximised NAT deployment, the address space distributed globally (over today’s population) comes to 0.32 people/address. If we get to a 10G population, it’s 0.5 people/address.
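A quick sketch of those figures, reusing the 181M and 21.3G numbers from the post:

```python
# Density comparison, using the post's figures.
usable_now = 181e6          # usable IPv4 addresses per the HD-ratio model
nat_space = 21.3e9          # pessimistic NAT-extended space

print(0.69e9 / usable_now)  # ~3.8 people/address (10% of 2010 population)
print(1.38e9 / usable_now)  # ~7.6 people/address (20%)
print(6.9e9 / nat_space)    # ~0.32 people/address, globally, today
print(10e9 / nat_space)     # ~0.47 people/address at a 10G population
```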
NB: many of my usages of “address” are in the abstract sense as “identifier” – not “IPv4 address”, obviously. Where a NAT “address” is a composite of a 32bit identifier and some borrowed portion of the port identifier (mapped dynamically at some edge NAT box). Obviously 🙂
Mark Parker said
“Seen another way, there are approaching 7G people in the world – 6.9G in 2010. Giving 1 address for every 1.86 people (in 2010).”
Why would you need more than that?
Firstly, most of that population is in third-world countries; most can’t even get clean water, let alone a computer, and even less so an internet connection. With DNS and virtual servers you can fit literally hundreds of servers on a single IP address. Even in a world where everyone with a computer has two domain names each, we are a long way from exhausting IPv4.