Microsoft use 25.0.0.0/8 for some of their cloud services: you can see it in mail headers from Hotmail and Exchange Online. (Or could - I have not looked recently to see if this is still the case.) Microsoft stated on the mailop list on 16 July 2016 that they have an agreement with the MOD that it is OK to use these addresses like this. I don’t know if other organisations squatting on this address range have similar agreements...
NAT breaks certain things, and broken NAT breaks them further. The specific thing I can speak of is VoIP or more accurately SIP. SIP was designed for the public internet and so there are NAT workarounds, such as: if my interface has an RFC 1918 IP, I’ll use STUN along with certain SIP headers and tags to indicate I’m behind a NAT and need special treatment by the remote SIP agent. Using public IP space as NAT foils this logic entirely.
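The NAT-detection heuristic described here can be sketched with Python's `ipaddress` module (a toy illustration; real SIP stacks combine this kind of check with STUN and header inspection):

```python
import ipaddress

def looks_private(addr: str) -> bool:
    """Heuristic a SIP stack might use: am I behind NAT?"""
    return ipaddress.ip_address(addr).is_private

# A host on real RFC 1918 space correctly detects it needs STUN:
print(looks_private("192.168.1.20"))  # True

# A host NATted behind squatted 25/8 space wrongly concludes it has
# a public address and skips the NAT workarounds entirely:
print(looks_private("25.10.20.30"))   # False
```

That second result is exactly the failure mode: the stack believes it is directly reachable and advertises an address the far end can never route to.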
Apparently Hamachi also uses the 25.x.x.x IP range for their VPN interfaces[1]. Not sure why anyone would use a randomly chosen range delegated to someone else when there are plenty available, especially the 100.64.0.0/10 range.
I am not surprised. Private networks conflicting with each other has burned me before. So they are probably using a range that technically breaks the Internet, but in practice, doesn't break their customers' internal networks. Horrifying on some level, but no doubt necessary to sell an enterprise product.
I had some troubles with AWS when I created a VPC private subnet that happened to conflict with an internal IP range used by Docker. (You can change that, but not if you use their managed k8s offering. At the time it made sense, but, upon further reflection, I am baffled as to why Docker needs IP ranges outside of what the k8s CNI gives it. But I can assure you -- stuff broke in exciting and fun ways!) It was at that point I realized we had to have some central management of "internal" IP addresses, just like the IANA but for internal addresses. We were an ISP, so we already used Netbox, and it ended up being quite straightforward. Plus, the documentation was great -- you would see a connection from 10.42.123.8, look it up in Netbox, and see "oh that's the management network in NYC-FOOBAR-42".
Anyway, be careful about private subnets. Someone else already has a private subnet with that IP range, and those two networks can never talk to each other. No doubt, some Hamachi customer ran into this problem at some point in the past ;)
It is possible to have those networks talk to each other using bi-directional 1:1 NAT, but it will almost certainly cause you far more trouble than it is worth. I have done it, and I don't recommend it.
I saw this years ago on Rogers mobile devices in Canada. Freaked me out when I noticed it. But then it's behind NAT, so whatever. Always wondered why they chose that address space though. Like there aren't enough addresses to use in 10/8 or 172.16/12
> Like there aren't enough addresses to use in 10/8 or 172.16/12
10/8 only has 16.7 million IPs. According to Google, Rogers has 10.8 million subscribers. Considering private IP blocks usually can't be 100% utilized (because of subnetting), it wouldn't surprise me if they've exhausted the actual private IP ranges.
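The rough arithmetic behind this (Python sketch):

```python
# Usable addresses in each RFC 1918 block, before any subnetting loss
blocks = {
    "10.0.0.0/8":    2**24,  # 16,777,216
    "172.16.0.0/12": 2**20,  #  1,048,576
    "192.168.0.0/16": 2**16, #     65,536
}
total = sum(blocks.values())
print(total)  # 17891328 -- not much headroom over 10.8M subscribers
```

And that total assumes perfect utilization, which subnetting never achieves in practice.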
There's no reason an ISP can't overcommit a /8 network. There's no particular reason they need to promise that you can reach the IP of another Rogers subscriber. One "instance" of 10/8 per region or household or whatever would work just fine.
It would maybe work, but it would also vastly complicate everything. Debugging, logging, setup and maintenance would all have to be rigged up to support that address reuse.
Yes, but you need a lot more gateways, jumphosts and general indirection to deal with the different network segments. And you need to customize a lot of stuff, because you can't just log "it came from 10.12.13.14"; you need to log the network segment and gateway it came from as well. Non-flat address spaces are hell.
That hardware isn't already there. They won't share a switch, but they will share routers. IP routing with overlapping networks doesn't really work, so you have to get creative with e.g. DNAT or proxies. Both of those consume additional resources: even if the routers can do it, you will have to buy bigger licenses, processors will carry more load, etc.
Yes, in a simple routing setup, the limiting factor is how many ways you can divide up your address space hierarchically, not some giant 2^N number.
10.0.0.0/8 could be split into three hierarchy levels, each with 2^8 entries:
• 10.X.0.0/16: sites (e.g. global offices)
• 10.X.Y.0/24: on-site VLANs / individual buildings, typically one broadcast domain (although switches use MACs to limit actual broadcasting)
• 10.X.Y.Z/32: individual hosts
It’s not an enormous amount of space, really, hence IPAM tools. You could divide it on non-8-bit boundaries, but that gets painful fast.
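The three-level split above can be sketched with Python's `ipaddress` module (illustrative only):

```python
import ipaddress

net = ipaddress.ip_network("10.0.0.0/8")

sites = list(net.subnets(new_prefix=16))       # 10.X.0.0/16 per site
print(len(sites))                              # 256

vlans = list(sites[0].subnets(new_prefix=24))  # 10.X.Y.0/24 per VLAN
print(len(vlans))                              # 256

print(vlans[3])                                # 10.0.3.0/24
```

256 sites times 256 VLANs times 254 hosts: generous for one office, tight for a national ISP.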
With 128-bit IPv6, each ISP has a /32, each client a /48, and each broadcast domain a /64, leaving a remaining 64 bits for clients to just randomly make up addresses as they wish.
That’s still 16 bits of address space to work with when creating networks (just as you have 16 bits for the X.Y in 10.X.Y.0/24), but all addresses are globally routable and each network can support essentially unlimited hosts without needing DHCP, instead of just 254.
> each broadcast domain a /64, leaving a remaining 64 bits for clients to just randomly make up addresses as they wish.
What's the use of this, especially when the prefix is dynamic? Two major cellular ISPs in India hand out dynamic prefixes. Are they doing it wrong? Are ISPs supposed to hand out static, MAC-bound prefixes for IPv6?
FWIW, you can easily use all but two of the addresses in a /8 if you set your subnet mask to 255.0.0.0. MIT used to run their public IPs (18.0.0.0/8) this way. The entire campus was one huge switched network. You could get any IP in their /8 and take it anywhere on campus.
Forgive me, I'm far from wearing a network hat and still struggle to wrap my head around things after 15 years. I don't know if this will make sense.
But seeing as the ISP uses NAT anyway, is there any way to further route the private ranges behind private ranges? I don't know if this is what 'Double NAT' is, I tried searching online to see if this would work or if it would cause all sorts of issues. I'm not too familiar with ISP Natting as my home ISP has always assigned a public IP address.
I'm no ISP network engineer, but at a guess I'd probably look at splitting up customers into some sort of logical grouping (say per state or something) where they all sit behind the same CGNAT infra anyway, and give each of those their own 10/8.
Yes, you can keep NATting the same address space as many times as you want. As long as you have proper network boundaries there's nothing preventing, say, your ISP from using 10/8 for the country, then each province having a router that NATs 10/8 up to your gateway, which then NATs 10/8 for your home network.
But the further you go into the NAT layers the worse performance you'll see, because each NAT adds some latency overhead and more places where things can go wrong.
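The stacked-NAT idea can be sketched as a toy conntrack-style model (Python; the `Nat` class and all addresses are made up for illustration):

```python
class Nat:
    """Toy NAT: rewrites source (ip, port) and remembers the mapping."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}          # (inside_ip, inside_port) -> outside port
        self.next_port = 10000

    def translate(self, src):
        if src not in self.table:
            self.table[src] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[src])

home = Nat("10.5.5.1")           # home router; its WAN side sits in the ISP's 10/8
isp = Nat("203.0.113.7")         # CGNAT box with a real public IP

pkt = ("192.168.1.20", 40000)    # laptop on the home LAN
pkt = home.translate(pkt)        # -> ("10.5.5.1", 10000)
pkt = isp.translate(pkt)         # -> ("203.0.113.7", 10000)
print(pkt)
```

Each layer works, but each layer is another table lookup on every packet and another state table that can fill up or lose entries.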
Definitely. I've seen tethering implementations that put the tethered device on its own NAT, behind that of whatever network the host device is connected to.
Would you use a single NAT for a whole country? I always assumed ISP network infrastructures were regional. Not least because if you put 10 million IPs behind a single public IP, you will quickly run out of your ~65,000 ports.
You can have more than 65k connections, since a connection is identified by the (src host, src port, dst host, dst port) tuple. You are restricted to ~65k NATted connections to a single server's web site, though.
Good point, but still, there aren’t that many Google, Instagram or Windows Update IPs. I can easily imagine more than 1% of those 10 million people connecting to Google simultaneously.
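The port arithmetic (Python sketch; assumes one public IP, ports above 1024, and the 10.8M subscriber figure from above):

```python
usable_ports = 65535 - 1024      # ports a NAT can hand out per public IP, roughly
print(usable_ports)              # 64511 concurrent flows to ONE (dst ip, dst port) pair

# Flows to different destinations don't compete for the same pool, but if
# every subscriber opened one flow to the same service at the same moment,
# the ISP would need this many public IPs to carry them:
subscribers = 10_800_000
print(-(-subscribers // usable_ports))   # 168 (ceiling division)
```

A few hundred public IPs per popular destination is workable, which is why CGNAT scales at all; the pain shows up with chatty clients that hold dozens of flows each.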
I can't recall: does TCP require that src:srcport->dest:destport pairs be unique, or is there another way to distinguish connections (sequence numbers, maybe?)? I guess there are other IP protocols like UDP, though...
This shouldn't be downvoted; it is the correct implementation.
The UK MOD needed a large, private network space, so they registered 25/8. They can make whatever integrations with other networks they might need, and know their private address space shouldn't collide.
> Always wondered why they chose that address space though.
So that their customers are free to use any RFC1918 space they want for their own networks, either on the customer network behind NAT or on VPNs the customer's connected to.
I used to work for a service provider that solved the MPLS administration problem by hijacking the 7.0.0.0/8 DoD space that wasn't publicly routed.
By using 7.x IPs assigned to loopback interfaces in customer MPLS space, we could export just the loopback interfaces into our management VRFs (without burning our supply of public IPs.) Of course, the one problem is that we'd never have been able to take a Federal contract...
I did a project with a bank where we had to connect our stuff to theirs via a VPN, but they required that the addresses inside the VPN were notionally public - so no 10.x. We borrowed 51.x, which at the time was used by the UK's Department of Work and Pensions but not actually routable. Occasionally new people on the team would ask why the VPN was called VPN-DWP-1.
I rebuilt a corporate network for a relatively well known CGI company around a decade ago. They had chosen to use a subnet owned by a big tech company as their internal network. Took me months to convince the CEO that a change was necessary. Ultimately they were unable to download something important that caused them to suddenly care.
Back when I worked in IT (15+ years ago), I went onsite to a bank that used some seemingly-random /20 subnet for their internal network.
Turns out that they had some piece of hardware that came with a "hard-coded" IP address (from Japan) and instead of figuring out how to change it, they just used that entire subnet as their internal range.
It took me several hours to figure this out as I was working on their Cisco equipment and trying to add sane firewall rules...
I had a client that was using a random, already-allocated class A, and had been since the 80s. When I said "you know... these addresses belong to someone else", their IT director shrugged and said "no one here needs to talk to Japan". No, I never did find out why they thought 133/8 was a cool network to hijack, but I did make a ton of consulting money readdressing them when they found out that using someone else's IP addresses had bigger implications than they thought.
I used to work in IT too (10+ years ago), as a sysadmin at a bank (the same one?), and I was really puzzled when I saw SSH connections to the Sun servers from Japan. It took me some time to realise that they were from our own systems.
This kind of thing seems innocent, but it really isn't.
Another comment put it as "25.x.x.x is not advertised globally, not announced with BGP, so they're using it as private IP space. This works because you will never connect to a 25.x.x.x IP. It's just NAT."
That sounds fine, but you run into trouble when the owner suddenly starts using the space or it’s reallocated. This has happened quite a lot in the last decade as IPv4 space has become ever more in demand and thus ever more scarce. An example is the use of 1.0.0.0/8, allocated to APNIC in 2010; there is a detailed analysis of the "unintentional" traffic this network was receiving when first used https://www.potaroo.net/studies/1slash8/1slash8.html - over 165 megabits per second in 2010.
Among the various lazy configs and people using the range because it was convenient, I recall there was some popular default Cisco-recommended config that used it, though I can’t find a link right now.
Similar problems also happened in the 2007-2010 timeframe, when a lot of people ran static "bogon filter" firewalls that dropped traffic from unallocated IP ranges (not just those marked as 'never to be used', but those not allocated yet). As more and more ranges were allocated, the people receiving them had all sorts of connectivity problems to random networks because of these old, out-of-date static filters. In my experience as a hosting provider, the most common offender was, hilariously, banks.

In practice these filters provided relatively little security and, some years after they were put in place, just broke things instead. If you had a dedicated team managing your network and constantly watching these kinds of things, hyper-aware the filter was in place and vigilant about updating it, then maybe it’s a tactic you could use. But as static network config that is left and forgotten about, it was a terrible idea, and I spent a lot of time chasing down working contacts at various networks to get them to fix their firewalls. Meanwhile, as far as our customers (trying to use the IP space) were concerned, it was our problem, since everything worked fine should they use another provider. And this was just a network in the 110.0.0.0/8 range - no fancy 1.0.0.0.
Back to this specific case: if, for example, we wanted to extend IPv4 a little more and the UK MOD decided to sell or allow this range to be reallocated (since, as rightfully pointed out, it's not really being used right now), there would be a lot of problems using it because of configurations like this. And you have a bit of a chicken-and-egg problem, in that you can’t really use the range until it mostly works, but people won’t fix their networks unless people are using it.
Hence why it sounds kind of innocent, but in practice these are terrible ideas, and IP ranges shouldn't be used for purposes they are not intended for.
This is partly why for CGNAT applications like this a new range was reserved in 2012 - 100.64.0.0/10 which is what should be used here. The reason to have a dedicated range for the “ISP side” rather than just using RFC1918 space is so it doesn't clash with whatever RFC1918 space the end user wants on the LAN side of their network. If both sides used RFC1918 and accidentally chose an overlapping range then the connection would not work.
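A quick way to see why the shared space was carved out separately from RFC 1918 (Python sketch):

```python
import ipaddress

cgnat = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared address space
rfc1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

# The shared space can never collide with whatever the customer
# picked for their own LAN:
print(any(cgnat.overlaps(n) for n in rfc1918))  # False
```

If the ISP side instead used, say, 10.0.0.0/8, any customer who also chose 10/8 at home would have a broken connection.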
There aren't enough private IPv4 addresses for large(ish) mobile networks to address all their customers' devices. Some networks work around this by using public v4 addresses which simply aren't in use. The UK MOD's block has never been publicly advertised (AFAIK), so they decided it was a "safe" block to use for this purpose. This isn't a new practice - I remember seeing Sprint do the same thing many years ago.
The "correct" way to handle this would be to reuse real private IPv4 addresses within your network by segmenting it somehow, or do what some networks have done (T-Mobile US is probably the biggest) - use IPv6 with NAT64. That lets them forego internal IPv4 entirely and only use it at the edge for NAT.
That's the best case scenario, considering that most customers probably don't need to connect to UK DOD. With the way IPv4 shortages are going, the more likely scenario is that some cloud provider/cdn buys that range, and sites start breaking randomly for customers.
I suspect major cloud providers may well end up buying the address space if it were to become available, so quite a few companies' services may not be reachable.
Most of these are routed and in very active use. I have suballocations of large chunks from 12/8 and 38/8. People will be unhappy if you picked most of them.
73/8 is entirely owned by Comcast and used for customers. 44/8 is no longer a single block; it was sold off and split, with a large chunk in active use by AWS.
25.x.x.x is not advertised globally, not announced with BGP, so they're using it as private IP space. This works because you will never connect to a 25.x.x.x IP. It's just NAT.
Does that actually work? If ISPs assign, say, 8.8.8.0/24 internally, then you're going to have trouble determining whether a DNS lookup bound for 8.8.8.8 is meant for 8.8.8.8 on the Internet or 8.8.8.8 on some internal host.
Because people typically use IPs designated as private behind a NAT, this isn't a problem in practice. But if you start using publicly-routable IP addresses internally, then you can create some fun problems. Actually, not even that fun. Just annoying.
You're missing the point; 8.8.8.8 was just an example. For any public IP assigned inside your network, clients trying to reach to the real address won't be able to get there, because they'll hit the internal one instead.
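The shadowing comes from longest-prefix-match routing: the most specific route wins. A toy sketch (Python; the route table here is made up):

```python
import ipaddress

# Simplified route table: most specific matching prefix wins.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "default via ISP",
    ipaddress.ip_network("8.8.8.0/24"): "internal LAN (squatted)",
}

dst = ipaddress.ip_address("8.8.8.8")
match = max((n for n in routes if dst in n), key=lambda n: n.prefixlen)
print(routes[match])  # internal LAN (squatted) -- the real 8.8.8.8 is unreachable
```

Every host inside the network resolves that prefix to the internal route first, so the genuine owner's services simply vanish for them.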
Best practice for an ISP that is implementing CGNAT is to use 100.64.0.0/10 as per RFC 6598. That way, your customers can use private IPv4 (10/8, 172.16/12, 192.168/16) and you can manage your ISP network in the 100.64.0.0/10 space.
Your CGNAT looks after translating your customer's ISP address (ie, the 100.64.0.0/10 side of their CPE).
Of course, the best solution is to move to IPv6 and allocate each customer:
* a /64 prefix out of the ISP's network space
* a /48 (or /56) delegation for their own network
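Rough delegation arithmetic for that plan (Python sketch; 2001:db8::/32 is the documentation prefix, standing in for a real ISP allocation):

```python
import ipaddress

isp = ipaddress.ip_network("2001:db8::/32")

print(2 ** (48 - 32))   # 65536 /48 delegations per ISP /32
print(2 ** (56 - 32))   # 16777216 /56 delegations per ISP /32

customer = next(isp.subnets(new_prefix=48))
print(customer)         # 2001:db8::/48 -- first customer delegation
print(2 ** (64 - 48))   # 65536 /64 subnets inside that /48
```

Hence the /48-vs-/56 choice: /48 is generous but caps a /32 at 65,536 customers, while /56 still gives each customer 256 subnets and fits 16 million customers in the same allocation.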