You mention the quality several times in the article but it's not clear how this is verified. Do you have a set of known-location IP addresses around the world (apart from your home)? Or are we just assuming that latency is a good indicator?
I talk about it a bit in the article. The easiest solution is to use the last available hop. In most cases it's close enough to properly detect the country even if the target blocks ICMP.
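A minimal sketch of that "last available hop" fallback, with a simplified, hypothetical traceroute result standing in for real output (the hop shape and addresses are made up for illustration):

```javascript
// Sketch of the "last available hop" fallback: when the target itself
// drops ICMP, take the latency of the last hop that did respond.
// The hop list here is hypothetical/simplified traceroute output.
const hops = [
  { ttl: 1, ip: "10.0.0.1", rttMs: 0.8 },
  { ttl: 2, ip: "80.81.1.9", rttMs: 4.2 },
  { ttl: 3, ip: "185.1.2.3", rttMs: 11.5 },
  { ttl: 4, ip: null, rttMs: null }, // target blocks ICMP: timeout
];

// Walk backwards and keep the first hop that actually answered.
const lastReachable = [...hops].reverse().find(h => h.ip !== null);
console.log(lastReachable.ip, lastReachable.rttMs); // 185.1.2.3 11.5
```

The last responding hop is usually a router topologically close to the target, so geolocating it is often good enough at country granularity.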
Email me if you would like to get some additional credits to test it out, dakulovgr gmail.
I wonder if you could optimize for reducing the total probe count (at the expense of possibly longer total time, though it may be faster in some cases) by using some sort of "gradient descent".
Start by doing the multi-continent probe, say 3x each. Drop the longest-time probes, add probes near the shortest, and probe once. Repeat this pattern: probe, assess, drop, and add closer to the target.
You accumulate all data in your orchestrator, so in theory you don't need to deliberately issue multiple probes each round (except for the first) to get statistical power. I would expect this to "chase" the real location continuously instead of 5 discrete phases.
I just watched the Veritasium video on potentials and vector fields - the latency is a scalar potential field of sorts, and you could use it to derive a latency gradient.
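A toy sketch of that probe/assess/drop loop. Everything here is an assumption for illustration: `measureLatency` is a synthetic model (latency grows with coordinate distance from a simulated target), and in reality you'd replace it with real probe measurements and great-circle geometry:

```javascript
// Synthetic stand-in for a real probe: latency grows with distance.
// The target is known only to this simulator, not to the search loop.
const target = { lat: 48.9, lon: 2.3 };

function measureLatency(probe) {
  const d = Math.hypot(probe.lat - target.lat, probe.lon - target.lon);
  return 5 + d * 10; // ms: base RTT plus ~10 ms per "degree" of distance
}

// Initial multi-continent seed probes.
let probes = [
  { lat: 40.7, lon: -74.0 },  // New York
  { lat: 51.5, lon: -0.1 },   // London
  { lat: 35.7, lon: 139.7 },  // Tokyo
  { lat: -33.9, lon: 151.2 }, // Sydney
];

for (let round = 0; round < 8; round++) {
  const results = probes
    .map(p => ({ ...p, ms: measureLatency(p) }))
    .sort((a, b) => a.ms - b.ms);
  const best = results[0];
  // Shrink the search radius as the best latency drops.
  const step = Math.max(0.5, best.ms / 40);
  // Drop the slowest half; add new probes jittered around the fastest.
  probes = results.slice(0, Math.ceil(results.length / 2)).concat([
    { lat: best.lat + step, lon: best.lon },
    { lat: best.lat - step, lon: best.lon },
    { lat: best.lat, lon: best.lon + step },
    { lat: best.lat, lon: best.lon - step },
  ]);
}

const final = probes
  .map(p => ({ ...p, ms: measureLatency(p) }))
  .sort((a, b) => a.ms - b.ms)[0];
console.log(final);
```

With the synthetic model the loop "chases" the target continuously, as described above; on the real internet the latency field is much lumpier, so you'd want more samples per round and outlier rejection.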
Yes, most likely there are multiple algorithms that could be used to get better results with fewer probes, but I'm not smart enough to do the math and implement them.
Time of flight from three points gets you two options for position with GPS, but GPS signals propagate directly in free space. At least mostly; reflections happen.
Internet signals generally travel by cable, and the selected route may or may not be the shortest distance.
It's quite possible for traffic between neighboring countries to transit through another continent, sometimes two. And asymmetric routing is also common.
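Because routes detour, RTT only gives an upper bound on distance, never an exact one. A common rule of thumb (the constant below is an approximation, not from the article): light in fiber travels at roughly 2/3 of c, about 200 km per ms one way, so half the RTT bounds the target within roughly 100 km per ms of RTT:

```javascript
// Upper bound on target distance from one probe's RTT.
// ~200,000 km/s in fiber, halved because RTT is a round trip.
// This is only an upper bound: real routes detour and queue.
const KM_PER_MS_RTT = 100;

function maxDistanceKm(rttMs) {
  return rttMs * KM_PER_MS_RTT;
}

console.log(maxDistanceKm(8)); // → 800: an 8 ms RTT caps the target within ~800 km
```

The bound is useful precisely because routing detours only ever make latency worse: a low RTT is strong evidence of proximity, while a high RTT proves little.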
Since this is using traceroute anyway, if you characterize the source nodes, you could probably use a lot fewer nodes and get similar results with something like:
a) probe from a few nodes on different continents (aiming to catch anycast nodes)
b) assuming the end of the trace is similar from all probes, choose probe nodes that are on similar networks, and some other nodes that are geolocated near those nodes.
c) declare the target closest to the node with the lowest measured latency (after subtracting each node's characterized first-hop latency)
You'll usually get the lowest ping times if you can ping from a nearby customer of the same ISP as the target. Narrowing down to that faster is possible if you know your nodes well.
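Step (c) above might look like this, with hypothetical node names, latencies, and pre-characterized first-hop offsets (none of these values come from the article):

```javascript
// Step (c): pick the winning probe node after subtracting each node's
// characterized first-hop latency, so a node with a slow local uplink
// isn't unfairly penalized. All names and numbers are hypothetical.
const measurements = [
  { node: "fra-1", rttMs: 18.2, firstHopMs: 1.1 },
  { node: "ams-2", rttMs: 14.9, firstHopMs: 2.4 },
  { node: "lon-1", rttMs: 16.0, firstHopMs: 0.6 },
];

const winner = measurements
  .map(m => ({ ...m, adjustedMs: m.rttMs - m.firstHopMs }))
  .sort((a, b) => a.adjustedMs - b.adjustedMs)[0];

console.log(winner.node, winner.adjustedMs); // ams-2 12.5
```

Subtracting the characterized first-hop latency is what lets a residential node on a slow last mile still "win" when it is genuinely closest to the target.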
Exploding production cost is pretty much the only reason (e.g. we hit diminishing returns in overall game asset quality vs. production cost at least a decade ago), plus, on the tech side, a brain drain from rendering tech to AI tech (or whatever the current best-paid mega-hype is). Also, working in gamedev simply isn't "sexy" anymore, since it has been industrialized into essentially assembly-line jobs.
Have you played it? I haven't so I'm just basing my opinion on some YouTube footage I've seen.
BF1 is genuinely gorgeous, I can't lie. I think it's the photogrammetry. Do you think the lighting is better in BF1? I'm gonna go out on a limb and say that BF6's lighting is more dynamic.
Yes, I played it on a 4090. The game is good but the graphics are underwhelming.
To my eyes everything looked better in BF1.
Maybe it's trickery but it doesn't matter to me. BF6, new COD, and other games all look pretty bad. At least compared to what I would expect from games in 2025.
I don't see any real differences from similar games released 10 years ago.
Note that while the above is mostly true for free plans, it can also behave like a normal CDN on more expensive plans. The more expensive the plan, the more reliable and consistent it is.
An open source and easy to use globally distributed network monitoring and testing platform called Globalping https://globalping.io
Currently it's at the stage of "RIPE Atlas with better UI and UX", but I plan to expand the functionality to cover many more use cases, like ISP, cloud, and edge rankings and performance.
I invite everyone (even those not affected in this instance) to consider switching away from relying on random third parties for your hosting. Like, wtf, how did this ever become the norm? There was once a slight performance advantage from a shared cache, but even that doesn't exist anymore these days.
A very specific problem with third-party hosts: if you use NoScript and only allow main-domain scripts by default, you have to either whitelist those CDNs or authorize them on every site using them. Not the best experience, especially with some sites having dozens of third-party scripts; instead of trying to find the right one(s) to authorize, you just close the tab and forget about it.
I do both. If you are loading a JS lib then check for its existence on the client. If it's not found load the resource from your server. You could also achieve this on the server side.
In this situation server bandwidth was an issue. Using an available CDN with the fallback was what we decided to do. It's not that complicated: from the client, check if the object you expect to exist in the library exists; if not, load it from your server.
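That fallback check can be sketched like this (the library name `myLib` and the probed function are placeholders, not from the comment above):

```javascript
// Decide whether the CDN copy failed to load and a same-origin
// fallback is needed. Library/function names are hypothetical.
function needsFallback(globalObj, libName, probeFn) {
  const lib = globalObj[libName];
  return !lib || typeof lib[probeFn] !== "function";
}

// In the browser, placed after the CDN <script> tag:
// if (needsFallback(window, "myLib", "init")) {
//   const s = document.createElement("script");
//   s.src = "/assets/mylib.min.js"; // same-origin copy on our server
//   document.head.appendChild(s);
// }

console.log(needsFallback({}, "myLib", "init"));                       // true: CDN copy missing
console.log(needsFallback({ myLib: { init() {} } }, "myLib", "init")); // false: CDN loaded fine
```

Probing for a specific function (rather than just the global) also catches the case where the CDN returned something truncated or wrong.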
When the server is behind a pipe of limited size. When most of your new visits happen all at once over a short period of time. When you can't rely on a mobile phone's cache like a desktop's, because 90% of your visitors are on one. A CDN is an easy win to offload resources, even if they are small.
1. You realise caches, even for third-party assets, are partitioned by host domain in most modern browsers now? So you're not getting the benefit that users already loaded the script from someone else's site.
2. For saving bandwidth on the server, are you using a proper cache setup with ETags and If-Modified-Since support? The browser shouldn't have to download a first-party asset again if it's in the cache. It's worth pointing out that non-ancient versions of all major web servers do this by default.
3. Haven't you now added latency by first making users download the loader script, have that execute, and then start loading the dependencies?
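The conditional-request mechanics behind point 2 can be sketched as a toy request handler. The validator here is deliberately simplified (real servers derive ETags from mtime/size or a content hash); the point is just that a cache hit costs a tiny header exchange, not a re-download:

```javascript
// Toy validator: real servers hash the content or use mtime/size.
const etagFor = body => `"len-${body.length}"`;

// Minimal sketch of what web servers do by default: return 304 with
// no body when the client's cached validator still matches.
function respond(reqHeaders, body) {
  const etag = etagFor(body);
  if (reqHeaders["if-none-match"] === etag) {
    return { status: 304, headers: { ETag: etag }, body: "" };
  }
  return { status: 200, headers: { ETag: etag }, body };
}

const asset = "console.log('app');";
const first = respond({}, asset);                                        // cold cache: full 200
const second = respond({ "if-none-match": first.headers.ETag }, asset);  // warm cache: empty 304
console.log(first.status, second.status); // 200 304
```

So for first-party assets the bandwidth argument for a CDN mostly reduces to the cold-cache case; revalidations are nearly free either way.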
1) Yes. The other benefit at the time was that a browser would only do X number of requests per domain, so using the CDN allowed more parallel requests. Also, it was a bandwidth issue on the server side.
2) Yes. I should also mention at the time mobile browser cache was not consistent/reliable.
3) The "loader script" is a basic if(!lib.func) { do the magic }. It's one extra thing, I suppose. For the overall page load there is lower-hanging fruit.
> The "loader script" is a basic if(!lib.func) { do the magic }. It's one extra thing, I suppose. For the overall page load there is lower-hanging fruit.
That's still 100-200 ms if the user is not on the same continent as your server.
Why? If you care about being even remotely reliable (as hinted at with "production-focused"), why would you do this? Just use the same host you serve everything else from.
Browsers have partitioned caches per origin for over 10 years, so there's no performance gains from getting cache hits from other websites.
This brute force approach works much better than I expected as long as you have enough probes and a bit of luck.
But of course there are much better and smarter approaches to this, no doubt!