Hacker News | jimaek's comments

I talk a little about it in the article, but the main goal was to build something simple that works as proof of concept.

This brute force approach works much better than I expected as long as you have enough probes and a bit of luck.

But of course there are much better and smarter approaches to this, no doubt!


How did you verify how well these results work?

You mention the quality several times in the article, but it's not clear how this is verified. Do you have a set of known-location IP addresses around the world (apart from your home)? Or are we just assuming that latency is a good indicator?


I run about 270 servers in verified locations as part of the Globalping network https://globalping.io/users/jimaek so I had plenty of targets to test.

I tested against them, as well as other infrastructure I control that is not part of the network, and compared against the ipinfo results as well.


Atlas is great but it is focused more on academic research and professional use.

Globalping offers real-time result streaming and a simpler user experience with focus on integrations https://globalping.io/integrations

For example you can use the CLI as if you were running a traceroute locally, without even having to register.

And if you need more credits you can simply donate via GitHub Sponsors starting from $1

They are similar, with an overlapping audience, yet have different goals.


I talk about it a bit in the article. The easiest solution is to use the last available hop. In most cases it's close enough to properly detect the country even if the target blocks ICMP.
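The last-hop fallback can be sketched like this; the hop shape below is illustrative, not the actual Globalping API:

```javascript
// Sketch of the "last available hop" fallback: when the target itself
// drops ICMP, fall back to the RTT of the last hop that replied in the
// traceroute. The { rtt } shape here is a made-up stand-in.
function effectiveLatency(hops) {
  // hops: [{ rtt: number | null }, ...] in path order; null = no reply
  const replied = hops.filter((h) => h.rtt !== null);
  if (replied.length === 0) return null; // nothing answered at all
  return replied[replied.length - 1].rtt; // last responding hop wins
}

// Example: the final two hops time out, so the 3rd hop's RTT is used.
const trace = [{ rtt: 1.2 }, { rtt: 8.5 }, { rtt: 23.1 }, { rtt: null }, { rtt: null }];
console.log(effectiveLatency(trace)); // 23.1
```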

Email me if you would like to get some additional credits to test it out, dakulovgr gmail.


This is a little project exploring the feasibility of using a service such as Globalping for geo location needs.

I had fun making it but please note that the current implementation is just a demo and far from a proper production tool.

If you really want to use it, then for the best possible results you need at least 500 probes per phase.

It could be optimized fairly easily, but not without going over the anonymous user limit, which I tried to avoid.


I wonder if you could optimize for reducing the total probe count (at the expense of possibly longer total time, though it may be faster in some cases) by using some sort of "gradient descent".

Start by doing the multi-continent probe, say 3x each. Drop the longest time probes, add probes near the shortest time, and probe once. Repeat this pattern of probe, assess, drop and add closer to the target.

You accumulate all data in your orchestrator, so in theory you don't need to deliberately issue multiple probes each round (except for the first) to get statistical power. I would expect this to "chase" the real location continuously instead of 5 discrete phases.
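The drop-and-respawn round described above can be sketched as a toy simulation. Everything here (the 2D plane, latency as straight-line distance, the halving/spread constants) is a stand-in for real probe geography, not Globalping code:

```javascript
// Toy sketch of "latency gradient descent": treat latency as a scalar
// field over probe locations; each round, drop the slowest half of the
// probes and respawn new ones near the fastest survivors.
const target = { x: 37, y: 81 }; // hidden location we are trying to find
const latency = (p) => Math.hypot(p.x - target.x, p.y - target.y); // idealized RTT ~ distance

function refine(probes, rounds) {
  for (let i = 0; i < rounds; i++) {
    probes.sort((a, b) => latency(a) - latency(b)); // measure, fastest first
    const keep = probes.slice(0, Math.ceil(probes.length / 2)); // drop slowest half
    const spread = 50 / (i + 1); // shrink the search radius each round
    const fresh = keep.map((p) => ({
      x: p.x + (Math.random() - 0.5) * spread,
      y: p.y + (Math.random() - 0.5) * spread,
    }));
    probes = keep.concat(fresh); // respawn near the fast ones
  }
  probes.sort((a, b) => latency(a) - latency(b));
  return probes[0]; // best guess for the target's location
}

// Start from a coarse world-wide grid of "probes" and chase the target.
const start = [];
for (let x = 0; x <= 100; x += 25)
  for (let y = 0; y <= 100; y += 25) start.push({ x, y });
const guess = refine(start, 6);
console.log(guess);
```

Because the fastest probe is always kept, the guess can never get worse round over round; how quickly it converges depends on the spread schedule.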

I just watched the Veritasium video on potentials and vector fields - the latency is a scalar potential field of sorts, and you could use it to derive a latency gradient.


Yes, most likely there are multiple algorithms that could be used to get better results with fewer probes, but I'm not smart enough to do the math and implement them.


The simplest is drop the longest latency probe, and add a new one in the proximity of the fastest.


isn't 3 theoretically enough?


Time of flight from three points gets you two options for position with GPS, but GPS signals propagate directly in free space. At least mostly, reflections happen.

Internet signals generally travel by cable, and the selected route may or may not be the shortest distance.

It's quite possible for traffic between neighboring countries to transit through another continent, sometimes two. And asymmetric routing is also common.

Since this is using traceroute anyway, if you characterize the source nodes, you could probably use a lot fewer nodes and get similar results with something like:

a) probe from a few nodes on different continents (aiming to catch anycast nodes)

b) assuming the end of the trace is similar from all probes, choose probe nodes that are on similar networks, and some other nodes that are geolocated nearby those nodes.

c) declare the target is closest to the node with the lowest measured latency (after subtracting each node's characterized first-hop latency)

You'll usually get the lowest ping times if you can ping from nearby customer of the same ISP as the target. Narrowing to that faster is possible if you know about your nodes.
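Step (c) reduces to a simple minimum over offset-adjusted latencies; the field names below are made up for illustration:

```javascript
// Pick the node whose measured RTT, after subtracting its
// pre-characterized first-hop latency, is lowest.
function nearestNode(results) {
  // results: [{ node, rttMs, firstHopMs }], must be non-empty
  return results.reduce((best, r) =>
    (r.rttMs - r.firstHopMs) < (best.rttMs - best.firstHopMs) ? r : best
  );
}

const measured = [
  { node: 'fra-1', rttMs: 41, firstHopMs: 2 },  // adjusted: 39
  { node: 'ams-2', rttMs: 38, firstHopMs: 12 }, // adjusted: 26
  { node: 'lon-3', rttMs: 30, firstHopMs: 9 },  // adjusted: 21
];
console.log(nearestNode(measured).node); // "lon-3"
```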


It's fine, but definitely a downgrade compared to previous titles like Battlefield 1. At moments it looks pretty bad.

I'm curious why graphics are stagnating and even getting worse in many cases.


https://www.youtube.com/watch?v=gBzXLrJTX1M

Battlefield 6 vs Battlefield 1 - Direct Comparison! Attention to Detail & Graphics! PC 4K

The progress in 9 years does seem underwhelming.


Exploding production cost is pretty much the only reason (e.g. we hit diminishing returns in overall game asset quality vs production cost at least a decade ago), plus on the tech side a brain drain from rendering tech to AI tech (or whatever the current best-paid mega-hype is). Also, working in gamedev simply isn't "sexy" anymore, since it has been industrialized into essentially assembly-line jobs.


It’s far from an assembly line job, but it’s unstable, challenging and the pay hasn’t kept up with the rest of the tech sector.


Have you played it? I haven't so I'm just basing my opinion on some YouTube footage I've seen.

BF1 is genuinely gorgeous, I can't lie. I think it's the photogrammetry. Do you think the lighting is better in BF1? I'm gonna go out on a limb and say that BF6's lighting is more dynamic.


Yes I played it on a 4090. The game is good but graphics are underwhelming.

To my eyes everything looked better in BF1.

Maybe it's trickery but it doesn't matter to me. BF6, new COD, and other games all look pretty bad. At least compared to what I would expect from games in 2025.

I don't see any real differences from similar games released 10 years ago.


Note that while the above is mostly true for free plans, it can also behave like a normal CDN on more expensive plans. The more expensive the more reliable and consistent it is.


I don't understand why we even need such services. Why don't browsers, and maybe even the OS, just improve their built-in grammar checkers?

The Chrome enhanced grammar checker is still awful after decades.

Maybe the AI hype will finally fix this? I'm still surprised this wasn't the first thing they did.


An open-source, easy-to-use, globally distributed network monitoring and testing platform called Globalping https://globalping.io

Currently it's at the stage of "RIPE Atlas with better UI and UX", but I plan to expand the functionality to cover a lot more use cases like ISP, cloud, and edge rankings and performance.


I invite everyone affected to consider switching to a production focused CDN https://www.jsdelivr.com/

There is also a tool to simplify migration https://www.jsdelivr.com/unpkg


I invite everyone (even those not affected in this instance) to consider switching away from relying on random third parties for your hosting. Like, wtf, how did this ever become the norm? Even the slight performance advantage from a shared cache doesn't exist anymore these days.


A very specific problem with third-party hosts: if you use NoScript and only allow main-domain scripts by default, you have to either whitelist those CDNs or authorize them on every site using them. Not the best experience. Especially with some sites having dozens of third-party scripts, so instead of trying to find the right one(s) to authorize you just close the tab and forget about it.


I do both. If you are loading a JS lib then check for its existence on the client. If it's not found load the resource from your server. You could also achieve this on the server side.
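One way to sketch the client-side check; the global name and fallback path here are hypothetical, and the pattern assumes the library sets a known global when it loads:

```javascript
// If the CDN copy of a library failed to load (its global is missing),
// inject the same file from your own origin as a fallback.
function ensureLib(globalName, localSrc) {
  if (window[globalName] !== undefined) return; // CDN copy loaded fine
  const s = document.createElement('script');
  s.src = localSrc; // self-hosted fallback copy
  document.head.appendChild(s);
}

// Usage, placed right after the CDN <script> tag:
// ensureLib('_', '/vendor/lodash.min.js');
```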


Or you could simplify the whole thing and just host it yourself. It isn’t worth that level of complication.


In this situation server bandwidth was an issue. Using an available CDN with the fallback was what we decided to do. It's not that complicated: from the client, check if the object you expect to exist in the library exists; if not, load it from your server.


I’m curious, in what scenario was bandwidth for static resources an issue? It’s one of the most trivial things to serve cheaply.


When the server is behind a pipe of limited size. When most of your new visits happen all at once over a short period of time. When you can't rely on a mobile phone's cache like a desktop's, because 90% of your visitors are using one. A CDN is an easy win to offload resources, even if they are small.


> A CDN is an easy win to offload resources, even if they are small.

Yes and setting up your own CDN config (Cloudfront, whatever) is minutes of work and brings all of the benefits you’ve outlined. And costs pennies.


1. You realise caches, even for third-party assets, are partitioned by host domain in most modern browsers now? So you're not getting the benefit of users having already loaded the script from someone else's site.

2. For saving bandwidth on the server, are you using a proper cache setup with ETags and If-Modified-Since support? The browser shouldn't have to download a first-party asset again if it's in the cache. It's worth pointing out that non-ancient versions of all major web servers do this by default.

3. Haven't you now added latency by first making users download the loader script, have that execute, and then start loading the dependencies?
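The revalidation logic point 2 describes can be sketched server-side as a pure function; real servers (nginx, Apache, etc.) do this for static files out of the box, so this is only an illustration of the rule:

```javascript
// If the client's cached validator still matches, answer 304 with no
// body instead of resending the asset. Per HTTP semantics, If-None-Match
// takes precedence over If-Modified-Since when both are present.
function conditionalResponse(reqHeaders, asset) {
  // asset: { etag: string, lastModified: Date, body: string }
  const inm = reqHeaders['if-none-match'];
  const ims = reqHeaders['if-modified-since'];
  if (inm !== undefined) {
    if (inm === asset.etag) return { status: 304 }; // cached copy still valid
  } else if (ims !== undefined && new Date(ims) >= asset.lastModified) {
    return { status: 304 }; // not modified since the client's copy
  }
  return { status: 200, etag: asset.etag, body: asset.body }; // full response
}
```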


1) Yes, but the other benefit at the time was that a browser would only do X number of requests per domain, so using the CDN allowed more parallel requests. Also, it was a bandwidth issue from the server.

2) Yes. I should also mention at the time mobile browser cache was not consistent/reliable.

3) The "loader script" is a basic if(!lib.func) { do the magic }. Its one extra thing I suppose. For the overall page load there are lower hanging fruit.


> The "loader script" is a basic if(!lib.func) { do the magic }. Its one extra thing I suppose. For the overall page load there are lower hanging fruit.

That's still an extra 100-200 ms if the user is not on the same continent as your server.


What does that actually mean? How do you check if a JS library exists on the client?


Why? If you care about being even remotely reliable (hinted at with "production-focused"), why would you do this? Just use the same host as where you serve everything else from.

Browsers have partitioned caches per origin for over 10 years, so there's no performance gains from getting cache hits from other websites.


Are you using a DNS ad blocker? An adblock list mistakenly added jsDelivr and later removed it, which would cause this issue.

