>Automatically moving objects to be near the computation that needs them is a long-standing dream. It's awesome to see that Cloudflare is giving it a try!
I'm not sure I see many real-world applications for this. It seems to sit in the unhappy middle ground between local device storage and central storage. Local storage gives the best performance because you eliminate network issues, but then you have to deal with sync/consistency problems. Central storage and processing eliminates sync/consistency issues but can have poor performance due to the network. Workers Durable Objects sit in the middle. Like local storage, you trade consistency complications for performance, but instead of eliminating the network you're only shaving some tens of milliseconds off the RTT. It's a level of performance improvement that essentially no one will notice.
To use their examples:
>Shopping cart: An online storefront could track a user's shopping cart in an object. The rest of the storefront could be served as a fully static web site. Cloudflare will automatically host the cart object close to the end user, minimizing latency.
>Game server: A multiplayer game could track the state of a match in an object, hosted on the edge close to the players.
>IoT coordination: Devices within a family's house could coordinate through an object, avoiding the need to talk to distant servers.
>Social feeds: Each user could have a Durable Object that aggregates their subscriptions.
>Comment/chat widgets: A web site that is otherwise static content can add a comment widget or even a live chat widget on individual articles. Each article would use a separate Durable Object to coordinate. This way the origin server can focus on static content only.
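For concreteness, all of these examples have the same shape: one small object instance owns the state for one cart/match/house/feed/article, so all writes to that state go to a single place. A stripped-down plain-JS model of the cart case (not the actual Workers API, which adds persistent storage and request routing):

```javascript
// Simplified model of the shopping-cart example: each cart is a single
// object instance that owns its own state, so there is no cross-replica
// consistency problem -- all writes for one cart land in one place.
class CartObject {
  constructor() {
    this.items = new Map(); // sku -> quantity; state lives with the object
  }
  add(sku, qty = 1) {
    this.items.set(sku, (this.items.get(sku) || 0) + qty);
    return this.items.get(sku);
  }
  total() {
    let n = 0;
    for (const qty of this.items.values()) n += qty;
    return n;
  }
}

const cart = new CartObject();
cart.add("widget");
cart.add("widget");
cart.add("gadget");
console.log(cart.total()); // 3
```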
The performance benefits for the cart, social feed, and chat are irrelevant. Nobody cares if it takes 50 ms longer for any of those things.
IoT coordination is more promising because you want things to happen instantly. Maybe it's worth it here, but people usually have a device on their local network to coordinate these things.
Game server would definitely be an improvement. But these things are more complex than some JS functions and it would be a large effort to make them work with Durable Objects.
> The performance benefits for the cart, social feed, and chat are irrelevant. Nobody cares if it takes 50 ms longer for any of those things.
I think this is missing a few points:
1. Yeah they do. If your shopping cart responds 50ms faster when someone clicks "add to cart", you will see a measurable benefit in revenue.
2. It's actually a lot more than 50ms. A chat app built on a traditional database -- in which a message arriving from one user is stored to the database, and other users have to poll for that message -- will have, at best, seconds of latency, and even that comes at great expense (from polling). The benefit from Durable Objects is not just being at the edge but also being a live coordination point at which messages can be rebroadcast without going through a storage layer.
3. Yes, some databases have built-in pub/sub that avoids this problem and may even be reasonably fast, but using Durable Objects is actually much easier and more flexible than using those databases.
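The difference in point 2 can be sketched in a few lines: the room object holds the live connections and rebroadcasts directly, so the storage layer is off the hot path entirely. (Plain-JS model; in a real Durable Object the subscribers would be WebSocket connections held by the object.)

```javascript
// A chat room as a single coordination object: a message from one user is
// pushed to the other connected users immediately, with no database
// write-then-poll cycle in between.
class ChatRoom {
  constructor() {
    this.subscribers = []; // delivery callbacks, one per connected user
  }
  join(deliver) {
    this.subscribers.push(deliver);
  }
  post(from, text) {
    // Rebroadcast directly -- no storage round trip on the hot path.
    for (const deliver of this.subscribers) deliver({ from, text });
  }
}

const room = new ChatRoom();
const inboxA = [];
const inboxB = [];
room.join(msg => inboxA.push(msg));
room.join(msg => inboxB.push(msg));
room.post("alice", "hi");
console.log(inboxB[0].text); // "hi"
```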
(1) If 50 ms is that important then the cart should be stored locally and synced in the background. That's my broader point. Performance sensitive things should use local storage. Things that are not should use the convenience of a central server.
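The local-first approach argued for here can be sketched as a write-through queue: the UI updates against local state immediately, and a background flush pushes pending operations to the server. (Illustrative sketch only; `sendToServer` is a stand-in for a real sync protocol with retries and conflict handling.)

```javascript
class LocalCart {
  constructor(sendToServer) {
    this.items = [];    // local state: updated with zero network latency
    this.pending = [];  // operations not yet acknowledged by the server
    this.sendToServer = sendToServer;
  }
  add(sku) {
    this.items.push(sku);                // instant, works offline
    this.pending.push({ op: "add", sku });
  }
  async flush() {
    // Background sync: drain the queue; a real version would retry on failure.
    while (this.pending.length) {
      await this.sendToServer(this.pending.shift());
    }
  }
}

// Simulated server for illustration.
const serverLog = [];
const localCart = new LocalCart(async op => serverLog.push(op));
localCart.add("widget");
localCart.add("gadget");
console.log(localCart.items.length); // 2, visible immediately
localCart.flush().then(() => console.log(serverLog.length)); // 2 once synced
```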
(2) Nobody builds chat apps that way. The apples-to-apples comparison would be something using WebSockets and Redis. The only savings I see there come from the server being physically closer.
>It seems to sit in the unhappy middle ground between local device storage and central storage.
No, it's strictly better than both. You get the performance of local storage and the ease of programming of central storage.
Pretty much everyone chooses central storage at the moment, so the advantage of mobile objects manifests as performance.
I don't know about you, but I would find it a much nicer experience if, when I clicked "Add Item" in a shopping cart on some website, it happened in <5 ms regardless of the quality of my network connection, while still being shared between different machines and never encountering consistency issues. The current "wait somewhere from half a second to several seconds for each click" is bad UX, even if users have gotten used to it.
Durable workers are still a network call over the internet. That is orders of magnitude slower than using local storage in the browser.
>when I clicked "Add Item" in a shopping cart on some website, it happened <5ms
You're not going to get that level of performance. All edge storage does is save you the time it takes a packet to go from the edge to the origin server. For example, I'm in Atlanta and the Hacker News server is in San Diego. This is my traceroute:
1 LEDE.lan (192.168.1.1) 12.652 ms 12.596 ms 12.561 ms
2 96.120.5.9 (96.120.5.9) 32.606 ms 32.604 ms 32.581 ms
3 68.85.68.85 (68.85.68.85) 34.831 ms 34.821 ms 34.792 ms
4 96.108.116.41 (96.108.116.41) 34.762 ms 34.730 ms 34.706 ms
5 ae-9.edge4.Atlanta2.Level3.net (4.68.38.113) 39.613 ms 64.603 ms 57.149 ms
6 ae-0-11.bar1.SanDiego1.Level3.net (4.69.146.65) 89.310 ms 66.298 ms 76.021 ms
7 M5-HOSTING.bar1.SanDiego1.Level3.net (4.16.110.170) 75.982 ms 69.094 ms 79.199 ms
The last hop in Atlanta before hitting the backbone has a round-trip time of ~35 ms, and the first hop in San Diego is ~75 ms. So, all else being equal, if HN were served from an edge location I'd save about 40 ms on a page load. Ultimately, 40 ms doesn't matter because it's not something the end user can perceive.
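For clarity, the arithmetic behind that estimate: the saving per round trip is roughly the RTT to the distant origin minus the RTT to the nearest edge hop, using the numbers from the traceroute above.

```javascript
// RTTs observed in the traceroute above (milliseconds).
const rttToOrigin = 75; // first San Diego hop
const rttToEdge = 35;   // last Atlanta hop before the backbone

// Best-case saving per round trip if the content were served at the edge.
console.log(rttToOrigin - rttToEdge); // 40
```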
> Durable workers are still a network call over the internet.
Getting the request to the object involves traversing the internet. Once there, actually talking to storage is extremely fast compared to classical monolithic databases. The key is that application code gets to run directly at the storage location.
Most applications need to do multiple round trips to storage to serve any particular request, which is where the costs add up.
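A back-of-the-envelope model of that point, with assumed latencies: a request that needs several storage operations pays the long network hop once when the code runs next to the storage, versus once per operation when it doesn't.

```javascript
// Assumed latencies, purely for illustration.
const INTERNET_RTT_MS = 40; // app logic <-> distant storage region
const LOCAL_RTT_MS = 1;     // app logic co-located with its storage

// App logic far from storage: every storage op crosses the network.
function remoteStorageCost(storageOps) {
  return storageOps * INTERNET_RTT_MS;
}

// App logic running at the storage location (the Durable Object model):
// one hop to reach the object, then local storage access.
function colocatedCost(storageOps) {
  return INTERNET_RTT_MS + storageOps * LOCAL_RTT_MS;
}

console.log(remoteStorageCost(5)); // 200
console.log(colocatedCost(5));     // 45
```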
> Ultimately 40 ms doesn't matter because it's not something that the end user can perceive.
In ~2005 when I started at Google, I learned that they had done a study that found that every millisecond of latency shaved off the time it took to serve search results would add $1M in annual revenue.
Users may not perceive 40ms in isolation, but they do perceive a web site "feeling" slower if every request takes 40ms longer.
Maybe, but not as they are. If I understand it correctly, there are no limits by geography. If the Durable Object is created in country A, then when a worker from country B accesses it, that data will be replicated to the worker in country B.
No, that's not correct. The worker in country B will end up sending a message to the Durable Object, which will still be located in country A. An object only exists in one place at a time.
We are working on automatic migration, where if we notice an object is more frequently accessed in country B than in country A, it gets moved to country B. But that's a performance optimization, and it will be straightforward to implement policies on top of it that restrict certain objects to migrate only within certain political regions.
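One way such a policy could layer on top of migration (names and data here are hypothetical, not a real API): constrain the candidate locations by jurisdiction first, then migrate toward the lowest observed access latency among what remains.

```javascript
// Hypothetical sketch: candidate colos with an observed access latency for
// some object. Field names and numbers are made up for illustration.
const locations = [
  { colo: "atl", region: "us", latencyMs: 12 },
  { colo: "fra", region: "eu", latencyMs: 95 },
  { colo: "ams", region: "eu", latencyMs: 99 },
];

function pickLocation(candidates, jurisdiction) {
  // Policy constraint first: restrict migration to the allowed region.
  const allowed = jurisdiction
    ? candidates.filter(l => l.region === jurisdiction)
    : candidates;
  // Then optimize: move toward the lowest observed access latency.
  return allowed.reduce((best, l) => (l.latencyMs < best.latencyMs ? l : best));
}

console.log(pickLocation(locations, null).colo); // "atl" -- pure optimization
console.log(pickLocation(locations, "eu").colo); // "fra" -- constrained to EU
```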