Nope, but I've used PlantUML for sequence diagrams before. I'd like to focus more on software architecture diagrams where layouts are less structured, more variable. I'd also like to focus on sleek presentation-ready UI (although I realize I have a long way to go there).
Many highly-walkable cities handle this by allowing freight deliveries in the very early morning, say 3am-6am. This can get a little annoying if it’s a mixed use commercial/residential space, as the residents then have to deal with freight truck backup noises in the wee hours. Fair trade off, I’d say, to live in a highly desirable and human-scale neighborhood.
Looks pretty neat! Since OP is the author, can I ask: how many simultaneous concurrent (client) connections does this support, how many simultaneous moving objects and at what update frequency? Do these limits change if the shapes get complicated?
> how many simultaneous concurrent (client) connections does this support?
The client connections are very lightweight. The protocol library is based on https://github.com/tidwall/redcon. I don't have exact numbers, but this benchmark comparing Redcon to Redis may be helpful: https://simongui.github.io/2016/10/24/benchmarking-go-redis-.... I've seen it handle 16K+ client connections without issue, but typically a client library that supports pooling should be used (like Redigo).
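To make the wire format concrete, here's a minimal sketch of the RESP encoding that redcon (like Redis itself) speaks — every command is an array of length-prefixed bulk strings. This is my own illustration, not code from the Tile38 or redcon source:

```go
package main

import (
	"fmt"
	"strings"
)

// respCommand encodes a command in the RESP wire format:
// an array header (*N), then each argument as a bulk string
// prefixed with its byte length ($L).
func respCommand(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args))
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a)
	}
	return b.String()
}

func main() {
	// A Tile38 SET command as it would travel over the wire.
	fmt.Printf("%q\n", respCommand("SET", "fleet", "truck1", "POINT", "33.5123", "-112.2693"))
}
```

Because the framing is this simple, parsing stays cheap per connection, which is a big part of why thousands of simultaneous clients are manageable.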
> how many simultaneous moving objects and at what update frequency?
It depends on the complexity of the objects, the number of nearby objects, and the server hardware. Each collection of objects consists of one btree and one rtree, for key/id lookups and spatial queries respectively, so updating a single point is roughly O(log(n) + log(m)). An in-memory database like Tile38 can achieve 100K+ updates per second. Utilizing network pipelining can help too. FYI, I use a 4-core server with 8GB RAM for testing.
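As a toy illustration of that two-index layout (this is my own simplified model, not Tile38's actual code — a plain map stands in for the btree and a coarse 1-degree grid stands in for the rtree), note that every point update has to touch both structures:

```go
package main

import "fmt"

// Toy model of a Tile38-style collection: one index for key/id
// lookups plus one for spatial queries.

type point struct{ lat, lon float64 }

type collection struct {
	byID map[string]point           // stands in for the btree
	grid map[[2]int]map[string]bool // stands in for the rtree
}

func newCollection() *collection {
	return &collection{
		byID: map[string]point{},
		grid: map[[2]int]map[string]bool{},
	}
}

// cell buckets a point into a 1-degree grid square.
func cell(p point) [2]int { return [2]int{int(p.lat), int(p.lon)} }

// set is the update step from the comment above: remove the old
// spatial entry (if any), then insert into both indexes.
func (c *collection) set(id string, p point) {
	if old, ok := c.byID[id]; ok {
		delete(c.grid[cell(old)], id)
	}
	c.byID[id] = p
	k := cell(p)
	if c.grid[k] == nil {
		c.grid[k] = map[string]bool{}
	}
	c.grid[k][id] = true
}

// nearby returns the ids sharing p's grid cell — a stand-in for
// the rtree neighbor query.
func (c *collection) nearby(p point) []string {
	ids := []string{}
	for id := range c.grid[cell(p)] {
		ids = append(ids, id)
	}
	return ids
}

func main() {
	c := newCollection()
	c.set("truck1", point{33.5, -112.2})
	c.set("truck2", point{33.7, -112.9})
	fmt.Println(len(c.nearby(point{33.9, -112.1}))) // prints 2
}
```

In the real thing both indexes are balanced trees, which is where the O(log(n) + log(m)) per-update cost comes from.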
In the case of a roaming geofence (http://tile38.com/topics/roaming-geofences), each update triggers an additional O(log(n)) query of the rtree to retrieve nearby neighbors. If there are a lot of nearby objects, this can result in a lot of data sent over the network. But in general, a collection of millions of points spread over a continent, with 100K+ updates per second, should work while staying close to real time.
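For reference (per the roaming-geofences doc linked above), a roaming fence is a NEARBY command in FENCE ROAM mode; something like this watches the fleet collection and fires whenever any two of its objects come within 5000 meters of each other:

```
NEARBY fleet FENCE ROAM fleet * 5000
```

It's that per-update neighbor lookup which adds the extra rtree query to each SET.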
> Do these limits change if the shapes get complicated?
Yes. Complicated objects such as MultiPolygons may require additional calculations like ray casting.
No one knows exactly what is going on at the Planck scale. That is partly what the author is saying in the article.
“People trying to tie reality together don’t have any data, just a lot of beautiful math,” said Hogan. “The hope is that this gives them something to work with.”
Theories vary widely on what happens to the universe at that scale. This kind of experimental data might be the thing that transforms string theory from "Not even wrong" to actually wrong, or maybe even partly right.