I noticed that there are plenty of ways to set up single-system-image (SSI) clusters for high-performance computing, and I'm wondering whether the approach is practical for web farms. It seems to be rarely used in that context, though I can see several advantages off the top of my head:
- You can write your web app as if it runs on a single machine, so you don't have to worry about distributed session storage. For some continuation-based web frameworks, serializing continuations seems pretty messy. (See the first sketch after this list.)
- You can rely on your RDBMS's own buffer cache instead of memcached, since the OS can distribute it transparently, so your app can simply talk to the database directly. (The second sketch below shows the caching pattern this would replace.)
- Judging from the marketing material, most SSI systems promise to detect new nodes automatically and add them to the resource pool, which saves you some deployment work as you scale up.
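To illustrate the first point, here's a minimal sketch (the names are mine, not from any particular framework) of the in-process session store that a single-system-image would, in theory, let you keep using unchanged:

```python
# Hypothetical in-process session store: sessions live in ordinary process
# memory, so nothing needs to be serialized or pushed to a shared store.
# The premise (the SSI claim, not something I've verified) is that the
# cluster presents itself as one machine, so every request sees this dict.
import uuid

_sessions = {}  # session_id -> arbitrary Python objects, closures included


def create_session(user_data):
    """Create a session held entirely in local memory."""
    session_id = str(uuid.uuid4())
    _sessions[session_id] = user_data
    return session_id


def get_session(session_id):
    """Look up session state; no deserialization step involved."""
    return _sessions.get(session_id)

# On a conventional web farm, _sessions would have to become a shared store
# (memcached, a database, ...), meaning every value stored in it -- including
# continuations -- must survive serialization.
```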
I was wondering if anyone here has theoretical knowledge of, or hands-on experience with, running SSI clusters, and how they compare to the classic load-balancer/web-servers/memcached/database-server configuration for serving multiuser web apps.
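For contrast with the second point, here's roughly what the classic read-through memcached pattern looks like. This is only a sketch: it assumes a memcached instance on localhost, the third-party pymemcache client, and a DB-API-style connection (sqlite3 shown); the table and query are placeholders.

```python
# Classic read-through caching: check memcached first, fall back to the
# database on a miss, then populate the cache for subsequent requests.
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumes memcached is running locally


def get_user(db_conn, user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely
    # Cache miss: hit the database, then store the result with a TTL.
    row = db_conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    user = {"id": row[0], "name": row[1]}
    cache.set(key, json.dumps(user), expire=300)
    return user
```

If the SSI claim holds, the RDBMS's buffer pool already plays this role across the cluster, and the app would just issue the SELECT directly.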
Thanks!