Random late reply, but: memory compression can compete with swapping to SSD surprisingly often. Latency gets less attention than overall IOPS numbers and such, but SSD service times for individual 4K random reads are often something like 0.1ms, and you can pack a compressible page in well under that.
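To make that concrete, here's a rough back-of-envelope harness (plain C, POSIX clock_gettime) that sweeps a 4 KiB page word by word as a stand-in for a per-word compress step and prints the per-page time next to the ~100 us nominal SSD figure quoted above. The 100 us number is the assumption from the comment, not something this measures, and the XOR loop is only a placeholder for real compression work:

```c
/* Back-of-envelope: time a trivial word-at-a-time pass over one 4 KiB page
   and compare it to a nominal 100 us SSD random-read service time (the
   figure assumed above, not measured here). */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define PAGE_WORDS (4096 / sizeof(uint32_t))

int main(void) {
    static uint32_t page[PAGE_WORDS];        /* pretend this is a swap candidate */
    for (size_t i = 0; i < PAGE_WORDS; i++)  /* mostly-compressible fill */
        page[i] = (i % 8 == 0) ? (uint32_t)i : 0;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    volatile uint32_t sink = 0;
    for (int rep = 0; rep < 1000; rep++)     /* repeat so clock resolution doesn't dominate */
        for (size_t i = 0; i < PAGE_WORDS; i++)
            sink ^= page[i];                 /* stand-in for the per-word compress step */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double total_ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.2f us per 4 KiB pass vs ~100 us nominal SSD random read\n",
           total_ns / 1000.0 / 1000.0);      /* per rep, in microseconds */
    return 0;
}
```

Even with a real per-word encoder in the inner loop you're talking single-digit microseconds per page, a couple of orders of magnitude under the SSD round trip.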
(Intel's selling their new NVDIMMs as providing much better latency than SSDs, especially if you read out a cache line at a time instead of a whole page. It'd be nice to see the actual parts and their pricing.)
Instead of a general LZ compressor, Apple's stuff uses a word-at-a-time algorithm called WKdm, and there are others (there's one on GitHub called centaurean/density, for example). Hardware can help too: Samsung put a memory compressor in some chips, and Qualcomm's server ARM chips used compression to relieve memory bandwidth bottlenecks (though not to increase capacity). Fun stuff; it seems like an area where more progress could be made.
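For a feel of what "word-at-a-time" means, here's a toy sketch of the WK-family idea that WKdm is built on, not Apple's actual implementation: a 16-entry direct-mapped dictionary of recently seen 32-bit words, with each input word classified as zero, exact match, partial match (high 22 bits), or miss. The byte-level accounting and the hash are simplified for brevity; the real encoder packs 2-bit tags and 4-bit indices much more tightly:

```c
/* Toy sketch of the WK (Wilson-Kaplan) word-at-a-time scheme behind WKdm.
   Not Apple's code: tag/output sizes are rounded to whole bytes to keep it short. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define DICT_SIZE 16
#define HIGH_BITS(w) ((w) & 0xFFFFFC00u)   /* top 22 bits */
#define LOW_BITS(w)  ((w) & 0x000003FFu)   /* bottom 10 bits */

/* Cheap hash into the direct-mapped dictionary (simplified vs. the real hash). */
static unsigned dict_index(uint32_t w) { return (HIGH_BITS(w) >> 10) % DICT_SIZE; }

/* Returns how many output bytes one page would need under this toy encoding. */
static size_t wk_sketch_compress(const uint32_t *page, size_t nwords) {
    uint32_t dict[DICT_SIZE] = {0};
    size_t out_bytes = 0;
    for (size_t i = 0; i < nwords; i++) {
        uint32_t w = page[i];
        unsigned d = dict_index(w);
        if (w == 0) {
            out_bytes += 1;                 /* zero: tag only */
        } else if (dict[d] == w) {
            out_bytes += 1 + 1;             /* exact match: tag + dictionary index */
        } else if (HIGH_BITS(dict[d]) == HIGH_BITS(w)) {
            out_bytes += 1 + 1 + 2;         /* partial match: tag + index + low 10 bits */
            dict[d] = w;
        } else {
            out_bytes += 1 + 4;             /* miss: tag + full word, update dictionary */
            dict[d] = w;
        }
    }
    return out_bytes;
}

int main(void) {
    uint32_t page[1024];                    /* one 4 KiB page */
    for (size_t i = 0; i < 1024; i++)       /* pointer-ish, zero-heavy test data */
        page[i] = (i % 4 == 0) ? 0x10000000u + (uint32_t)(i << 4) : 0;
    printf("4096 -> ~%zu bytes under the toy encoding\n",
           wk_sketch_compress(page, 1024));
    return 0;
}
```

The whole inner loop is a handful of compares, shifts, and a table lookup per word, which is why this family can run near memory bandwidth instead of doing an LZ-style match search.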