Not that this isn't impressive, but I did the same thing with DOS configuration files, using only software that came with DOS. I played a game that required 12 megs of RAM and I had 8. I used DriveSpace to compress the drive and SmartDrive to create a disk cache. The game's DOS extender automatically swapped to disk when there wasn't enough RAM. With my setup the game swapped, DriveSpace compressed the data, SmartDrive cached it, and it never hit the disk during combat anymore, so the game became playable.
The interesting part is that this thing is for MacOS and apparently mostly works by compressing unreferenced memory segments in RAM.
The contemporary world of DOS extenders is a completely different ball game, to the extent that a typical DOS extender was a more complete operating system implementation than classic MacOS ever hoped to be.
By contrast, I remember people putting swap files (most prominently the Win386 one) onto straight RAM disks, which really just gives you a net negative effect across the board.
Unless the ramdisk (AKA tmpfs in modern Linux) is compressed.
Many low footprint Linux installations now include swapping to zram, which is compressed. The net effect is positive since you usually have an abundance of spare CPU cycles.
Well yeah, that was both josmala's (the original commenter) and my point. Putting the swap file in compressed memory (here DriveSpace+SmartDrive) can be good, putting it into a straight, uncompressed ramdisk is worse than useless.
If I remember correctly, this was usually done because Windows 3.x couldn't handle the full amount of RAM in the system directly for one reason or another. I might be misremembering, but I recall that some IO cards created a 'hole' by placing memory-mapped IO regions somewhere well above 1MB (15M or so?), and Windows couldn't handle non-contiguous physical RAM at that point in the address space, as an example.
I played Doom like that in Windows 3.11 (which supported disk swapping automatically, including for DOS games) on a PC with 4MB RAM while Doom needed 8MB. However, the disk swapping was too slow to make it playable.
I played Doom II with 4MB on a 386DX40. We had a DOS 6 boot menu to not load SMARTDRV. It ran well enough, except on the last level, where all the different enemy types caused all their graphics to be loaded at the same time. That was too much and caused severe disk thrashing (not swapping; Doom managed the disk on its own).
Doom used less memory, so it should have been good enough.
When I worked at MS I heard some chatter about people doing it with NT circa 2010 (I guess per this Wikipedia article it didn't ship until five years later). Then after I left I heard there were Linux patches from before that (Wikipedia cites 2008, with it landing in the mainline tree in 2013). And then later, that Apple had also done it.
It's interesting that a bunch of major operating systems decided to do this at roughly the same time.
There had been various ad hoc, hackish support for compressed RAM and for swapping to RAM-like things (e.g. VRAM) in Linux since at least 2004. But around 2010 such stuff started to be worthwhile for fairly normal use cases, not just a neat trick of questionable benefit for live CDs and little else.
> It's interesting that a bunch of major operating systems decided to do this roughly the same time.
I'm guessing it's due to the popularity of 64-bit address spaces. There are lots of pointers in RAM, and while they use a lot of bits, they don't have much entropy. So they probably compress extremely well and, with compression, give one less reason to stick to a 32-bit architecture. (I don't fully understand why, but I hear that people run 32-bit OSes on 64-bit machines to save memory. As long as no process uses more than 4G of RAM, maybe it's the right optimization. But memory compression certainly changes that calculus and gives the OS vendors one less use case to support a 32-bit OS for.)
One alternative would be to define an ABI for using the x86-64 processor with 32-bit pointers. This would let everyone leave the processor in the more modern 64-bit mode without increasing pressure on the memory subsystem.
That exists: the x32 ABI. It's been available in the mainline Linux kernel and in glibc for ages. It also breaks any code which assumes #ifdef __x86_64__ means 64-bit pointers, and the concept doesn't seem to attract a lot of excitement.
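A minimal sketch of that breakage, assuming GCC/Clang's usual predefined macros (compile with gcc -mx32 to hit the ILP32 branch): on x32 the compiler still defines __x86_64__, since the code runs in 64-bit mode, but __ILP32__ is defined too and pointers are only 4 bytes.

    /* Illustrative only: shows why "#ifdef __x86_64__ => 8-byte pointers" is unsafe. */
    #include <stdio.h>

    int main(void) {
    #if defined(__x86_64__) && defined(__ILP32__)
        /* x32 ABI: 64-bit registers and instructions, 32-bit pointers and longs */
        printf("x32: sizeof(void *) = %zu\n", sizeof(void *));   /* prints 4 */
    #elif defined(__x86_64__)
        /* conventional LP64 x86-64 */
        printf("LP64: sizeof(void *) = %zu\n", sizeof(void *));  /* prints 8 */
    #else
        printf("other: sizeof(void *) = %zu\n", sizeof(void *));
    #endif
        return 0;
    }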
Or it's the latency chasm between RAM and disk. A cursory googling to refresh my own memory suggests an SSD is at least 100x slower than peeking at RAM, and possibly much more, depending on whose numbers you trust.
It seems at least plausible to save time by compressing pages in memory rather than going to disk, certainly for low-end or consumer systems not blessed with gobs of RAM.
It may sound weird, but it's really a relatively straightforward equation. If the CPU can compress quickly relative to disk operations, the expected compression ratio is good (not hard to imagine for a lot of cases), and both the memory and CPU overhead are tolerable, then compressing RAM is favorable to paging to disk.
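A back-of-envelope sketch of that equation (symbols and numbers are illustrative, not measured): let t_c be the CPU time to compress and later decompress a page, r the compressed size as a fraction of the original, and t_io the time to write the page out and later fault it back from disk. Per byte of RAM reclaimed, keeping the page compressed beats paging it out roughly when

    t_c / (1 - r) < t_io

With a few microseconds to pack a 4 KB page, r around 0.5, and a disk round trip in the hundreds of microseconds or worse, the left side wins comfortably, provided spare CPU and the compressed pool's own RAM overhead stay tolerable.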
I wonder how much of an effect SSDs have on that balance.
Random late reply, but: compression can compete surprisingly often. Latency can get less attention than overall IOPS numbers and such, but SSD service times for individual 4K random reads are often something like 0.1ms. You can pack a compressible page faster than that for sure.
(Intel is selling its new NVDIMMs as providing much better latency figures than SSDs, especially if you read out a cache line at a time instead of a whole page. It'd be nice to see the things and their pricing.)
Instead of a general LZ compressor, Apple's stuff uses a word-at-a-time algorithm called WKdm, and there are others (there's one on GitHub called centaurean/density, for example). Hardware can help too: Samsung had a memory compressor in some chips, and Qualcomm's server ARM chips used compression to avoid bottlenecks in memory bandwidth (but not to increase capacity). Fun stuff; seems like somewhere more progress could be made.
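Roughly, the WK-style trick is to treat a page as 32-bit words and compare each word against a small dictionary of recently seen words; most words turn out to be zero, an exact repeat, or a near repeat whose upper bits match (think pointers into the same region), each of which can be encoded in a few bits. A simplified sketch of just the classification step (not Apple's implementation; the 16-entry dictionary and 22-bit partial match are the commonly described WKdm parameters):

    #include <stdint.h>
    #include <stdio.h>

    #define DICT_SIZE 16           /* small direct-mapped dictionary, as in WKdm */
    #define PAGE_WORDS (4096 / 4)  /* a 4 KB page viewed as 32-bit words */

    /* Count how many words of a page are zero, exact dictionary hits,
       partial hits (upper 22 bits match), or misses. A real compressor
       emits 2-bit tags plus indexes/low bits; this only measures compressibility. */
    static void classify_page(const uint32_t *page,
                              int *zeros, int *exact, int *partial, int *miss) {
        uint32_t dict[DICT_SIZE] = {0};
        *zeros = *exact = *partial = *miss = 0;
        for (int i = 0; i < PAGE_WORDS; i++) {
            uint32_t w = page[i];
            if (w == 0) { (*zeros)++; continue; }
            int slot = (int)((w >> 10) % DICT_SIZE);  /* crude hash on the upper bits */
            if (dict[slot] == w) {
                (*exact)++;                     /* whole word seen recently */
            } else if ((dict[slot] >> 10) == (w >> 10)) {
                (*partial)++;                   /* upper 22 bits match: only low 10 bits needed */
                dict[slot] = w;
            } else {
                (*miss)++;                      /* full 32-bit word has to be stored */
                dict[slot] = w;
            }
        }
    }

    int main(void) {
        /* Fake "pointer-heavy" page: lots of words sharing their upper bits. */
        static uint32_t page[PAGE_WORDS];
        for (int i = 0; i < PAGE_WORDS; i++)
            page[i] = (i % 3 == 0) ? 0 : 0x7f001000u + (uint32_t)(i * 16);
        int z, e, p, m;
        classify_page(page, &z, &e, &p, &m);
        printf("zero=%d exact=%d partial=%d miss=%d\n", z, e, p, m);
        return 0;
    }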
I picked up an old PowerMac for $10 a couple months ago which I've been puttering about with on the weekends.
I was surprised to find there are still a couple of Hotline servers running out there! Someone also took control of the hltracker.com domain so just launching the original app still gets you a server list.
I also tried installing RAM Doubler, but it refuses to initialize when you have 256 MB of RAM installed ("you don't need RAM Doubler", it said).
Back in the day I was a bigger fan of Speed Doubler though. It had neat stuff like a multi-threaded Finder copy.
I loved Hotline to the max! What a nifty little community [transgressive tendencies aside] and unique, useful associated server / client software. HL had a ton of utility, was full of creative people, and was where I learned most of my first computer skills. I think there are still a couple of trackers out there. I'd get back on HL in a minute.
HL was very interesting: it took the BBS concept and moved it into the internet age. It felt very much like the old BBS dial-ups. HL brought that back for a while and I loved it as well. Definitely a trip down memory lane.
I was the first to file against the makers of SoftRAM. I was 15 or 16, so it's in my dad's name, but it was my idea to contact a lawyer and show my proof (the relevant SYS/DLLs were just renamed Windows files, hah!). I learned pretty early in life some important things about class-action lawsuits and lawyers in general... They offered to settle for $2,000 and $35,000 for the lawyers. I said no, so the lawyers offered me $2,000 of their cut on top. So that's how it went: I bought my first car off my father, and the lawyers banked $33,000.
It sounds like there was only one person in the class action. Is it possible to have a class action with only one person? Sorry, I don't know the law very well.
> [RAM Doubler 2] reclaims unused memory in application partitions, then compresses memory it can’t reclaim ... [it then uses] disk swapping ... as its final strategy
Reminds me of SoftRAM, which was rated 3rd "Worst Tech Product of All Time" by PC World. It also claimed to compress memory but actually didn't do that much.
We had RAM Doubler 1 and also used it on a 660AV, originally with 8MB and later 24MB of physical RAM. I remember the downloadable updates (and there were many!) could actually update the original installation floppy disk.
There were some powerful utilities on the old Mac OS. Sometimes it was a deal with the devil, like TimesTwo, which did the same thing but for hard drives. It often corrupted the darned things though... Ah, what we used to do to get another 4 MB of RAM and 80 MB of storage.
Recently I've been playing around with MacOS 9 for the first time in at least a decade (bought a $10 PowerMac, woohoo!) and have had a good time trying to recreate my old setup.
It really brought back to me just how many shareware and freeware utilities there were that changed how some very fundamental parts of the system worked. I've replaced the behaviors of window rendering (Kaleidoscope, Power Windows), text rendering (SmoothText), scrolling (Smart Scroll), drag and drop (Gravite), menus (Menutasking enabler), and so on. Also how unstable everything becomes once you do that!
It really was the wild west and it's still fun to play with.
Amazingly fast window drawing too. I'd still like to have WindowShade and sounds for it. The endless series of extensions you somehow needed, until there were multiple rows across the screen at boot. (No modern OS matches holding Shift on reboot to disable extensions as a simple diagnostic.)
But I'm glad to have forgotten about handles and Carbon and having to buy your compiler -- remember Metrowerks CodeWarrior? Well, I was excited at the time.
This gets to the heart of my beef when people talk about "digital natives." Being able to use a mobile phone and a web browser does not make you a technology expert. There are plenty of Gen Xers or earlier who really knew the hardware and software they were using, even if they were adults when this Internet thing came along.
When I was a kid my only internet access was at the local library, and for a while they only had an ISDN connection shared between multiple computers/users.
Even after they upgraded to DSL, for a long time my only way to carry files home was through floppy disks.
I still remember the pure JOY I felt when I found a utility that could format floppies to store 1722 KB!
Nowadays Windows and macOS use RAM compression by default. The funny thing is that most Linux distros don't do that by default (which is bad for browsers). Search for ZRAM etc. if you want to know more.
Very nostalgic. I find it a bit much when the author says 8MB is 'only $50'. In the US maybe, could be double that in other countries. I was always drooling over the prices in US MacWorld. But also, $50 was worth a lot more back then. You youngsters with 64GB RAM and 8GB GPUs don't know how good you have it.
zram makes my ancient Chromebook work nicely with Ubuntu. I'm thinking about turning it on on an old Zotac ZBOX that only takes one stick of ancient laptop RAM, so I'm stuck at 2GB there.
Back then we did it because RAM was expensive, and there was rarely enough.
Today we use it because RAM is plentiful, and swapping to RAM is much faster than swapping to disk.
RAM doublers of the mid-90s were usually expensive hype. The most famous case is that of SoftRAM, which didn't even try to do anything and just reported fake numbers. It was called placeboware by some magazines. https://en.m.wikipedia.org/wiki/SoftRAM
Because of the way that classic MacOS (mis)managed memory, you (as in the user) had to specify how much memory was allocated to each program before you ran it and the memory had to be contiguous. It was easy to end up in a situation where you had plenty of free RAM but it was fragmented and you had to quit programs to free up enough RAM.
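To make that concrete, here's a toy sketch (made-up numbers, and nothing like the real Memory Manager internals; it's just the arithmetic of contiguous partitions): two 1.5 MB holes can't hold a single 2 MB application partition even though 3 MB is nominally free.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical 8 MB machine after launching and quitting a few apps.
           Layout in KB: [System 2048][free 1536][App A 3072][free 1536] */
        int gaps[] = {1536, 1536};
        int total_free = gaps[0] + gaps[1];
        int largest = gaps[0] > gaps[1] ? gaps[0] : gaps[1];
        int wanted = 2048;  /* the next app's fixed "Get Info" partition size */

        printf("free RAM: %d KB, largest contiguous gap: %d KB\n", total_free, largest);
        if (wanted > largest)
            printf("a %d KB partition won't fit, even though %d KB is free\n",
                   wanted, total_free);
        return 0;
    }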
RAM Doubler for the Mac helped alleviate that by “doubling” the amount of RAM the system saw and then managing the memory by a combination of allocating memory dynamically in the “System” partition and compressing memory.
Another benefit of RAM Doubler for the PPC was that it could enable virtual memory (i.e. the modern nomenclature, not just "disk swapping") without allocating as much space on the disk.
>> But, as I said, these updates were all free and available on the Internet, so downloading updates wasn’t a problem for most people.
Kind of surprising in 1995; I didn't think that many people had internet access. Then again, power users, who would be the customers of RAM Doubler, probably did.