Hmm... the write figures on my M1 Air (16GB, bought in December) are an order of magnitude higher than that, but I do usually have 10 VS Code windows open, lots of Node instances, databases, Slack, and LOTS of browser tabs. Should I be worried?
Percentage Used: 1%
Data Units Read: 74,899,871 [38.3 TB]
Data Units Written: 71,233,417 [36.4 TB]
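To pull the same stats on your own machine, smartctl from smartmontools prints these fields (the Homebrew install and the disk0 device name are assumptions, adjust for your setup):

    brew install smartmontools
    smartctl -a disk0    # add sudo if it complains about permissions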
If you can, run iosnoop in the background and note every program that writes a lot of 4kb chunks. Those are what's wearing your SSD down.
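Something along these lines will surface them (iosnoop's column layout varies between versions, so adjust the awk field numbers; on recent macOS the dtrace-based tools may also need SIP relaxed):

    # print only writes of 8kb or less; the COMM column tells you which app it was
    sudo iosnoop | awk '$3 == "W" && $5 <= 8192'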
And no, those writes will not be combined, because most of these shitty apps either open files with the O_DIRECT flag set or call fsync() after each write(), which obligates the OS to hit the SSD with a 4kb write every time.
SSDs don't operate that way, though: a 4kb write is guaranteed to become a much bigger write, since an SSD erase block is usually 512kb, 1mb, 2mb, or hell, even 4mb. So one 4kb write per second becomes 4mb per second, and that's 345.6GB/day.
Pretty scary how one shitty app can ruin your SSD so fast, huh? I saw the Google Drive app doing 50 small writes per second. That's ~2TB/day.
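Back-of-the-envelope on those numbers (the ~512kb-per-write amplification for the Google Drive case is an assumption; the real block size could be bigger or smaller):

    one 4kb write/s amplified to a 4mb block:  4mb/s x 86,400s    = 345.6GB/day
    50 small writes/s at ~512kb each:          ~25.6mb/s x 86,400s ~= 2.2TB/day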
That's really interesting. Doing some Googling, there seems to be almost no discussion of this issue. I would happily disable fsync() at the OS level for all apps at the expense of possible data corruption, since this is only a dev machine and everything important is backed up. I wouldn't know how to do that, though.
I've investigated this in some depth for my personal boxes with slow HDDs. In lieu of a more formal writeup, here is the best solution I've found. First:
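Something along these lines (assuming the block layer's write_cache attribute, which is what matches the description below; double-check the exact knob on your kernel):

    echo "write through" | sudo tee /sys/block/sda/queue/write_cache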
This lies to the OS and tells it that drive writes don't need to be fsync'd. Replace "sda" with whatever your drive in question is, naturally. Note that this is not persistent; you'll need to configure your init system to do it on boot (a sketch of one way follows). You can verify it's working by looking at /sys/block/sda/stat (see https://www.kernel.org/doc/html/latest/block/stat.html).
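One way to make it stick across reboots is a udev rule, roughly like this (the rule file name is made up, and you'll want to match your actual drive):

    # /etc/udev/rules.d/60-ssd-write-cache.rules
    ACTION=="add|change", KERNEL=="sda", SUBSYSTEM=="block", ATTR{queue/write_cache}="write through"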
Next, in /etc/fstab, configure your filesystem to be mounted with "barrier=0" (note that only some filesystems support this), which will often prevent data from getting written out to the disk at all; it just sits in cache instead.
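For example, on ext4 (the device and mountpoint here are placeholders; other filesystems spell the option differently or have dropped it entirely):

    /dev/sda1  /home  ext4  defaults,noatime,barrier=0  0  2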
You still need the first part because the filesystem layer won't cover all possible cases--for instance, LVM thin provisioning will issue a manual flush below the filesystem layer once per second, and there's no way to remove that.
One problem I haven't managed to solve is detection--in the unlikely event things don't shut down properly (e.g. a kernel panic), how do I find that out so I can restore from a backup (rather than having something subtly corrupted somewhere)? This is conceptually easy with some global bit in a special sector used as a dirty flag, but I don't know of any existing off-the-shelf solution implementing this.
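The closest I've come is a crude flag-file version of that idea (the paths and the init wiring are up to you, and note it can still miss a crash that happens before the flag itself ever reaches the disk):

    #!/bin/sh
    # dirty-flag.sh -- call with "up" from an early boot unit and "down" from a
    # clean-shutdown unit. If the flag is still there at the next boot, the
    # previous session did not end cleanly and the disk contents are suspect.
    FLAG=/var/lib/unclean-shutdown.flag
    case "$1" in
      up)
        if [ -e "$FLAG" ]; then
          echo "WARNING: previous shutdown was not clean, treat this disk as suspect" >&2
        fi
        touch "$FLAG"
        sync   # push the flag itself out early in the session
        ;;
      down)
        rm -f "$FLAG"
        sync   # a clean shutdown flushes everything anyway, but be explicit
        ;;
    esac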
The equation is (months the machine has been used) x 100 / (percentage used). Assuming you've used the machine for 2 months, that comes out to 2 x 100 / 1 = 200 months, or about 16 years left. I suspect you won't care by then.
25% of the drive's rated endurance gone after four years could very well be a problem.
Also, if writes continue at the same rate, wear will accelerate rather than stay linear, as the same volume of writes gets spread over a shrinking pool of healthy blocks.