If stdout is going to a terminal (i.e., it hasn't been redirected to a file or pipe), then the standard library line-buffers by default and will flush on every '\n' anyway. The same goes for C with fputs, etc.
In C you control this with setvbuf; in C++ with iostreams it's a huge mess of rdbuf spaghetti, probably involving std::ios_base::sync_with_stdio as well.
Edit: My C preference and iostream hatred is showing again, because I answered a question that was mostly about C++ iostreams with results based around C stdio (I think iostream is possibly the worst standard IO API in existence for a popular language, so when using C++, I recommend using something custom calling out to OS syscalls or C stdio). I could easily be wrong about C++ iostreams, so the most relevant half of my original post is highly suspect. I'll test it out later, but my C stdio answer is as-is below.
I can't remember using an alternative platform that behaves differently.
A better test than the one you ran is to look at the resulting syscalls from a loop of `fputs("a line of output.\n", stdout);` vs. `fputs("a bit of output. ", stdout);`. Buffering accumulates strings in memory before an eventual write syscall, so inspecting syscalls is an easy way to see the difference: one write per '\n' means line buffering.
Compare the syscalls of both using strace. I did so, and when stdout is to the terminal, I see lots of calls to `write(1, "a line of output.\n", 18)` compared to `write(1, "a bit of output. a bit of output"..., 1024)`. (To confirm, manually insert an `fflush(stdout);` in the "a bit of output. " version, and you'll see lots of `write(1, "a bit of output. ", 17)`.)
Then compare both when redirecting stdout to a file, and you'll see both cases make large 4KB writes. Only adding explicit fflushes will make either version go back to small writes.
Compiled the trivial test programs with `cc -O2 ./main.c -o ./main`
Tested g++/libstdc++ and clang++/libc++ and both operate identically to the C test above, testing `std::cout << "a line of output\n";` vs. `std::cout << "a bit of output. ";` (and with explicit flushes using `std::cout << "a line of output" << std::endl;` and `std::cout << "a bit of output. " << std::flush;`).
Looking for write syscalls is not a valid way to figure out where buffering is happening.
Write by default is famously buffered in Linux, and stdio/iostream functions map closely to it. So, if you call these C functions N times, you'll see write being called N times.
Flushing calls fsync or some ioctl depending on what the file descriptor is, iirc.
fflush and std::endl/std::flush are not synonymous with the fsync syscall. None of the standard libraries I tested above call fsync at all when using those functions. They flush the accumulated memory "to the file" in the sense of making a write syscall (or equivalent, e.g., writev); they do not attempt to enforce a flush to disk, which is left to the OS. The "line buffering" in question is the C/C++ library's accumulated memory, and the relevant kind of "unbuffering/flushing" is just write-like syscalls.
I meant to reply much sooner, with more info from empirical tests. But now I'm happy to leave it at just the write stuff and the note about the meaning of stdio/iostream "flushing".
I think you have an incorrect or esoteric understanding of what the "buffering" in question is, but it doesn't matter to me. My argument is about what syscalls happen, and I don't care if you disagree with the (I think, standard) description/terminology I'm using.