There is no meaningful difference, because it depends on how CPUs handle FP arithmetic. The only significant change since this paper was published is that IBM got its decimal FP formats adopted as an optional part of IEEE 754 (the 2008 revision), the standard everyone uses for FP arithmetic.
Technically, that's true, but the more interesting question is "How do languages handle non-integers?"
Quite a few languages (like Lisp) have a rational data type, so that 1/3 + 1/4 is computed exactly. Others have a decimal data type, whose main purpose is calculations involving money; with a decimal type, 0.1 + 0.2 = 0.3 exactly.
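Python's standard library happens to ship both, which makes for a neat demonstration:

    from decimal import Decimal
    from fractions import Fraction

    # Rational arithmetic: exact, no rounding anywhere
    print(Fraction(1, 3) + Fraction(1, 4))   # 7/12

    # Decimal arithmetic: 0.1 and 0.2 are exact in base 10
    print(Decimal("0.1") + Decimal("0.2"))   # 0.3

    # Binary doubles, for comparison
    print(0.1 + 0.2)                         # 0.30000000000000004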
And finally, I'm no expert on Mathematica, but IIRC it computes error bounds on your results and only displays the decimal digits that fall within those bounds. That's the proper way of doing things, and again, 0.1 + 0.2 = 0.3
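I can't speak for how Mathematica actually implements this, but the general idea can be sketched as a toy "significance arithmetic" in Python. Everything here (the Approx class, the error model) is my own invention for illustration, not Mathematica's mechanism:

    import math

    # Carry a worst-case error bound alongside each value, and display
    # only the digits that bound justifies.
    class Approx:
        def __init__(self, value, err):
            self.value, self.err = value, err

        def __add__(self, other):
            s = self.value + other.value
            # error bounds add, plus one rounding error on the sum itself
            return Approx(s, self.err + other.err + abs(s) * 2**-53)

        def __str__(self):
            digits = max(0, int(-math.log10(self.err)) - 1)  # trustworthy digits
            return f"{self.value:.{digits}f}"

    ulp = lambda x: abs(x) * 2**-52   # crude bound on representation error
    a = Approx(0.1, ulp(0.1))
    b = Approx(0.2, ulp(0.2))
    print(a + b)   # 0.300000000000000 -- every digit we may show says 0.3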
Most current languages don't 'handle' floating-point numbers themselves; they simply use whatever CPU instructions are available to work with them.
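You can watch the hardware representation leak straight through, e.g. in Python, whose float is just a C double underneath:

    import struct

    # Raw IEEE 754 bit patterns show why 0.1 + 0.2 != 0.3
    def bits(x):
        return struct.pack(">d", x).hex()

    print(bits(0.1 + 0.2))    # 3fd3333333333334
    print(bits(0.3))          # 3fd3333333333333
    print(0.1 + 0.2 == 0.3)   # False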
The only way around that is to use a library that supports higher-precision floating point through software emulation. Then again, that still doesn't change the theoretical background in any way.
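Python's decimal module is one such software emulation: you can crank the precision as high as you like, but the theory stays the same, since 1/3 still has no finite representation:

    from decimal import Decimal, getcontext

    getcontext().prec = 50   # 50 significant digits, all computed in software
    third = Decimal(1) / Decimal(3)
    print(third)  # 0.33333333333333333333333333333333333333333333333333
    print(third * 3 == Decimal(1))  # False: the error shrank, it didn't vanish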