In videography the term "log" is heavily overloaded, so you'd want to ask for more detail to figure out exactly what is meant.
A pixel value, be it integer or floating point, means little on its own. The context for that value is a color space. In a typical pipeline you have several color spaces in play: the camera has one for capture, there's one for color processing (the "working" space), and there's one for the display. As a pixel moves through the pipeline, it's converted via color space transformations.
In the "classic" color spaces, the pixel values have a linear relationship, and all of them carry the same amount of information. The "log" color spaces all have a non-linear (gamma) curve: they retain less information at very low and very high pixel values, but subsequently retain more information in the middle. It's a form of compression.
The human eye doesn't respond equally to all levels of brightness, so throwing away detail at the ends for more detail in the middle is usually a great choice. We retain information in the signal at the brightness level where the eye is able to perceive small details and texture, while throwing away information in the signal where it isn't.
Thanks to this non-linear compression, we can now map more dynamic range into the same number of bits. How large that dynamic range is, is determined by the underlying color space we are operating in.
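To make that concrete, here's a toy sketch in Python of how 8-bit code values distribute across stops of light under linear vs. log quantization (the pure-log2 curve and the 8-stop range are made up for illustration, not any camera's real curve):

    import math

    BITS = 8
    LEVELS = 2 ** BITS           # 256 code values
    STOPS = 8                    # dynamic range the toy curve tries to cover

    def stop_of(x):
        # which stop below maximum a linear-light value falls into (0 = brightest)
        return min(int(-math.log2(max(x, 1e-12))), STOPS - 1)

    linear_counts = [0] * STOPS
    log_counts = [0] * STOPS
    for code in range(1, LEVELS):
        v = code / (LEVELS - 1)
        # linear quantization: code values evenly spaced in light intensity
        linear_counts[stop_of(v)] += 1
        # log quantization: code values evenly spaced in stops
        log_counts[stop_of(2.0 ** (STOPS * (v - 1.0)))] += 1

    print(linear_counts)  # [128, 64, 32, 16, 8, 4, 2, 1]: the shadows starve
    print(log_counts)     # roughly 32 codes per stop across the board

Linear quantization spends half its codes on the single brightest stop; the log curve spreads them evenly, which is what "more dynamic range in the same bits" means in practice.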
If you go up in camera quality, you will typically see pixels use 10 bits or more for their values. Combined with a log curve, this yields greater information density, which allows capturing an even higher dynamic range. In turn, post-processing can correct e.g. exposure to a much larger extent.
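One way to see why log footage grades so well (again a sketch using the same made-up pure-log2 curve, not a real camera curve): a change in exposure is a multiplication in linear light, which becomes a constant offset in log space, so pushing exposure in post shifts every pixel by the same amount instead of stretching the shadows apart:

    import math

    STOPS = 12  # assumed encodable range of the toy curve

    def log_encode(x):
        # toy pure-log2 curve: maps linear light in [2**-STOPS, 1] to [0, 1]
        return 1.0 + math.log2(max(x, 2.0 ** -STOPS)) / STOPS

    for x in (0.01, 0.1, 0.5):
        # +1 stop of exposure = doubling the linear value...
        delta = log_encode(2 * x) - log_encode(x)
        print(delta)  # ...= a constant 1/STOPS offset in log space, for any x

And with 10-bit code values there are 1024 levels instead of 256 to spread across those stops, so even a couple of stops of correction leaves plenty of levels per stop.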
Finally, a LUT is a (piecewise-)linear approximation: a sampled table of the transformation, interpolated between entries. A "real" color space transformation will use the underlying mathematical curves for much greater precision.
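A minimal sketch of that trade-off (a 1D LUT with linear interpolation against the exact toy curve; real color LUTs are usually 3D, with trilinear or tetrahedral interpolation, but the principle is the same):

    import math

    STOPS = 12

    def log_decode(v):
        # the exact curve: toy log2 code value -> linear light
        return 2.0 ** (STOPS * (v - 1.0))

    def make_lut(f, size):
        return [f(i / (size - 1)) for i in range(size)]

    def apply_lut(lut, v):
        # piecewise-linear interpolation between the two nearest entries
        pos = v * (len(lut) - 1)
        i = min(int(pos), len(lut) - 2)
        t = pos - i
        return lut[i] * (1 - t) + lut[i + 1] * t

    for size in (17, 65, 1024):
        lut = make_lut(log_decode, size)
        err = max(abs(apply_lut(lut, n / 9999) - log_decode(n / 9999))
                  for n in range(10000))
        print(size, err)  # the denser the LUT, the smaller the error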
What's confusing is that many people will justify gamma encoding with the claim that visual perception is "logarithmic". I think this is misleading, because the perceptual justification is actually a power law (Stevens' power law), as contrasted with the opposing view that perception is logarithmic (the Weber-Fechner law, see https://www.appstate.edu/~steelekm/classes/psy3203/Psychophy...). In practice I believe the actual justification for it was that it happened to match the transfer function of CRTs, and these days it's mostly kept around for compatibility, and as an optimization to avoid wasting bits (whether or not it truly fits the human model of perception).
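For reference, the two competing models in their generic textbook forms (\psi is perceived magnitude, I is stimulus intensity, I_0 the detection threshold, k and a fitted constants):

    Stevens' power law:   \psi = k \, I^a
    Weber-Fechner law:    \psi = k \, \ln(I / I_0)

A power law in intensity is exactly what a gamma curve is, which is why gamma encoding can be given a Stevens-flavored justification even though it arrived via CRT physics.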
As mentioned by https://computergraphics.stackexchange.com/questions/10315/t..., the real reason why log encoding is nice is that each stop of light gets roughly the same number of bits. (Log encoding probably also isn't too bad a fit in terms of perception. In an alternate world where we weren't burdened by CRT baggage, it could have replaced the now-standard 2.2-esque power gamma.)
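You can check how close the standard power gamma gets to that equal-bits-per-stop property with the same counting trick as above (again a sketch; 8 bits and an 8-stop range are arbitrary choices):

    import math

    BITS, STOPS = 8, 8
    LEVELS = 2 ** BITS

    def stop_of(x):
        return min(int(-math.log2(max(x, 1e-12))), STOPS - 1)

    gamma_counts = [0] * STOPS
    for code in range(1, LEVELS):
        # decode a gamma-2.2 code value back to linear light
        gamma_counts[stop_of((code / (LEVELS - 1)) ** 2.2)] += 1

    print(gamma_counts)  # ~[69, 51, 36, 27, 20, 14, 10, 28]: far better than
                         # linear, but still skewed toward the bright stops
                         # (the last bin also collects everything darker)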
Also, the only reason why log-encoded video "looks flat" is that traditional video workflows are not ICC color managed. If you properly applied the inverse transfer function (as any color-managed system would automatically do) to display it on, say, an sRGB screen, the video would appear close to what it did in real life.
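A rough sketch of what that color-managed step does (using the toy log2 curve from above in place of a real camera curve like S-Log3 or C-Log, whose published formulas differ; tone mapping and gamut mapping are also omitted):

    import math

    STOPS = 12

    def log_decode(v):
        # inverse transfer function: toy log2 code value -> linear scene light
        return 2.0 ** (STOPS * (v - 1.0))

    def srgb_encode(x):
        # standard sRGB transfer function (IEC 61966-2-1)
        x = min(max(x, 0.0), 1.0)
        return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

    def display_value(log_code):
        # linearize, then re-encode for the target display: the "flat" look
        # is gone because the log curve has been undone
        return srgb_encode(log_decode(log_code))

    print(display_value(0.5))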