Instead of trying to smooth the entire image, it reduces blocking artifacts by finding plausible coefficients that reduce discontinuities at block boundaries (jpegs are encoded as a series of 8x8 blocks). "jpeg2png gives best results for pictures that should never be saved as JPEG", but knusperli works well on normal photographic content, since it only tries to remove blocking artifacts.
> A JPEG encoder quantizes DCT coefficients by rounding coefficients to the nearest multiple of the elements of the quantization matrix. For every coefficient, there is an interval of values that would round to the same multiple. A traditional decoder uses the center of this interval to reconstruct the image. Knusperli instead chooses the value in the interval that reduces discontinuities at block boundaries. The coefficients that Knusperli uses, would have rounded to the same values that are stored in the JPEG image.
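The interval idea in the quoted paragraph can be sketched in a few lines. This is not Knusperli's actual code, just an illustration of the rounding step and the interval of values that all quantize to the same stored integer; a knusperli-style decoder is free to pick any reconstruction value inside that interval, while a traditional decoder always takes the center.

```python
def quantize(coeff, step):
    # JPEG encoder: round the DCT coefficient to the nearest
    # multiple of the quantization step, store the integer multiple
    return round(coeff / step)

def interval(q, step):
    # Every value in [lo, hi) would have rounded to the same q,
    # so any of them is a "plausible" coefficient for the decoder
    return ((q - 0.5) * step, (q + 0.5) * step)

def traditional_decode(q, step):
    # Center of the interval, i.e. q * step
    return q * step

step = 16
q = quantize(100.0, step)       # 100 / 16 = 6.25 -> stored as 6
lo, hi = interval(q, step)      # (88.0, 104.0)
center = traditional_decode(q, step)  # 96.0
```

Knusperli's contribution is the rule for choosing a value other than `center` from `(lo, hi)`: the one that minimizes discontinuities with neighbouring blocks.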
> Instead of trying to smooth the entire image, it reduces blocking artifacts by finding plausible coefficients that reduce discontinuities at block boundaries
This is essentially the same thing, but with slightly different regularizers. In both cases, the decoder finds an image whose JPEG compression is identical to the given one, by selecting the "most regular" image among the large family of such images. jpeg2png minimizes the total variation over the whole image, while knusperli minimizes it only along block boundaries. The end result is quite similar.
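The two regularizers can be written down concretely. This is a rough sketch (with NumPy, and assuming anisotropic TV, i.e. summed absolute differences of adjacent pixels), not either project's actual objective: jpeg2png-style TV sums over every pixel pair, while a knusperli-style boundary term keeps only the differences that straddle an 8x8 block edge.

```python
import numpy as np

def total_variation(img):
    # Anisotropic TV: sum of absolute differences between
    # horizontally and vertically adjacent pixels, whole image
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

def boundary_variation(img, block=8):
    # Same differences, but keep only those that cross an
    # 8x8 block boundary (columns/rows 7|8, 15|16, ...)
    dx = np.abs(np.diff(img, axis=1))[:, block - 1::block].sum()
    dy = np.abs(np.diff(img, axis=0))[block - 1::block, :].sum()
    return dx + dy

# A flat image with a single jump exactly at a block edge:
# both measures see it, but TV would also penalize any texture
# inside the blocks, which the boundary term ignores.
img = np.zeros((16, 16))
img[:, 8:] = 10.0
```

Minimizing `total_variation` subject to the quantization constraints smooths texture everywhere; minimizing `boundary_variation` only touches the block seams, which matches the difference in output the thread describes.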
The effects are quite different, actually: reducing total variation also reduces texture, which is important to perceived quality. Here's a direct comparison; note how knusperli retains slightly more detail instead of oversmoothing: https://mod.ifies.com/f/200710_lena_decode_comparison.png
Good point. Unfortunately, the lena image is quite bad for comparison: there's almost no texture (except for the blue kerchief). I wonder what would happen with a heavily textured image, say, white noise. Would knusperli smooth it only along the block boundaries and leave it alone elsewhere?
I wish people would stop using that image as an example. Not because of its content (what a stupid beat-up that was) but because the source file is of such terrible quality that it isn’t representative of the images we routinely handle today. And the extreme colour cast makes it useless for judging skin tones.
My guess is that quantsmooth won't handle squares in the background well, but will be better at the edges (without excessive blurring).
Update: I tested it. The image is about 10% quality, and jpeg-quantsmooth doesn't handle that. At 25% quality the result is fine; at 20% and below it's not good.
Perhaps the synthesis of all three projects combined into one (taking best of each) could give a great result.
So then...
jpeg2png: overblurs, and is slow
knusperli: only does deblocking, doesn't remove other artifacts
jpegqs: fails to deblock low frequencies (below 25% quality)
I was going to mention Knusperli as well. It's worth mentioning that the "no more artifacts" claim in the title depends heavily on what you understand to be an artifact. With modern, good quality JPEG encoders (e.g. mozjpeg), blocking only appears when the file is heavily compressed, or after a lot of generation loss. A decoder like this can't recover information that isn't there, obviously. So at more moderate levels of compression, where there's no noticeable blocking but the information inside the blocks is distorted or missing, it's not likely to help you a lot.
(As a citation, I can only offer the fact that I've frequently tried to clean up JPEG images I've found online, and Knusperli is usually my first step. It's never a sufficient step, though.)