It's an interesting phenomenon. When I first started using LLMs, I was impressed by their natural language generation capabilities, and thought they wrote considerably well - elegant structures and so on.
But after a while those structures became a sort of signature of LLM writing. The models lean on the same style way too much, and after enough interactions it becomes grating to read.
You get a point multiplier for rewriting parts of whatever vomit the LLM gave you.
`1 x 0` is still `0` though.