Not sure why you have been downvoted. While the LLM's introspection can't be trusted, that's indeed what happens: asked to generate a random number, the LLM picks one that feels random enough: not a round one, not too central or too extreme, no obvious pattern, not a famous one. It ends up being the same almost every time.
It doesn't "pick" anything. It produces the most likely number after this question based on the data it was trained on! Reasoning models might "pick" in the sense that they come up with rules (like the grandparent post shows), but they will still produce the "most likely" number after the reasoning.
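A toy sketch of why this happens (the logits are made up for illustration; real models have a much larger vocabulary): if one number token gets a noticeably higher score, greedy decoding returns it every time, and even temperature sampling stays heavily skewed toward it.

```python
import math
import random

# Hypothetical logits a model might assign to the tokens "1".."10"
# after "pick a random number between 1 and 10" (invented values;
# "7" is famously over-represented in such answers).
logits = {"1": 0.2, "2": 1.1, "3": 2.0, "4": 1.5, "5": 0.8,
          "6": 1.4, "7": 3.5, "8": 1.2, "9": 0.9, "10": 0.3}

def softmax(scores):
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(logits)

# Greedy decoding (temperature 0): deterministic, always the argmax.
greedy = max(probs, key=probs.get)
print(greedy)  # → 7

# Temperature-1 sampling: some variety, but still far from uniform.
def sample(probs, rng):
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # guard against float rounding

rng = random.Random(0)
counts = [sample(probs, rng) for _ in range(1000)].count("7")
print(counts / 1000)  # roughly 0.5, versus 0.1 for a fair pick
```

The reasoning step doesn't change this picture: it just conditions the final token distribution on the generated rules, and the model then emits the most likely number under that distribution.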