The proof relies on the assertion that the supremum of an increasing sequence is equal to the limit. This is mathematical dogma, and should be introduced as such. Once that is accepted, it becomes obvious.
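For the specific case at hand, the rule can at least be checked concretely: the partial sums 0.9, 0.99, 0.999, ... form an increasing sequence bounded above by 1, and they come within any tolerance of 1. A sketch using exact rationals (the range and tolerance are arbitrary choices):

```python
from fractions import Fraction

# Partial sums 0.9, 0.99, 0.999, ... as exact rationals: s_n = 1 - 10^-n.
partial_sums = [1 - Fraction(1, 10**n) for n in range(1, 50)]

# The sequence is strictly increasing and bounded above by 1...
assert all(a < b for a, b in zip(partial_sums, partial_sums[1:]))
assert all(s < 1 for s in partial_sums)

# ...and it eventually comes within any fixed tolerance of 1,
# which is what "the limit (and supremum) is 1" means.
assert 1 - partial_sums[-1] < Fraction(1, 10**48)
```

Note that no element of the list is ever equal to 1; the claim is only about the supremum/limit of the sequence.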
This is illustrative of what I see as a fundamental problem in mathematics education: nobody ever teaches the rules. In this case, the rules of simple arithmetic hit a dead end for mathematicians, so they invented a new rule that allowed them to go further without breaking any old rules. This is generally acceptable in proofs, although it can have significant implications, such as two mutually exclusive but otherwise acceptable rules causing a divergence in fields of study.
When I was taught this, it was like, “Look how smart I am for applying this obtusely-stated limit rule that you were never told about.” This is how you keep people out of math. The point of teaching it is to make it easy, not hard.
This is in large part due to the difficulty with reasoning about infinite representations. You do have to add axioms to your system to be able to reason about 0.9999...
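The axiom being gestured at here is, in the standard construction, the completeness (least-upper-bound) property of ℝ. A sketch of where it enters (this is the textbook definition, not something stated in the thread):

```latex
% Completeness axiom: every nonempty subset of \mathbb{R} that is
% bounded above has a least upper bound (supremum).
% With it, the notation is *defined* as
\[
  0.999\ldots \;:=\; \sup\{\,1 - 10^{-n} : n \in \mathbb{N}\,\}
\]
% Without completeness, nothing guarantees this supremum exists,
% and the notation has no referent.
```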
Stating that 0.9999... = 1 without exposing these new tools meant to grapple with concepts that physically cannot be grappled with is a huge mistake.
And this I think is the real issue. When someone says that 0.999... = 1.0, what they are saying is that this is true given a number of assumptions that we are taking for granted that would not be obvious to a non-mathematician. There's a lot of math hiding in those '...'.
> the supremum of an increasing sequence is equal to the limit
-- this is not misinformation (and to anyone familiar with some introductory analysis, correct[1]). Of course, calling it "dogma" is a bit inflammatory, but not technically wrong. It's kind of a made-up rule to help us work with infinities (particularly in ℝ -- but it happens all the time in set theory, as well).
But to agree with GP, touting it as "intuitive" or "mind-blowing" is indeed silly.
I do think it is technically wrong to call it dogma - the decimal is a geometric series with a limit, right? And limits have an unambiguous (ε-N) definition; for this increasing series it works out to the smallest value that the partial sums approach but never exceed. I think the part that is admittedly weird is that the notation "0.999..." refers to the limit of the series, and it kind of hides that fact from you. Even just writing the geometric series down and plopping "infinity" in as the value for x would be wrong; it's the limit as x tends to infinity that equals 1. So there's arguably more hidden notation than the ellipsis implies, but nothing is pulled out of a hat here or defined for definition's sake.
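Spelled out, the notation hidden behind the ellipsis (writing N for the index the comment calls x) is:

```latex
\[
  0.999\ldots
  \;=\; \sum_{n=1}^{\infty} \frac{9}{10^n}
  \;=\; \lim_{N \to \infty} \sum_{n=1}^{N} \frac{9}{10^n}
  \;=\; \lim_{N \to \infty} \bigl(1 - 10^{-N}\bigr)
  \;=\; 1.
\]
```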
> the decimal is a geometric series with a limit, right?
Right, but that's not really the crux of the matter. Hint: look at how the supremum is defined[1]. The definition of the supremum is how we end up with 0.999... = 1.
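The least-upper-bound argument being hinted at, sketched: 1 bounds every partial sum, and no smaller number does.

```latex
% 1 is an upper bound: 1 - 10^{-n} < 1 for every n.
% No b < 1 is an upper bound: pick n large enough that 10^{-n} < 1 - b
% (possible by the Archimedean property); then
\[
  1 - 10^{-n} \;>\; 1 - (1 - b) \;=\; b,
\]
% so b fails to bound the set. Hence
\[
  \sup\{\,1 - 10^{-n} : n \in \mathbb{N}\,\} \;=\; 1.
\]
```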
I suppose my point is that you could turn the repeating decimal into an infinite series, and a student might accept that; you could define the supremum and they might agree that it is 1. But then you ask them whether the series is equal to the supremum, and they don't know what to do with the series, so they turn it into a sequence of partial sums. Now they ask whether the last item in the sequence is equal to the supremum. Of course not! That is true by definition.
And now you realize that you and the student have been operating by different rules. Their rules of equality are based on symbolic equality, so you actually have to relax the rules a bit to make limit equality work. And then, more importantly, you have to show that all the other rules are still intact. Actually, in this case, they aren't. Symbolic equality involving infinity is now horribly broken, and you have to express all equality in terms of limits to maintain consistency. Explore this further and you keep finding more inconsistencies that have to be settled by new rules that define new areas of mathematics.
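The tension between the two notions of equality is easy to demonstrate: no individual term of the sequence equals the supremum, even though the limit does. A sketch using exact rationals (the bounds are arbitrary):

```python
from fractions import Fraction

# Partial sums of 0.999... as exact rationals.
terms = [1 - Fraction(1, 10**n) for n in range(1, 100)]

# "Symbolic" (term-wise) equality: no term is ever equal to 1.
assert all(t != 1 for t in terms)

# Limit equality: the terms eventually fall within any epsilon of 1.
eps = Fraction(1, 10**50)
assert any(1 - t < eps for t in terms)
```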
So who is right? The natural world appears to be much more permissive than limit equality, preferring epsilon-equality. Symbolic equality is the only purely self-consistent system, but you can't do much with it. It's also possible that the natural world works with symbolic rules (quantum) but the complexity is great enough to resemble epsilon equality (continuum).
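What the comment calls epsilon-equality is roughly what floating-point comparison does in practice, e.g. with Python's `math.isclose` (the tolerance shown is its default, and the example numbers are just illustrative):

```python
import math

# Symbolic equality fails: 0.1 + 0.2 and 0.3 are different doubles.
assert 0.1 + 0.2 != 0.3

# Epsilon-equality holds: they agree to within a relative tolerance.
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
```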
So, .999... == 1 by tautology. It's not some brilliant mathematical insight. The interesting part is the consequence of defining it as so.