Maybe things were different back when OP took calc but I’d bet any standard calculus textbook used for instruction in the past 20 years mentions this many times.
Mine didn't lead with it. It may have been buried in there somewhere but it wasn't highlighted as critically important. Instead you just got the mechanics of differentiating and integrating functions thrown at you without explanation.
This was back in the late 1990s and early 2000s. I hope things have improved.
We had a very rigorous program in high school back in Eastern Europe; it was so good that taking calc in college in the US was redundant. However, I remember having a similar click moment when I visually or intuitively understood derivatives and integrals. Before that it was juggling math mechanics, which in itself is also not bad and helps build a certain muscle that can be used later on. But I am familiar with fumbling a bit through some math classes.
My conclusion is different though. I think there are different types of thinkers and different teaching styles, and when the subject becomes very complex the disparity between the two is exacerbated. And luck plays a role in matching up with the right teacher/professor for you.
It's unfortunate you were in a geographic location and a time that exposed you to poor math teachers.
I was taught calculus in high school, well before university, in Australia in the late 1970s.
Visual analogies such as derivatives being tangential rates of change and 2nd derivatives being local curvatures were taught and drawn on the board in the first week of the two years of Calc I and Calc II classes that came before leaving high school to attend university.
This particular comment was probably downvoted because he basically just took the hottest buzzword of the last decade and inserted it with a very weak link to the topic at hand. You could copy and paste this comment to any article about any achievement and it would have the same relevance.
It is certainly not a “new way of thinking”. First of all, there is hardly any thought here. Secondly, “understanding deep learning mathematically” is a well known problem to mathematicians.
It is a comment that could have been generated by an (old, 2000s era) AI.
You know, some of us work in the field that you call the "hottest buzzword of the last decade", and we constantly see problems that could have benefited enormously from geniuses like the ones above.
So for people like me at least, it's not a "very weak link to the topic at hand".
I already agreed that the op was hinting at an important problem. The link is weak because the op put hardly any effort into explaining the (possible) connection to these two particular undergraduates and their work. I am not claiming that math and deep learning are unrelated. Please read the rest of my response for context.
BTW, I am not using 'hottest buzzword of the decade' to denigrate the field of deep learning. I am using it to emphasize the op's shallow reference to the field.
"Continental philosophy" isn't useless, either, except by the definitions of those who made up the term "continental philosophy". They have a very limited notion of what "utility" is.
The things targeted as "continental philosophy" are mostly meaning-of-life questions. Those aren't "useful", but they're important to people. It's no more useless than literature or video games or skiing, which you don't need but are exactly what people do when they're not doing things that they are required to do.
Not all "continental philosophy" is done well, and the vagueness of the question means that it's hard to distinguish rigorously between "well done" and "not well done". But that doesn't diminish meaning, or value, that people find in it.
Exactly. It's "reasoned", and it's easy to think that reason is everything worth thinking about. That is, after all, reasonable.
But a huge swath of human activity isn't reasoned. A lot of human activity is spent just sitting around wondering what it all means, trying to live a "good" life, and wondering for that matter what "good" even means. Those are unscientific questions (or at least, we lack anything close to a science for discussing them), but it doesn't make them unimportant.
Those are the questions the "continentals" are usually working on. In my opinion, they're at their worst when they're trying to sound analytical about it. The more rigorous they get, the more they make it clear that they're actually pretty bad at rigor. Derrida is the most obvious example, and honestly I have only the word of his defenders that he's somehow trying to undercut rationality rather than simply being a charlatan.
In the end I treat the continentals the way I treat fans of soccer: I'm glad you're having a good time and finding meaning in what you're doing. That's worth it. But I'm gonna get irritated if you insist that I must find meaning in it as well. Maybe I will, maybe I won't, but the harder you insist the less interested I get.
> No one "does mathematics" in ZFC.
How is this not partial evidence that ZFC is trash?
What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
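For concreteness, here is a minimal sketch (in Python, with illustrative names) of what "programming on a turing tape" looks like next to what a working programmer actually writes:

```python
def tm_increment(tape):
    """Unary increment the way a Turing machine would do it: scan right
    past the 1s, write a 1 on the first blank cell, halt. (Toy example.)"""
    cells = list(tape)
    head = 0
    while head < len(cells) and cells[head] == "1":
        head += 1
    if head == len(cells):
        cells.append("1")   # head ran off the tape: extend it
    else:
        cells[head] = "1"
    return "".join(cells)

# What a working programmer actually writes:
def increment(n):
    return n + 1
```

The point of the analogy: nobody would conclude `n + 1` is "trash" just because the tape-level version exists underneath some compilation chain.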
> I don't expect this to be solved now (tooling, as per above), but mathematicians should learn more category theory now as that works just fine pencil paper and brain. When the type theoretic tooling is ready they will be ready.
The abstractions of category theory are useless in many areas of mathematics. Prime example: PDEs.
Laypeople think that category theory is the 'ultimate math' because they hear that it provides bridges or analogies between different areas of math.
Perhaps programmers are especially prone to this because category theory does have some applications to programming.
The thing is, almost all of pure math is itself a bridge between different areas of math. Some of these areas are bridged by category theory, some are bridged by other kinds of math, which have less catchy names.
> What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
Yes it is. Turing machine models are very limited, and a programme to let us achieve the things we can do with Turing machines (mainly runtime analysis) with a better model (i.e. a lambda-calculus style model) is a very good idea.
> The thing is, almost all of pure math is itself a bridge between different areas of math. Some of these areas are bridged by category theory, some are bridged by other kinds of math, which have less catchy names.
I'd be equally interested in a programme of doing metamathematics in some non-category-theoretic model that was still "normal" mathematics in the same way that category theory is (and ZFC isn't). But I'm not aware of any such competing effort.
>> What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?

> Yes it is. Turing machine models are very limited, and a programme to let us achieve the things we can do with Turing machines (mainly runtime analysis) with a better model (i.e. a lambda-calculus style model) is a very good idea.
What you wrote is a different justification for why turing tapes are worse than lambda calculus. It has nothing to do with the number of people programming on turing tapes, which is the argument that I was responding to.
I could easily have used 'lambda calculus' instead of 'turing tape' above. Most people do not code in the lambda calculus. They write haskell or javascript or whatever.
It doesn't mean that the lambda calculus is trash.
Likewise, most mathematicians don't work directly with ZFC. Doesn't mean ZFC is trash.
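To make the analogy concrete, here is a minimal sketch (names illustrative) of what coding directly in the lambda calculus would look like, via Church numerals:

```python
# Church numerals: natural numbers encoded as pure functions, exactly as
# one would in the untyped lambda calculus. The numeral n means
# "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
# Nobody does arithmetic like this day-to-day; Haskell or JavaScript
# programs merely desugar to something in this family of models.
```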
> I'd be equally interested in a programme of doing metamathematics in some non-category-theoretic model that was still "normal" mathematics in the same way that category theory is (and ZFC isn't).
My point is that almost all pure math (e.g. linear algebra, topology, differential geometry, category theory, group theory) is already metamathematics. Of course, there is a spectrum of 'meta-ness', but I think this is a continuous spectrum. I do not think there is a well-defined division between 'mathematics' and 'metamathematics'.
For example, can you give an argument for why, say, the irrationality of sqrt(2) is not 'metamath', yet Gödel's incompleteness theorem is 'metamath'?
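For reference, the standard argument being alluded to, which stays entirely inside ordinary number theory:

```latex
% Irrationality of sqrt(2), by contradiction.
\text{Suppose } \sqrt{2} = p/q \text{ with } \gcd(p, q) = 1.
\text{Then } p^2 = 2q^2, \text{ so } p^2 \text{ is even, hence } p = 2k.
\text{Then } 4k^2 = 2q^2, \text{ i.e. } q^2 = 2k^2,
\text{ so } q \text{ is even too, contradicting } \gcd(p, q) = 1.
```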
> What you wrote is a different justification for why turing tapes are worse than lambda calculus. It has nothing to do with the number of people programming on turing tapes, which is the argument that I was responding to.
It has everything to do with it: the reason for wanting a lambda-calculus-like model is that lambda-calculus-like models are what working programmers actually program in. If programmers actually used languages that looked like turing tapes then turing tapes would be a good model for talking about programming in.
> I could easily have used 'lambda calculus' instead of 'turing tape' above. Most people do not code in the lambda calculus. They write haskell or javascript or whatever.
Haskell has been described as essentially a typed lambda calculus. You're treating this as a binary distinction when it isn't: there's a lot of value in the model we use for formal program analysis being close to the models we like to program in; whether the models are exactly identical matters a lot less than the degree of similarity. Likewise, the problem that "mathematicians don't work in ZFC" isn't just that mathematicians are doing something slightly different day-to-day; it's that ZFC is a very different paradigm from normal mathematics.
> My point is that almost all pure math (e.g. linear algebra, topology, differential geometry, category theory, group theory) is already metamathematics. Of course, there is a spectrum of 'meta-ness', but I think this is a continuous spectrum. I do not think there is a well-defined division between 'mathematics' and 'metamathematics'.
> For example, can you give an argument for why, say, the irrationality of sqrt(2) is not 'metamath', yet Gödel's incompleteness theorem is 'metamath'?
I'd argue that irrationality of sqrt(2) is applicable outside of a mathematical context - it's a fact about something we're modelling rather than solely a fact about our models ("2" and "sqrt" are of course abstract models, but they can be applied to model a variety of concrete things that we care about, and you can carry over the irrationality of sqrt(2) into at least some of those contexts, where it will translate into something meaningful and useful). Whereas Gödel's incompleteness theorem is a map for which there is no territory; it's a fact about abstract models that could never correspond to anything that wasn't an abstract model.
But if you want to regard number theory as a subset of metamathematics then I don't mind. When I say I want to be able to do metamathematics, I mean I want to be able to do all metamathematics; in particular I want to be able to talk about proofs in general. You could argue that the irrationality of sqrt(2) is a statement about proofs, but it's certainly not in a context that allows you to reason about general proofs, and number theory does not give you a first-class way to work with proofs in general (of course the Godel encoding exists, but it's extremely tedious and not useful for practical work). Likewise, as far as I know, there's no way to really talk about (general) proofs directly in terms of group theory or linear algebra.
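As a concrete illustration of why the Gödel encoding is so tedious for practical work, a minimal sketch (the symbol table and toy formula are purely illustrative):

```python
def primes():
    """Yield primes 2, 3, 5, ... by trial division."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

# A tiny, illustrative symbol table for a toy formal language.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4}

def godel_number(formula):
    """Encode a formula as the product of p_i ** code(symbol_i)."""
    n = 1
    for p, ch in zip(primes(), formula):
        n *= p ** SYMBOLS[ch]
    return n

# Even the trivial formula "0=0" encodes as 2**1 * 3**3 * 5**1 = 270;
# realistic formulas produce astronomically large numbers, which is why
# nobody reasons about proofs this way in practice.
```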
> What percentage of coders do programming on a turing tape? Is this partial evidence that turing tapes are trash? Does that question even make sense?
ZFC should be more like the machine code we actually use than the turing tape concept, which we don't. The fact that no one uses something higher level that compiles down to ZFC is disheartening.
As the FOM mailing list demonstrates, it's really about goal posts here.
For one camp, the goal posts are such that ZFC or many other things are equally good. ZFC by sheer age has the "large cardinal" advantage, in that people have been grinding away at it longer.
For the other camp, large-cardinal-type research agendas aren't very interesting, and the goal posts are dramatically different.
I still think ZFC is trash, but I will admit my mistake in thinking others share my goal posts.
> The abstractions of category theory are useless in many areas of mathematics. For example, PDEs.
At the moment that is true.
But I think this is more due to human concerns than the actual math. Until statistics overtook it, differential equations were the most-applied branch of mathematics, which definitely influenced the culture around the field. There is also the general algebraist---analyst cultural divergence.
I look forward to the day when the computer tools are so good they are used in those fields too. That should bridge the culture gap, and then we shall see what the math holds.
That's a flawed analogy. Mathematicians do use 'higher level languages', that's precisely why most of them don't care about HoTT vs set theory. Just like a web dev usually does not care about the instruction set of the processor.
I just had a flip through this book; yes, all the differential geometry is there, but I think it's a bit of a letdown. If I think "functional" I want to think more abstractly, in terms of categories, functors, functions.
The Lisp code in this book is truly not doing anything for me. It seems like a kind of direct numerical computation/translation. I actually think it would be better to do it in Haskell, which seems like it would at least let you identify some more generic structures (oh, we can actually implement this idea x as a functor/monad, blah blah).
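As a sketch of the kind of generic structure meant here (in Python rather than Haskell, names illustrative): forward-mode differentiation via dual numbers, which in a Haskell treatment would fall out of a `Num` type-class instance rather than direct numerical translation:

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0: a value paired with its
    derivative. In Haskell this would naturally be a Num instance
    rather than hand-written operator overloads."""

    def __init__(self, val, deriv=0.0):
        self.val = val
        self.deriv = deriv

    @staticmethod
    def _coerce(other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.val + other.val, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # The product rule falls out of eps**2 = 0.
        return Dual(self.val * other.val,
                    self.val * other.deriv + self.deriv * other.val)

    __rmul__ = __mul__


def derivative(f, x):
    """Forward-mode derivative of f at x: seed the dual part with 1."""
    return f(Dual(x, 1.0)).deriv
```

For instance, `derivative(lambda x: x * x + 3 * x, 2.0)` computes d/dx of x^2 + 3x at x = 2, i.e. 2*2 + 3 = 7, with no symbolic manipulation anywhere.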