Hacker News

imo modern math requires years of study to reach the border of the known.

You're right -- and modern mathematics requires more study than a field like computer science, simply because computer science hasn't had enough time to accumulate very much content. (This is one of the ideas I mentioned earlier as floating around but not fully crystallized.)

But this doesn't contradict what I was saying: my point was not about doing original research, but merely about crossing the threshold of being recognized as a prodigy. A 13-year-old attending university is certainly noticed -- but that doesn't mean he's doing original research (I didn't until I was 15).



If math's content grows exponentially, that might explain why, even one hundred years ago, it was much easier to come up with serious work without first studying for years.

Side note: it would be interesting to make precise what that content actually is. Besides the obvious content, like theorems and proofs, I also see hidden content: paradigms and problem-solving techniques. What else?


15? wow! may i look at your papers?


My publications are at http://www.daemonology.net/papers/, but this doesn't include the work I did when I was 15 -- that was a novel algorithm for computing polynomial GCDs over algebraic number fields, which I never published due to unresolved loose ends involving high-degree fields (professors encouraged me to publish it anyway -- but I didn't want to publish something "unfinished").

It turns out that those "loose ends" are rather mixed up with the problem of integer factorization, which might be why I couldn't manage to tie them up. :-)
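The novel algorithm itself was never published, so nothing here describes it; for context, though, a polynomial GCD is classically computed with the Euclidean algorithm, here sketched over the rationals rather than an algebraic number field. The coefficient-list representation and helper names below are my own illustration, not anything from the thread:

```python
from fractions import Fraction

def poly_gcd(a, b):
    """Monic GCD of two polynomials over Q via the classical
    Euclidean algorithm.  Polynomials are lists of exact Fraction
    coefficients, lowest degree first."""
    def trim(p):
        # Drop trailing zero coefficients so p[-1] is the leading term.
        while p and p[-1] == 0:
            p.pop()
        return p

    def rem(p, q):
        # Remainder of p divided by q (q nonzero), by long division.
        p = trim(list(p))
        while len(p) >= len(q):
            f = p[-1] / q[-1]          # factor cancelling p's leading term
            off = len(p) - len(q)
            for i in range(len(q)):
                p[off + i] -= f * q[i]
            p = trim(p[:-1])           # leading term is now exactly zero
        return p

    a, b = trim(list(a)), trim(list(b))
    while b:
        a, b = b, rem(a, b)
    # Normalize to a monic representative of the GCD.
    return [c / a[-1] for c in a] if a else []

# (x-1)(x+2) = x^2 + x - 2  and  (x-1)(x-3) = x^2 - 4x + 3
p = [Fraction(-2), Fraction(1), Fraction(1)]
q = [Fraction(3), Fraction(-4), Fraction(1)]
print(poly_gcd(p, q))  # → [Fraction(-1, 1), Fraction(1, 1)], i.e. x - 1
```

Over an algebraic number field the same iteration applies, but coefficient arithmetic (and controlling coefficient growth) becomes the hard part.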


damn good for 19 years old, but as i see there aren't groundbreaking works i hoped to see. =(


damn good for 19 years old

I'm 27 years old, actually...

there aren't groundbreaking works

Ouch. I'd say that my shared-cache side-channel attack work was groundbreaking (although Shamir and his graduate students were only a few months behind me). I'd say that my projective algorithm for matching with mismatches is groundbreaking. The sqrt(5)·ε error bound on complex floating-point multiplication shouldn't have been groundbreaking, but apparently was -- I've never seen numerical analysts get so excited about a ~30% reduction in an error bound.
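The bound in question concerns the relative error of computing (a+bi)(c+di) naively -- four real multiplications and two additions in IEEE 754 double precision. A minimal empirical sketch (mine, not from the thread; u = 2^-53 is the double-precision unit roundoff, and the sampling is arbitrary) checks the naive product against an exact rational reference:

```python
import random
from fractions import Fraction
from math import sqrt

U = 2.0 ** -53  # unit roundoff for IEEE 754 double precision

def rel_error(a, b, c, d):
    """Relative error of the naively computed complex product
    (a+bi)(c+di) versus an exact rational reference."""
    # Naive floating-point evaluation.
    re = a * c - b * d
    im = a * d + b * c
    # Exact evaluation over the rationals (doubles are exact rationals).
    A, B, C, D = map(Fraction, (a, b, c, d))
    re_x = A * C - B * D
    im_x = A * D + B * C
    num = (Fraction(re) - re_x) ** 2 + (Fraction(im) - im_x) ** 2
    den = re_x ** 2 + im_x ** 2
    return sqrt(num / den) if den else 0.0

random.seed(0)
worst = max(
    rel_error(*(random.uniform(-1.0, 1.0) for _ in range(4)))
    for _ in range(10000)
)
print(worst, sqrt(5) * U)  # observed worst case vs. the sqrt(5)*u bound
```

Random sampling only probes the bound, of course -- the interesting part is proving that sqrt(5)·u holds for all inputs (barring underflow/overflow), not observing it empirically.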


But how do you quantify "groundbreaking"? A lot of research can't be recognized as such until many years later, after a field has built on the foundation laid by the originator.



