Hacker News

The article starts with a false equivalence between archaic verbose language and mathematical notation.

For my part, I usually prefer the prose. Formulas are easier to read for people practised in mathematical notation, but even for them only when the formulas are sufficiently complete, and formulas often seem to come with huge leaps and unstated assumptions.

Especially in computer science papers, I tend to see extensive use of formulas over prose (or, preferably, code or pseudo-code) as a big warning sign: it often means the author is glossing over a massive amount of hugely important detail. A common problem I saw when studying (I was working on image-processing techniques to improve OCR results) was papers that would describe a process but omit any indication of sensible ranges for parameters that were essential to getting good results, or that would greatly obscure details of an algorithm that would have taken no more lines to write out as working code.
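To illustrate the point (this is a hypothetical sketch, not code from any of the papers in question): a paper might say "binarize the image with a local adaptive threshold" and stop there, while working code is forced to make the essential parameters, and their sensible ranges, explicit.

```python
def binarize(image, window=15, bias=0.05):
    """Binarize a grayscale image (list of rows, values in [0, 1]).

    window: odd local-window size in pixels. Roughly 11-31 is sensible
            for scanned text at ~300 dpi; too small amplifies noise,
            too large smears strokes into the background.
    bias:   fraction subtracted from the local mean, roughly 0.02-0.10;
            higher values suppress faint background texture.

    These ranges are exactly the kind of detail a formula-only
    description tends to omit.
    """
    h, w = len(image), len(image[0])
    r = window // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Local mean over a window clamped to the image bounds.
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            vals = [image[yy][xx] for yy in ys for xx in xs]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if image[y][x] > mean - bias else 0
    return out
```

Nothing here is clever, but a reader can run it, and the docstring has to commit to concrete defaults and ranges in a way that symbolic notation rarely does.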



Does prose not also often come with huge leaps and unstated assumptions?


I think the solution is probably to have it all, according to your taste: code because you can run it as a working proof, and explanations in both comments and prose.


How do you prevent discrepancies between the two? You already see this with comments in source code, which eventually get outdated.


I think the answer to this, if my experience as a maintenance programmer is worth anything, is: don't trust the prose. Trust the code (proof). This is why strong type systems are so insanely valuable in programming, especially if they encode proofs of side effects or lack thereof.

Maybe there's a sort of golden middle way here where all mathematical prose-proofs should be annotated[1] by the associated computer-checked proof. The trustworthiness of a particular proof could be assessed by the number of such references and how much coverage (of the prose) they provide...?

[1] Perhaps only by reference, as here. :)
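As a toy illustration of what such an annotation might look like (in Lean, purely as a sketch): the prose step "the sum of two even numbers is even" could carry a reference to its machine-checked counterpart.

```lean
-- Hypothetical annotation target for the prose claim
-- "the sum of two even numbers is even".
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k := by
  cases hm with
  | intro a ha =>
    cases hn with
    | intro b hb =>
      exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

The prose remains the readable account; the checked proof is what you actually trust, and "coverage" would be how much of the prose has such a counterpart.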

EDIT: I say this as someone who, earlier in xir career, probably subjected a lot of people to somewhat verbose comments. In practice, my comments were usually right, and the programming language wasn't powerful enough to capture the semantics of what I was doing. One hopes this is the distinction between good and bad comments. Sorry for veering off-topic.


Well, I think you are talking about actual source code, not expository or academic writing. In the source-code context, I would not trust comments even if a company had a strict policy of updating them, because I just don't want to be susceptible to their going stale.

It does make me wonder, though, about those computer-generated proofs that are so massive no human can understand them. If you can run it...?


Hopefully, that won't be a problem with code written for a paper. You can freeze it then (for that version), unless an erratum is required.


It does! And that’s OK!

A proof is a proof if you’re convinced. Ideally, a proof is correct, but that’s not always the case.

(Except for undergraduates! Show your work! ;))


The archaic prose bit is there to point out that mathematical notation has evolved, in order to set up a presentation of a possible future evolution, namely hierarchical structure for proof parts. The article is not about prose vs. symbolic notation.


I have seen another instance of the talk summarized in the article. It is about machine-checked proofs (and even if you haven't seen the talk, have you seen who is giving it?). You won't see “huge leaps and unstated assumptions” in that format, although unimportant details may be relegated to fourth-level indentation or preliminary lemmas.



