Hacker News

>As you can see, notation in functional languages is much closer to classic mathematical notation.

I have yet to be convinced that this is a good thing.



In the example in question, the author compares apples to oranges: an imperative iterative Fibonacci print-first-n-fibs vs. a functional recursive Fibonacci return-nth-fib. An imperative recursive Fibonacci return-nth-fib would be far closer to classical mathematical notation.

A good way to cut through the ideological BS is to look at how mathematicians write pseudo-code. It's almost always either pure imperative, or imperative with a tiny smattering of OOP/FP stuff (no thicker than you'd see in a sophisticated C project).


    int fib(int n)
    {
        return (n <= 1) ? n : fib(n-1) + fib(n-2);
    }
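For contrast, here is a minimal sketch (mine, not from the thread) of the imperative iterative shape the original comparison used; `fib_iter` and `print_fibs` are hypothetical names:

```c
#include <stdio.h>

/* Iterative return-nth-fib: the imperative counterpart of the
   recursive version above. */
int fib_iter(int n)
{
    int a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        int next = a + b;   /* advance the (a, b) pair one step */
        a = b;
        b = next;
    }
    return a;
}

/* The "print the first n fibs" shape from the original comparison. */
void print_fibs(int n)
{
    for (int i = 0; i < n; i++)
        printf("%d\n", fib_iter(i));
}
```

The point stands either way: the recursive C version above is already close to the usual mathematical definition, so the gap is between iteration and recursion, not between imperative and functional notation.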


For my part I'm absolutely convinced it's a horrible thing. The set of people that are comfortable with mathematical notation is tiny, even as a proportion of software developers.

For my part, I am very much in favour of a lot of what is espoused by FP advocates, but the typical approach to syntax is definitely not one of the things I'm in favour of. I'm convinced that "funny syntax" is a key reason why so many of the ideas of FP languages, as well as of non-FP languages with odd syntax (like Smalltalk), end up getting reinvented over and over again, decades after the fact, in languages with less obstructive syntax.

But then again I'm one of those weirdos who believes that even CS papers substantially abuse and overuse mathematical notation where code or pseudo-code would communicate the ideas better and to a wider audience.

And before anyone complains about the lack of rigour if papers were to include pseudo-code: during my MSc research, of the 60 or so papers I reviewed, exactly none included sufficient detail in the notation they used to replicate their experiments directly, even if you'd had access to the same data sets. There were always lots of variables where, if you were lucky, they'd defined an interval of reasonable values, or where they lacked sufficient formality - e.g. they'd specified mathematical operations without taking rounding or precision into account, while experimentation made it clear their actual results depended heavily on how you rounded or what precision you used. Often these flaws would have been blatantly obvious as fuzzy handwaving if they'd written them out as pseudo-code or "just" plain English. Over many years both before and since, I've come to expect this as the norm for CS research, and I tend to see mathematical notation as a big warning sign that what follows is likely to be full of handwaving and missing details - the exceptions are few and far between.

We see some of the cost of this in the sheer amount of CS research that dies on the vine, without getting wider attention, because a lot of research requires "interpreters" with a foot in each camp, and the good ones are almost as rare as unicorns.

Personally I care much less about lists of language features than I care about whether typical programs read well and easily, and the only people I ever encounter who consider mathematical notation easy to read are mathematicians and a tiny, tiny subset of developers with sufficient interest in maths.

Using mathematical notation for programs is about as sensible as using some minor natural language you happen to be familiar with, but which most other developers aren't. Norwegian would work fine for me...


>But then again I'm one of those weirdos who believes that even CS papers substantially abuse and overuse mathematical notation where code or pseudo-code would communicate the ideas better and to a wider audience.

Agreed. I've been studying DSP for the past two years and it's amazing how many fundamentally simple concepts are obscured by jargon and overly formal mathematical descriptions. I understand that it's important to define these things precisely and rigorously on some level but I think there are much more effective and less pedantic ways of teaching these ideas.


If only they did define these things precisely and rigorously. Often it's sloppier than a lot of pseudo-code I see.

E.g. I did my MSc on reducing error rates in OCR through various statistical methods. As part of that I needed to review a lot of papers applying various filters to input images. I went through at least a dozen papers on thresholding (filtering out noise by excluding all pixels under a certain brightness, for example).

Nearly all the papers had formulas. In this case it was fine. A simple-stupid (and not very good) thresholding function is simply (pseudo code): f(color) = intensity(color) > threshold ? color : 0. It's hard to obscure that by changing the notation - it'll be obvious in most cases what such a simple function does.

But nearly all of the results depended on specific values for specific variables in those formulas, and nearly all of the papers left out sufficient information about which values worked best (or worked at all), and/or did what I did above: defined another function - intensity() in my case - that was left undefined, even though the method you use to determine intensity has a drastic impact on the results.
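To make that concrete, here is a small sketch (my own illustration, not from any of the papers) showing two common but different ways intensity() could plausibly be defined; the names and the naive-average/Rec. 601 choice are my assumptions:

```c
#include <stdint.h>

/* One pixel, 8-bit RGB. */
struct rgb { uint8_t r, g, b; };

/* Two plausible definitions of intensity(); a paper that leaves
   intensity() unspecified has implicitly picked one of many such
   choices. */
static int intensity_avg(struct rgb c)
{
    return (c.r + c.g + c.b) / 3;                 /* naive average */
}

static int intensity_luma(struct rgb c)
{
    /* Rec. 601 luma weights, a common alternative. */
    return (299 * c.r + 587 * c.g + 114 * c.b) / 1000;
}

/* The simple-stupid threshold from above: keep the pixel if its
   intensity clears the threshold, else zero it out. */
static struct rgb threshold(struct rgb c, int t,
                            int (*intensity)(struct rgb))
{
    struct rgb zero = {0, 0, 0};
    return intensity(c) > t ? c : zero;
}
```

For a pure blue pixel {0, 0, 255}, the average gives 85 while the luma gives 29, so with a threshold of 50 one definition keeps the pixel and the other discards it - exactly the kind of unstated choice that makes results impossible to replicate.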


(Peer) review is the problem. Everyone's working hard to sound as intelligent and rigorous as possible, which often means committing literary atrocities. I'm writing my master's thesis right now, with passive voice and everything. Just because that's the way it should be done, apparently.

I've jokingly considered exchanging "time" with "temporal dimension main anthropoperceptory vector". It does sound more impressive, doesn't it? It's such a masquerade.


Agreed. I spent about half the time on my master's thesis making word and phrase changes that made the end result harder to read and understand.


I agree. I've tried to get into Clojure several times. The thing is I actually like the first code example. It's more explicit to me about what is happening.


Let me use Haskell here.

I don't think that Haskell's notation is so much mathematical[1] as it is similar to, yet better than, mathematical notation. Programmers have to be understood by an entity that does not understand hand-waving or shortcuts ("this is trivial"). What you sometimes end up with as a result is a concise yet easy-to-understand notation with a small set of rules to remember, as opposed to ad-hoc rules that can be overridden at any point for the author's convenience[2]. And you don't even need to install LaTeX.

I like to think of FP in the style of Haskell not so much as mathematical but as expression-oriented. It's simple to see where an expression starts, where it ends, and what subexpressions it consists of. It is simple to compose expressions, as opposed to statements, which you typically just make longer lists of or stuff into procedures.

PS: Do note some researchers'/Haskell programmers' fondness for writing "Haskell" as mathematical LaTeX - things like replacing certain standard operators with prettier ones, e.g. rendering `.` as the `∘` symbol more usually associated with function composition in mathematics. This makes research papers harder to understand if their goal is to appeal to Haskell programmers' existing knowledge. An even worse offense is Graham Hutton's "Programming in Haskell" introductory book, which to some degree uses this practice in the code samples. Who on earth thought that would be appropriate for an introductory book?

[1] I'm referring to "math" as most people encounter it. Not any specific discipline or calculus, like the lambda ones.

[2] And yes, you should be able to expand macros and such, if they are in play.



