One principle that Niklaus Wirth has always espoused and with which I agree: you cannot reduce the complexity of a programming language by expanding the size of the language with greater features. This sounds like a contradiction, but as Wirth says in the preface to his Oberon programming tutorial:

"The language Oberon emerged from the urge to reduce the complexity of programming languages, of Modula in particular. This effort resulted in a remarkably concise language. The extent of Oberon, the number of its features and constructs, is smaller even than that of Pascal. Yet it is considerably more powerful."

FreePascal has grown considerably over time. The same can be said of many popular languages today: PHP, Ruby, Python, C#, JavaScript, etc. Some languages start big from the beginning (e.g. Ada).

With large languages there is also the feeling you've only learned a subset of the language, blissfully unaware of many other features the language offers. At least with smaller languages you have a greater chance to master the language whole.

Of course, it doesn't automatically follow that a smaller language will be simpler or easier to grasp. (There is plenty of argument surrounding Go's supposed simplicity.)



At some point the language becomes so small that the true featureset is hidden in libraries or extensions.

Prime example: Lisp. The language has the smallest possible syntax, but its semantics are mostly in special forms. If you don't know the exact meaning of all special forms, you don't know the language.


Right. To claim that Lisp is more flexible than any other programming language is like claiming that you are much more flexible at the atomic level than at the molecular level. Flexibility has its price. It's always a trade-off.


Ada was a real bear of a language. When it was introduced as the language of instruction in some class at UIC back in the 80s, students had to work in teams, as a single compilation was enough to consume the weekly CPU time allocation for a student account on the VM/CMS mainframe. They had to set up file sharing between accounts and move to a new account with each new day's login. I think there were a few cases where students ran out of CPU time even with pooled accounts before they could complete an assignment.


One of the nice things about Oberon-07 is that the compiler I’ve played with could compile itself and the standard library all in a tiny fraction of a second.


When I benchmarked oberonc [0], an Oberon-07 self-hosting compiler for the JVM, it took about 100 ms with a hot VM on an old Intel i5 @ 2.80GHz. That compiler follows the same one-pass compilation approach.

[0] https://github.com/lboasso/oberonc


I took a look at the list and found this: https://github.com/prospero78/Oberon07ru

Looks interesting. I wonder how hard/easy it would be to port this to Linux/Mac, for example.


I think Krotov’s is the one that I have tried. The package includes (or used to include) a simple IDE (a code editor) also written in Oberon, which provides another good example.

http://exaprog.com/

https://github.com/AntKrotov/oberon-07-compiler

Linux seems to be supported.


If you are looking for an Oberon compiler that is easy to port then you could consider OBNC.

http://miasap.se/obnc/

The compiler is written in C and should compile on many POSIX-compatible systems. It works like a charm on the Pinebook Pro, for example.


V compiles itself in 150 ms on my machine.

0.6 seconds on a free tier AWS instance:

https://fast.vlang.io


^^ We could put that message in a dictionary as the definition of "shameless plug".


Note that before Oberon-07 came along, Oberon was followed by Oberon-2, Component Pascal, Active Oberon and Zonnon.

Although Wirth seems to only have been involved with Oberon-2 and Component Pascal.


I'm reading Hardcore Software by Steven Sinofsky as he writes it, and it's very interesting.

https://hardcoresoftware.learningbyshipping.com/

One of the things he talks about is the approach the Microsoft Tools team took to implementing MFC in C++. He talks about a lesson he learned from Martin Carroll, one of the early C++ gurus: essentially, the fact that a feature exists confers no obligation to use it. “You’re writing code for your product, not a compiler test suite.” He took that and turned it into the MFC team's approach of using C++ as a better C, not OOP for the sake of OOP.
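To make that concrete, here is a minimal, hypothetical sketch of the "C++ as a better C" style described above (not actual MFC code): plain structs and free functions, using only the features that pay their way, rather than a class hierarchy for its own sake.

    // Hypothetical sketch of "C++ as a better C": structs, free functions,
    // references and namespaces -- no inheritance or virtual dispatch.
    #include <cstdio>
    #include <vector>

    namespace geom {

    struct Rect {
        int left, top, right, bottom;
    };

    // A free function taking a const reference: no base class, no virtual
    // calls, just a clearer signature than the C equivalent.
    int area(const Rect& r) {
        return (r.right - r.left) * (r.bottom - r.top);
    }

    int totalArea(const std::vector<Rect>& rects) {
        int sum = 0;
        for (const Rect& r : rects) sum += area(r);
        return sum;
    }

    } // namespace geom

    int main() {
        std::vector<geom::Rect> rects = {{0, 0, 2, 3}, {1, 1, 4, 4}};
        std::printf("total area: %d\n", geom::totalArea(rects));
        return 0;
    }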

I remember those days well. Even then I was mostly working on Unix systems, with a bit of Windows here and there, but I was well aware of what MS was doing at the time. He gives a really engaging account of those times, and it's interesting how many of the lessons he learned and talks about are still relevant today.


But the principle also applies the other way around: if the programming language is too "simple", complexity still arises, because you have to explicitly write into the code many things that the language could have simplified for you. This makes the code larger than necessary and harder to understand. So it needs the right balance. From my point of view, Oberon-07 is already too simple. Personally, I find Oberon-2 the more practical programming language (e.g. it has dynamic arrays). I also find the requirement to capitalize keywords very impractical.


> you cannot reduce the complexity of a programming language by expanding the size of the language with greater features. This sounds like a contradiction

(Maybe I'm missing something, but) I'm not sure why this should sound at all like a contradiction – how could increasing a language's size/features reduce its complexity?! Maybe you meant "the complexity of programs in a language"?


It's often developers who are pushing for new features in a programming language. They think the addition of a feature will make it easier (or more convenient) to code the problem they face. Possibly reduce the lines of code.

That is the contradiction: does the addition of features make the language more complex? Or do the features help simplify the code? Different answers for different languages?

It will be interesting to see the trajectory of new languages like Nim, Julia and Rust. These are medium-sized languages, and Rust in particular is already considered complex by some programmers. Only time will tell how much they grow in language features and complexity.


Thanks. It still feels to me like you're talking about the complexity of programs in the language, not the complexity of the language.


If the language's simplicity makes implementing the language (compiler, libraries, tooling, or whatever) easier, at the expense of making it harder to write programs in the language, then you're going to have a nice, easy-to-implement language that nobody will actually use to write programs. And what's the point of that?


Herb Sutter, a C++ expert, said that it is possible (though it's not the general case) to simplify C++ by adding features. IIRC, one such feature is the "spaceship" operator <=>, which simplifies writing comparisons.

You do need to think carefully about how to add features, though.
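To illustrate the kind of simplification meant here, a minimal sketch (hypothetical Date struct, assuming a C++20 compiler; not an example taken from Sutter):

    #include <compare>
    #include <iostream>

    struct Date {
        int year, month, day;

        // One defaulted declaration replaces six hand-written operators
        // (==, !=, <, <=, >, >=); that is where the simplification comes in.
        auto operator<=>(const Date&) const = default;
    };

    int main() {
        Date a{2021, 5, 1}, b{2021, 6, 15};
        std::cout << std::boolalpha
                  << (a < b) << '\n'    // true
                  << (a == b) << '\n';  // false
        return 0;
    }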


Thanks. This does just seem to be a different way of talking about things! It seemed strange at first. What you talk about also seems to make the language more complex, and programs simpler. Seems like you and open-source-ux don't see it/talk about it that way though.

By "the language", I mean every part of it, (I thought that's what everyone means!) and you two seem to mean, only the parts of a language used in the programs you write.



