
I find that the results of the GCLS are so special-cased, gamed, and compiler-optimized at this point that they basically don't reflect anything even remotely near reality.

For example, OCaml still seems to be the language I can go to for the best performance with high-level code. C++ still seems to be somewhat slower.

It's one thing to implement an Ackermann function in C++ and another entirely to make a real working application.
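
For context, the whole "benchmark" in question is roughly this much code. A minimal, unoptimized C++ sketch of the two-argument Ackermann function (illustrative only, not any particular submitted entry):

    #include <cstdio>

    // Classic two-argument Ackermann function, written naively.
    // Deep recursion, trivial arithmetic: a micro-benchmark, not an app.
    long ack(long m, long n) {
        if (m == 0) return n + 1;
        if (n == 0) return ack(m - 1, 1);
        return ack(m - 1, ack(m - 1, n - 1));
    }

    int main() {
        std::printf("ack(3, 8) = %ld\n", ack(3, 8));
        return 0;
    }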



I see your point, but I'd argue the foundations of a language should be simple enough to test in a fashion like this. Perhaps this particular test isn't good because the numbers are skewed by optimizations that don't reflect the language's abilities. A full application involves many design decisions, probably more, that aren't necessarily a reflection of the language either.


How can "C++ still seems to be slower" if C (or C++) is essentially a macro-assembler, i.e. you're free to hand-optimize any particular bottleneck.

Every time I hear someone say (or show a benchmark claiming) that "C++ is slower than C", it always comes down to some STL usage instead of flat arrays or something similar (and flat arrays are still valid C++).
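
To make that concrete, here's a hedged sketch (not taken from any actual benchmark entry): the "slow" version traverses a node-based std::list, the "C-like" version walks contiguous storage. Both are perfectly valid C++, but their cache behavior is very different.

    #include <cstdio>
    #include <list>
    #include <numeric>
    #include <vector>

    // "Idiomatic but slow": node-based std::list, one heap allocation
    // per element and pointer-chasing on every traversal.
    long sum_list(const std::list<long>& xs) {
        return std::accumulate(xs.begin(), xs.end(), 0L);
    }

    // "C-like but still valid C++": contiguous storage, friendly to
    // the cache and to vectorization.
    long sum_vec(const std::vector<long>& xs) {
        return std::accumulate(xs.begin(), xs.end(), 0L);
    }

    int main() {
        std::list<long>   a(1000000, 1);
        std::vector<long> b(1000000, 1);
        std::printf("%ld %ld\n", sum_list(a), sum_vec(b));
        return 0;
    }

On a typical optimized build the contiguous version tends to be several times faster, and that container choice is usually what "C is faster than C++" comparisons are really measuring.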


Sure, but the whole point of using a higher-level language is to use some of those higher-level features. It seems somewhat... unfair to argue that C++ "can be fast" so long as you basically restrict it to its C subset.

I want the code to be both high level and fast. A language is of no use to me as a performance target if, in order to get performance out of it, you have to use a tiny subset of it.

Sure, every language has fast and slow ways of doing things.



