I find it amusing that there isn't a single word about runtime performance in the whole essay.
In the end, when I code in C++, it's because I believe my code will run faster and generally use less resources, and I'm ready to accept the sacrifices it takes (in terms of my own time) to get there. Does it make me someone for whom software is about "doing it in a certain way" ? Probably. If that certain way is "performance" for problems where it matters, I'm even proud of it.
C++ programmers don't come to Go because it turns out Go doesn't shine on the problems where C++ was the answer. Simple as that, and there's nothing wrong with that, as long as Go finds its niche and solves problems relevant to its users. I'm not sure why Rob Pike needs to be so dismissive.
What gets me about MOST of the stuff I read criticising C++ is the complete disregard for runtime performance. I write software that has to consume billions of messages a day per instance. I've done it in several languages for different purposes. At the end of the day, if you implement the same algorithm in C++, it will be faster. I don't care what you think you can do in node.js; you are wrong. When you have limited resources there is no option save C and C++. It isn't a big deal; that is currently how we get things done.
People fucking around with distributed web applications are immune to the specific cost of adding a line of code, because what they write is rarely executed repeatedly in a tight loop, and thus they lose the ability to quickly measure performance impact.
One programmer costs ~ 300 cores. (Bay Area salary with benefits vs. AWS EC2 pricing)
If your code consumes more than 300 cores, you should care about performance. If your code consumes less, you should care about productivity.
Adding cores is easy while adding programmers is hard. Cores scale linearly while programmers don't. So I set the guideline at 1000 cores. Do not make performance-productivity tradeoffs below that.
What on earth makes you think that all problems can be scaled out to as many cores as you like? This is exactly the web developer mindset that people are referring to elsewhere in this thread.
Well you're trading one kind of B.S. for another kind of B.S.
There's a lot of B.S. that comes with C++, and there's an entirely different kind of B.S. involved with writing things in Java + Hadoop.
Personally, I stay out of the C/C++ ecosystem as much as I can, because threads are never really going to work in the GNU world: you can't trust the standard libraries to be thread-safe, never mind all the other libraries.
The LMAX Disruptor shows that if you write code carefully in Java it can scream. They estimate that they could maybe get 10% better throughput in C++ at great cost, but the average C++ programmer would probably screw up the threading and make something buggy, and a C++ programmer that's 2 SD better than the mean would still struggle with cache line and other detailed CPU issues.
The difference between the LMAX Disruptor and the "genius grade" C++ I've seen is that the code for the Disruptor is simple and beautiful, whereas you might spend a week and a half just figuring out how to build a "genius grade" C++ program, with each build taking half an hour a pop.
Really, you're trading execution speed for productivity, not "BS for BS", when you use these so-called "web languages". In some cases there are other concerns, such as memory usage or the software environment (e.g. try installing a Java program on a system that doesn't allow JIT compilation).
Some problems can scale out, but only if latency between nodes is low enough and bandwidth is high enough. For example, an MMO server would not function as well if there was a 50 msec ping between nodes. You may or may not have control over that depending on what cloud service you use.
These are real concerns and should not be trivialized as "BS for BS" or "throw more virtualized CPU cores at it". Every problem is different; it should be studied and the best solution for the problem applied.
I'm talking about parallel programming, in general, as a competitor to high-speed serial programming.
In that case it is a matter of one kind of BS (wondering why you don't get the same answer with the GPU that you do with the CPU, waiting 1.5 hours for your C++ program to build, etc.) vs another kind of BS (figuring out problems in parallel systems.)
Not all problems scale out like that, but you can pick the problems you work on.
Java performs well as long as you're CPU bound. But memory is becoming cheap enough to keep substantial parts of a database in memory. Avoiding all that IO translates into enormous performance gains. Unfortunately, in Java (using the Oracle VM) you can't keep a lot of data in memory without getting killed by garbage collector pauses.
The genius of disruptor was in the data structure and access mechanisms, plus the fact that it worked for single producer / single consumer circumstances. It is certainly not an example you can tout for how Java is as fast as C/C++ under all circumstances if you are 'careful'. I think you are just falling prey to confirmation bias w.r.t. 'beauty' of code.
I'd say in some real life situations the gap is less than people think.
Back in the 1990's, when JIT compilation was new, I wrote a very crude implementation of Monte Carlo integration in Java that wasn't quite fast enough to do the parameter scan I wanted. I rewrote the program in C and switched to a more efficient sampling scheme.
When it was all said and done, I was disappointed with the performance delta of the C code. Writing the more complex algorithm in Java would have been a better use of my time.
But there are several things Java insists on that will cost you performance and that are very, very difficult to fix.
1) UTF-16 strings. Ever notice how sticking to byte[] arrays (which is a pain in the ass) can double performance in Java? C++ supports everything by default: Latin-1, UTF-8, UTF-16, UTF-32, and so on, with sane defaults, and supports the full set of string operations on all of them. I have a program that caches a lot of string data. The Java version is complete, but uses >10G of memory, where the C++ version storing the same data uses <3G.
2) Pointers everywhere. Pointers, pointers and yet more pointers, and more than that still. So data structures in Java will never match their C++ equivalents in lookup speed. Plus, in C++ you can do intrusive data structures (not pretty, but they work), which really wipe the floor with Java's structures. If you intend to store objects with lots of subobjects, this will bite you.
As if this weren't bad enough, Java objects feel the need to store metadata, whereas C++ objects pretty much are what you declared them to be (the overhead comes from malloc, not from the language), unless you declared virtual member functions, in which case there's one vtable pointer in there.
In Java, it may (sadly) be worth it to not have one object contain another, but rather to copy all fields from the contained object into the parent object. You lose the benefits of typing (especially since using an interface for this will eliminate your gains), but it does accelerate things by keeping everything together in memory.
3) Startup time. It's much improved in java 6, and again in java 7, but it's nowhere near C++ startup time.
4) Getting in and out of java is expensive. (Whereas in C++, jumping from one application into a .dll or a .so is about as expensive as a virtual method call)
5) Bounds checks. On every single non-primitive memory access, at least one bounds check is done. This is insane. "int a[5]; a[3] = 2;" is 2 assembly instructions in C++ and almost 20 in Java. More importantly, it's one memory access in C++ and 2 in Java (and that's ignoring the fact that Java writes type information into the object too; if that were counted, it'd be far worse). Java still hasn't picked up on Coq's tricks (you prove, mathematically, what the bounds of a loop variable are, then you try to prove the array is at least that big; if that succeeds, no bounds checks).
6) Memory usage, in general. I believe this is mostly a consequence of 1) and 2), but in general Java apps use a crapload more memory than their C++ equivalents (normal programs, written by normal programmers).
7) You can't do things like "mmap this file and return me an array of ComplicatedObject[]" instances.
But yes, in raw numeric performance, avoiding all the above problems, Java does match C++. There actually are (contrived) cases where Java will beat C++. Normal C++, that is. In C++ you can write self-modifying code that does the same optimizations a JIT can do, and you can ignore safety (after proving to yourself that what you're doing is actually safe, of course).
Of course Java has the big advantage of having fewer surprises. But over time I tend to work on programs making this evolution: python/perl/matlab/mathematica -> Java -> C++. Each transition yields at least a factor-2 difference in performance, often more. Surprisingly, the Java phase tends to be where new features get implemented, because you can't beat Java's refactoring tools.
Python/Mathematica have the advantage that you can express many algorithms as an expression chain, which is really, really fast to change. "Get the results from database query X, pull out fields x, y, and z, compare with this-and-that other array, sort the result, get me the grouped counts of field b, and graph me a histogram of the result" is 1 or 2 lines of (not very readable) code. When designing a new program from scratch, you wouldn't believe how much time this saves. IPython notebook FTW!
Hadoop and the latest version of Lucene come with alternative string implementations that avoid the UTF-16 tax.
Second, I've seen companies fall behind the competition because they had a tangled-up C++ codebase with 1.5-hour compiles and code nobody really understands.
The trouble I see with Python, Mathematica and such is that people end up with a bunch of twisty little scripts that all look alike, you get no code reuse, nobody can figure out how to use each other's scripts, etc.
I've been working on making my Java frameworks more fluent, because I can write maintainable code in Java and skip the 80% of the work it takes to get the last 20% of the way there with scripts.
"What on earth makes you think that all problems can be scaled out to as many cores as you like"
It certainly can't. You can't do that for an app that runs on a phone, for example.
But, when possible, this is the cheapest way to do it.
Not to mention other cases where it's the "only" way to do it (CPU-heavy processing, video processing, simulations, etc.). A smart developer can help shave off a percentage, but with limited returns.
For example, Facebook invested in their PHP compiler, since their server usage would only increase, while the resources (in terms of people) required for that gain are more or less constant.
I'm sorry, but I can't outsource the computation of real time ultrasound denoising to EC2. Nor can I do the work of my LTE radio modem on EC2. Clouds and scaling out on clusters are great answer to a certain set of problems, but far from a panacea.
I'm pretty sure ultrasound denoising does not run on >10,000 cores. The point was more that "cloud this, cloud that" solves many types of problems, but real-time is not one of them. Cf. why you can't play a game in the cloud by sending JPEG screenshots of the rendered scene 60 times per second while polling a joystick at 120 Hz and sending that input to the cloud.
I don't know why you bothered to provide a link and no commentary. This proves that people have attempted it, not that it is common nor that it produces equivalent results. I'm well aware that for some games, it can work "ok". Let me know when you get uncompressed 2K video at 60 FPS and <16 msec input response.
You suggested that it couldn't be done for games, so I gave you a link showing that it can be and has been done. There are a number of reasons why that isn't a popular way to play games, but it's not primarily a technical issue.
You certainly could pull off this architecture in a setting with a good LAN connection. Though I'm not really arguing that it's necessarily a great way to go.
Yeesh, way to mince words. Yes, it "can" be done... I guess what I meant by "can't" was "provides a poor experience to the point where it is generally unacceptable, and therefore not a solution; hence the impossibility could just as easily be expressed as `can't be done [right now]`".
>There are a number of reasons why that isn't a popular way to play games, but it's not primarily a technical issue.
It is entirely technical. Everyone who tried it shit all over it because it was a terrible experience, entirely due to limitations of internet connectivity (both latency and bandwidth).
I suppose the fact that many users have shitty internet connections is a technical issue. But for users with a high bandwidth low latency connection (FiOS, LAN, dedicated fiber) there's really no technical reason it can't work quite well.
The primary reasons these services didn't take off is that many users have shitty connections and almost everyone has a fast enough computer.
> One programmer costs ~ 300 cores. [...] If your code consumes less [than 300 cores], you should care about productivity. Adding cores is easy while adding programmers is hard. [...] Do not make performance-productivity tradeoffs below [1000 cores].
Computing time is cheap these days, but this kind of math doesn't make any more sense than comparing feet to miles per hour. Programming time saved is a one-time gain, whereas the performance loss is continuous. Let's say you write code for a single core, you spend 10 hours instead of 20 by accepting a 50% slowdown, and you manage to compensate by adding an extra core. Depending on how long this code runs, there will be a point at which the ongoing cost of the extra core surpasses the one-time saving of 10 hours of programming.
If all you need is a working prototype then sure, performance shortcuts may be worthwhile. (Although you can't calculate trade-offs as suggested). But for long-term production systems they will always start hurting at some point.
Unless you never make changes to your software, development time is as continuous as run time. As someone pointed out below, the fact that Google finds value in Go seems to point to there being enough of a cost in development time that they're willing to sacrifice run time to reduce it.
That said, what makes sense for Google doesn't necessarily make sense for the rest of us.
Yeah, as I said it depends on how long the code ends up running. For sections of the code that you end up changing all the time you'll have a much higher proportion of developer time to "running core" time, so you obviously can reduce costs more on the productivity side. But there's no simple, 300-cores-per-developer math for it.
Generally I'd say it depends on the component. There are tons of components that never change, at least in my apps. Splitting them off and moving them to java or C++ provides gigantic gains.
In practice, I think a lot of programmers simply don't know how to call C/C++ from Python, even though it has become so much easier since ctypes. Thus doing this is derided as a waste of time, dangerous, and whatever. You'll soon see that doing this has other advantages (like type safety).
Not all code is server-side code that can be addressed by elastic computing. Most C++ programmers work on desktop, mobile, and embedded programs. In such a domain, it is very likely that your code will be running on more than 10,000 cores on launch day (and with little or no intercommunication between the cores).
> One programmer costs ~ 300 cores. (Bay Area salary with benefits vs. AWS EC2 pricing)
That is gibberish. If I work at a company and we have 5 machines with 12 cores apiece already in place that is what I have to make do with. We don't all live in an elastic world.
Further to that the scaling of large single computations across cores is a costly and often pointless exercise.
Yeah, companies do that all the time. Spend 3 months of a good engineer's time to save a few thousand bucks in hardware because the budget allocation is fixed.
In the long run, I think companies that value human capital appropriately will win.
A lot of problems are going to require quite a bit of engineering to scale to 300 cores...
I mean, if you're just serving some simple webpage, it's easy to just throw servers at it. But if you want to implement say a distributed k-means, the algorithm is different than for the single-threaded case. Not everything is easily scalable...
Yeah, companies do that all the time. Spend hundreds of thousands of dollars on hosting and hardware costs to save a few hours of a programmer's time because they blindly believe in the completely unfounded "truth" you are parroting.
Some places are so inelastic that developers get hand-me-down laptops from sales people who couldn't sell, or when they buy a new machine from Dell they get one with two cores.
So... they pay you a salary, right? As long as they don't prevent it, you can take some of the money your company allocates to salary and reallocate it to buying a few cheap VMs.
I was working as an enumerator for the U.S. Census in the year 2000 and one of the people on my team realized that for 2 hours worth of pay she could buy office supplies that would save us all (and the government) 60 hours worth of work.
She was stressed because there wasn't any official channel for us to buy office supplies other than the stuff they sent us.
I told her to buy the office supplies and say that she worked another 2 hours; this was breaking the rules but this did not strike me as at all unethical.
Now, not long after the 2008 crunch I was getting pissed about how long builds took on my cheap laptop (on which I was running both the client and server sides of a complex app).
Getting a better machine from management was out of the question, but I liked other aspects of the job, and 2009 wasn't the best time to go job seeking. So I bought myself a top end desktop computer and three inexpensive monitors.
When I left that company they wanted to buy the machine off me so as to keep all the proprietary code and data on it, but as things worked out, the value of my own proprietary code and data on that machine was worth a lot to me so I kept it, and fortunately things never went to court.
This type of decision has risks (for instance, you don't want to be the guy who loses a machine with social security numbers on it and forces his employer to pay for credit monitoring for 70,000 people) but it can be the right thing to do sometimes.
I am surprised that they let you use your own machine. I am also surprised that they didn't get the code and data from you (or take you to court), as most employment contracts state that all work done by the employee is considered company property.
You're profit-sharing, right? So if you go around the company's self-limiting policies and make more profit by using your own salary, then it's a net win for you.
What if I'm distributing my code to thousands or millions of users who are going to be running it dozens of times a day? Suddenly, spending an hour to make it run just a few seconds faster looks like a very good trade-off.
Depending on how you value things, being able to spend fewer clock cycles may be more "environmentally responsible" in the long run, though data centers do run a pretty tight ship. ;)
You don't seriously think that, do you? People struggle with concurrency and parallelism all the time. For tons of problems we have no way to scale up with more CPU cores; it is an open research problem to find ways to do so. Pretending that you can just buy speed is a huge mistake that costs millions of dollars.
The number of places in most code where you're running a tight loop a jillion times is vanishingly small. In those cases you're usually better off writing the vast majority of the code in a language that increases developer productivity while writing the occasional C/C++ module when serious single-threaded crunching is required. This is why so many modern higher level languages offer C bindings. The world isn't all or nothing. And this thought isn't specific to Go. The main language could just as easily be Java, Python, PHP, or even (eek) Ruby.
I think a lot of people who are doing (successful) distributed web apps are learning that performance actually does matter, and that it can be painful to optimise poor architectural decisions.
People who 5 years ago were sermonising to me on premature optimisation are now telling me they're rewriting something in a faster language or how many millisecs they've saved using some new profiling snake oil they've just bought.
(whilst my background is in assembler/C, I now prefer to use prematurely optimised Ruby where possible just for the maintainability and the magic)
I think the lesson to learn is that you have to ensure you don't have barriers to optimization as well.
Knuth's original statement on premature optimization presupposed that the cost of optimizing later isn't really that much higher than optimizing now. I mean, obviously it's going to be somewhat harder - code is harder to read than to write, and fixing a performance failure is going to take longer than getting it right the first time. But still, that's not a humongous cost and there are so very many opportunities for optimization that chasing them without performance data is a waste of time.
But when you have an actual barrier to optimizing? Then you've got a real problem. If you're committed to a technology that has a hard performance ceiling and no easy way to break the slow parts out into another technology? That's a different problem.
It seems evident that Knuth's advice about premature optimization can be applied fairly freely when you're talking about code for an individual component, but it should be weighed carefully against the potential costs when you're thinking about large-scale architecture.
People who quote Knuth as aphorism are rarely as smart as Knuth. The optimization quote is great, but it's usually deployed as an appeal to authority to justify a preconceived, ideological decision -- God knows, I've done this.
Building software is hard, and it's not at all surprising (or reprehensible) that people will seek out shortcuts to winnow their decision space.
> The optimization quote is great, but it's usually deployed as an appeal to authority to justify a preconceived, ideological decision -- God knows, I've done this.
Most people I've known have used it as shorthand for "decisions based on optimization concerns require some justification that the optimization is important to the specific application and warrants the costs that come with it." It's not an appeal to authority (indeed, the authority behind it is almost never referenced, just the bare quote), and it's usually, IME, a defense against a preconceived, ideological decision. It works poorly to justify such a decision, because it doesn't carry much weight if the person proposing the optimization has any basis for claiming that the optimization isn't premature; as it shouldn't.
I think the key to that quote is the agile mindset: the thing you think you're building might not be the thing that will actually meet the customer's need. So regular validation of a "good enough" product will be a better use of time than making the first prototype screamingly fast.
But optimizing later can also be the smarter thing to do. Especially when you're in the early stages, you don't know what direction your product/company will take. Optimization is a happier problem to have than say, trying to find enough people to sustain your business.
With no disrespect intended to the OP, this is an interesting comment considering that he came from Google, which is one of the world's largest beneficiaries of optimized performance. (If there's one place where run time trumps human time...)
I find it funny that when I had to switch to C++ I found it slow compared to C and Object-Oriented Pascal.
There are also some trade-offs when switching languages. Some people would never make the move from C to C++, citing unnecessary complexity (e.g. Linus Torvalds).
This is essentially a question of having the right tool for the job. Switching from C++ to Go is probably a good decision in some cases and a bad one in others. But one should not remain so anchored in one language that one can't consider another, possibly better, option.
> "In the end, when I code in C++, it's because I believe my code will run faster"
Completely agreed, and exactly my point. I am not moving to Go because it is solving problems that I don't have.
Pike said: "That way of thinking just isn't the way Go operates. Zero cost isn't a goal, at least not zero CPU cost."
So Go's philosophy is to make some compromises for the sake of making the programmer's life easier. C++ is the other way around: there is a cost the programmer has to pay in order to have code abstractions without impacting performance.
I don't think anyone "loves" to code in C++, but people tend to like the performance that C++ provides and are willing to pay the cost of programming in a monster language that no single person fully understands.
I love coding in C++. The feeling of being so close to the metal that your actions directly control the machine is something that was lost immediately when I moved to Java. It's even more of a disconnect in higher-level languages.
I have sort of the opposite feeling: I love that in C++ I can create very nice abstractions while being fairly confident that the compiler will transform them into code with C-like performance. That is, as long as you know what to avoid (namely: heap allocations and unnecessary copying).
This is the thing. This is what gets lost. C++ was designed with the premise that the language allows you to use it both in a C-like context and a Java-like context.
The problem is that some of the baggage of the former interferes with the latter. C++ is danged ugly in some places in ways that could be expressed cleanly in higher-level languages.
That's what C++ coders want: C++ with modules and real generics and fast compile times and ways to cleanly implement and use garbage collection if you so choose. A language that lets you say incredibly complicated and detailed things, and then lets you abstract them away into a library or a class or some other unit of organization that lets another coder use your stuff without absorbing the complexity.
All these "C++ replacements" assume that the solution is to gut out all the C-like stuff. What they should be looking to do is giving developers clean alternatives to the C-like stuff, including ways for the hardcore developers to provide their own clean alternatives to the C-like stuff. Don't give us garbage collection, give us the tools to write libraries that provide clean garbage collection to a developer.
>"This is what gets lost. C++ was designed with the premise that the language allows you to use it both in a C-like context and a Java-like context."
I got lost with the "Java-like context" thing. If I am not mistaken, by the time C++ was designed there was no language with a GC other than Lisp dialects. Java would appear about 12 years later, so I am not sure what you mean by a Java-like context.

As for the design goals of C++, I think it was actually to create a superset of C that provides the abstraction facilities C doesn't have (starting with OOP, but by now much more than that).
Exactly. That is what is lost in most of these debates. In order to move forward in the design of systems programming languages, we must first recognize what makes C++ great, in spite of its deep flaws.
True, but append "while retaining access to my favourite high-level abstractions like classes, generic containers and algorithms" and now it is no longer true.
That's the difference: in C++ you get the abstractions -- just like in Java -- but you don't pay for them.
For the record: I used Java extensively in my previous job. It's a language I love, and I do think there are many areas where its speed is sufficient. But I'm also quite fond of C++, and there is this nice feeling of "woah, it doesn't really get faster than this, and it's still readable and elegant!" :)
>"That's the difference: in C++ you get the abstractions -- just like in Java -- but you don't pay for them"
That's not entirely true: you do pay for them, not in performance but in productivity. C++ is a complex language; even Alexandrescu touched on this subject at GoingNative 2013, where he said something like "C++ is a language for experts."
* Performance doesn't matter for a large set of systems we write.
* Performance is far more a product of the programmer than of the language you use (within reason, naturally: don't do video decoders in Python :))
* Large-scale systems where there is no single tight loop anywhere are way different beasts. Their performance stems largely from fast RTT in the edit-compile-test cycle and programmer productivity. Not from fast native code execution.
* To some problems, latency or sustainability matters more than throughput.
> I find it amusing that there isn't a single word about runtime performance in the whole essay.
It most certainly is. In fact, it's the answer to the opening question, "Why does Go, a language designed from the ground up for what C++ is used for, not attract more C++ programmers?" You might not have noticed it because he only refers to the performance issue as a "zero (CPU) cost" mentality.
> C++ is about having it all there at your fingertips. I found this quote on a C++11 FAQ:
>
> The range of abstractions that C++ can express elegantly, flexibly, and at zero costs compared to hand-crafted specialized code has greatly increased.
>
> That way of thinking just isn't the way Go operates. Zero cost isn't a goal, at least not zero CPU cost. Go's claim is that minimizing programmer effort is a more important consideration.
There is a lot of crap in C++ which doesn't help with performance at all: class hierarchy with all the inheritance/friends/virtual/constructors/destructors, godawful syntax, template model, exception handling, 101 ways to do the same thing, lack of package/module handling etc.
Rob Pike addresses those in the article.
Yes, Go is slower, but it's not slower because it fixes those mistakes; it's slower because it introduces different features suited to its niche (GC, reflection, etc.).
Rust takes a similar approach to avoiding C++'s design mistakes and doesn't sacrifice performance in doing so.
Those mistakes (mainly the whole "object model" thing) are what he is pointing at in the article. Pointing out Go's performance isn't really an argument against his point.
I tested both extensively lately and the performance was exactly the same (under the Intel compiler and Visual Studio), so those compilers have probably found a way to optimize it. Both are much slower than a hand-coded version anyway, so it doesn't really matter (link to an implementation which beats standard qsort/std::sort performance by 2x/3x, at least on my data: http://www.ucw.cz/libucw/doc/sort.html).
If your only, or even main, criterion were performance, you would write in assembly language. The fact that you don't means that Rob probably nails it with this comment:
"C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way."
That's absurd. The law of diminishing returns comes into play, and in most cases assembly is just not worth it (whereas C or C++ is). Rust seems like another interesting take on that sweet spot, but in Go this seems to have been an afterthought (though to be fair, the performance benefits of C/C++ are often cargo-culted even when that kind of performance is completely irrelevant, so Pike still has some valid points).
Someone mentioned a while ago how a common pattern of comments on programming forums is to take the most literal and uncharitable interpretation of a post, and reply to explain how this extreme view is mistaken.
I've never seen anyone defend the idea that writing something in C++ will necessarily make it high-performance and I would be very, very surprised if that's what the OP meant.
What a coincidence: I was thinking about this very thing this A.M. I was conceptualizing a flowchart. Something like:
1. Observation.
2. Massive overgeneralization of observation and posting to the Internet.
3. Refutation of massive overgeneralization.
4. Insistence on narrow range of circumstances in which observation is true.
5. Announcement of extraordinary exceptions.
6. Anecdote in which even that exception was inadequate.
7. Call for advanced structural frameworks to encompass vastly differing perspectives.
8. Recollection of failed attempt at structure in 1976 (optional: suggestion to live in geodesic homes).
9. Platitudes about how the details don't matter anyway.
10. Violent exasperation that details won't matter, including extreme hypothetical circumstances.
11. Shifting of blame to political figures and/or youth and their gadgets.
Well-written code in any language will almost always outperform poorly written code in another language. Python being 50x slower than C++ doesn't matter if you are using an exponential implementation of a linear problem.
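To make that concrete (this is my own toy illustration in Go, not the commenter's code): a naive recursive Fibonacci does exponentially many calls, while the two-variable loop is linear. No constant-factor language speedup rescues the recursive version for large n.

```go
package main

import "fmt"

// fibNaive recomputes the same subproblems over and over:
// roughly phi^n calls for input n.
func fibNaive(n int) int {
	if n < 2 {
		return n
	}
	return fibNaive(n-1) + fibNaive(n-2)
}

// fibLinear keeps two running values: O(n) work, no recursion.
func fibLinear(n int) int {
	a, b := 0, 1
	for i := 0; i < n; i++ {
		a, b = b, a+b
	}
	return a
}

func main() {
	// Same answer (832040), wildly different cost:
	// ~1.6 million calls vs. 30 loop iterations.
	fmt.Println(fibNaive(30), fibLinear(30))
}
```

A 50x-faster language buys you less than six doublings of n before the exponential version eats the advantage.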
The thing, IME, is that someone writing code in C++ is more likely to be aware that they are using an approach with suboptimal O. In a high-level language you can just as easily write a routine with bad O, especially if you don't understand the PL's underlying data structures. So then you get bad algorithm performance + bad Python performance.
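The "underlying data structures" point isn't Python-specific; here is a hedged sketch in Go (function names invented for illustration): a linear scan over a slice looks just as innocent as a hash-map lookup, but inside a loop it turns O(m) queries into O(n·m) work.

```go
package main

import "fmt"

// containsSlice scans a slice: O(n) per lookup. Called m times in
// a loop, this is O(n*m) without the code ever looking "slow".
func containsSlice(xs []string, k string) bool {
	for _, x := range xs {
		if x == k {
			return true
		}
	}
	return false
}

// buildSet copies the slice into a hash map once; subsequent
// lookups are O(1) amortized.
func buildSet(xs []string) map[string]struct{} {
	set := make(map[string]struct{}, len(xs))
	for _, x := range xs {
		set[x] = struct{}{}
	}
	return set
}

func main() {
	words := []string{"go", "cpp", "python"}
	set := buildSet(words)
	_, ok := set["cpp"]
	fmt.Println(containsSlice(words, "cpp"), ok) // true true
}
```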
That's what I meant. For most cases, Go's performance advantage over Python, or even over some messy C++, offers little benefit. But in the rare cases where we need to push the very limit of the hardware, C++/C/assembly is preferable to Go.
You should switch to assembly then. You can get even better performance than C++, and hey, if you don't mind sacrificing your own time to make the program run faster, it seems like the right thing to do.
It's a trade-off. Assembly is an order of magnitude less productive, and it doesn't offer significant benefits. Outside of your critical path, you might actually hurt your runtime performance unless you really, really know your instruction set. When people say 'C trades speed for ease', they don't mean they want to design an ASIC to make their operation blazing fast. They mean C is at a sweet spot of developer productivity and execution time, for their specific application.
As much as the original statement was hyperbole, people often overlook that the arguments they make to explain their choices would, if actually followed, lead to different results than they expect.
If your work consists of writing extremely demanding code WRT performance, then it probably is useful to stop and think for certain projects, or portions of projects, whether it's worth going to assembly.
Similarly, if you find any utility in going lower in the language stack, it might be worth going higher for some projects or portions of projects.
Whether you actually use anything else is up to you, but blind adherence to a specific language is limiting, and may well be detrimental to the work you are trying to accomplish.
(you in this context is general, not applied to the parent specifically)
I was being snarky, but not trolling. I was very much trying to get at the same point you're getting at.
There is a trade-off between developer productivity and runtime performance. The author said that Go is appealing to people coming from a Python or Ruby background, which is a strong indicator that Go has fundamentally gotten the developer productivity side of the equation right.
So we should try to think about how the equation balances here. Is Go an order of magnitude slower than C++? I don't know, though I think it's not. My impression (which might be wrong, so please correct me if necessary) is that Go programs run approximately as fast as C++ programs written using C++'s automatic memory-management features like smart pointers. Is Go an order of magnitude faster to develop in than C++? Again, I'm not sure, but my perception is that it probably is. Python devs wouldn't be interested in it otherwise.
On average, C++ is about three times faster than Go across all the tests. These benchmarks aren't definitive, but you can assume that in general the difference in performance is much less than an order of magnitude.
Python 3 is on average 20 times slower than Go. And I love Python, but Go isn't that much harder to use. It's quite a nice language, actually.
So, the exodus from Python to Go is very understandable, you gain a lot of performance without sacrificing much. And I think there's room for performance improvements in Go, perhaps in a couple of years C++ developers will make the transition. But I think that the real C++ killer is Rust. Time will tell.
Not at all. Go gives (a lot of) the advantages of Python without many of the costs. I would say that Go dominates Python, giving the advantages of that language with few of the costs. Anything you can do in Python can relatively easily be done in Go. Replacing dynamic typing with reflection seems to work pretty well.
It does not similarly dominate C++. C/C++ cannot be replaced by something like go for the very simple reason that it wouldn't work. Go itself depends on a C runtime and a C compiler written in C, even ignoring the operating system it has to run on (which also cannot run without a C/C++/assembly core). The same goes for languages like Java, Erlang, Haskell, ... (Java is particularly bad, since it has a huge complicated runtime which is almost completely C++)
After all, who will garbage-collect the garbage collectors? (This is a simplification; the garbage collector is a major problem, but not the only one. There are dozens.)
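The "dynamic typing via reflection" claim a few comments up can be sketched like this (my example, not the commenter's): the reflect package lets one function inspect arbitrary values at runtime, the way a dynamically typed language would "just handle" any argument.

```go
package main

import (
	"fmt"
	"reflect"
)

// describe inspects an arbitrary value at runtime and reports
// its shape, without compile-time knowledge of the type.
func describe(v interface{}) string {
	t := reflect.TypeOf(v)
	val := reflect.ValueOf(v)
	switch t.Kind() {
	case reflect.Slice:
		return fmt.Sprintf("slice of %s, len %d", t.Elem(), val.Len())
	case reflect.Map:
		return fmt.Sprintf("map %s -> %s", t.Key(), t.Elem())
	default:
		return t.Kind().String()
	}
}

func main() {
	fmt.Println(describe([]int{1, 2, 3}))             // slice of int, len 3
	fmt.Println(describe(map[string]bool{"a": true})) // map string -> bool
	fmt.Println(describe(42))                         // int
}
```

The cost of this flexibility is the same one Python pays, paid only where you opt into it: reflective paths skip static checking and are slower than direct calls.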
* I read a lot of compiler-produced assembly -- these days it's often surprisingly clever. I don't think most developers could beat them with hand-coding even if they were willing to spend 10x-100x the time in development and debugging.
* As other people on the thread have mentioned: it's all about the memory. In a world where a cache miss is going to cost ~100 cycles the actual instruction stream isn't even your biggest concern -- programming for performance increasingly means counting cache lines. I can do that just as well with C++ code as I could with assembly, at least as long as the compiler provides things like prefetch primitives, etc.
There are still some super fast-path areas where expert assembly can beat a compiler (crypto algorithms, ...) but it just isn't attractive for most projects no matter how much developer time you're willing to spend.
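The "counting cache lines" point above can be demonstrated in any compiled language; here's a rough sketch in Go (my own example, with invented function names): the two functions below do identical arithmetic, but one walks memory sequentially and the other strides across rows.

```go
package main

import (
	"fmt"
	"time"
)

// sumRowMajor walks memory sequentially: each 64-byte cache line
// is consumed in full before the next one is fetched.
func sumRowMajor(m [][]int32) int64 {
	var s int64
	for i := range m {
		for j := range m[i] {
			s += int64(m[i][j])
		}
	}
	return s
}

// sumColMajor strides down the columns: each access lands on a
// different cache line, so most of every fetched line is wasted.
func sumColMajor(m [][]int32) int64 {
	var s int64
	for j := range m[0] {
		for i := range m {
			s += int64(m[i][j])
		}
	}
	return s
}

func main() {
	const n = 4096
	m := make([][]int32, n)
	for i := range m {
		m[i] = make([]int32, n)
	}
	t0 := time.Now()
	sumRowMajor(m)
	t1 := time.Now()
	sumColMajor(m)
	t2 := time.Now()
	// Same instruction count; the column-order walk is typically
	// several times slower purely from cache misses.
	fmt.Println("row:", t1.Sub(t0), "col:", t2.Sub(t1))
}
```

The exact ratio depends on the machine, but the gap comes entirely from memory-access order, not from the instruction stream, which is the commenter's point.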
Just depends on the problem -- people do use assembly all over the place when the performance speedup is warranted, for example video encoding (e.g. x264).
Even Go uses assembly in places -- for example, bytes.IndexByte.
The point is that C++ still has real performance benefits over Go, and it's not just used out of cussedness or backwards thinking. (Though Go can be pretty close. I think it's a shame they didn't go with a simple refcounting + weak-references approach to GC.)
That simply isn't the case any more. GCC and LLVM can, for the majority of cases, produce far faster assembly than the average assembly programmer. Naive C or C++ is faster than naive Go, but naive assembly is probably slower than even naive Go (the Go compiler produces optimized assembly, just not as well optimized as that produced by C compilers).
To restate: C++ programmers use C++ because they are trapped in a corner of the world where they must suffer for speed. Go's architects think they've found a band of problems where you just don't have to suffer so much. "What, me? I still need to suffer?"
I strongly suspect that Rob Pike understands the importance of performance.
a) he is old enough to respect resources
b) Google programs need performance