Go performance from version 1.0 to 1.22 (benhoyt.com)
57 points by ingve on April 13, 2024 | 18 comments


Go seems like a really great balance between speed, ease of code and safety.

I keep trying to get into it, but I keep bumping into half finished/abandoned libraries for basically anything I try to do.

Tried to use it for machine learning: couldn't find a Parquet library that supported compression; after working around that, couldn't find any good lib for dataframes; and even then, couldn't find anything good enough for gradient boosting.

Tried to use it to manage some GPG-encrypted utils I use for work and create a TUI. Well, the GPG lib was outdated and wouldn't do what was needed.

I'm eventually going to have another go at Go (pun intended), but I'll be more wary this time around.


To a certain extent, you’re trying to do things that are not strengths for the language ecosystem.

The available libs and the language's intended purpose line up nicely when you go to create little system utilities, tools, backends, etc. Outside of that, you generally have to define your own data structures and implement your own algorithms.

For the above activities, I’d go to Python. To create a tool from the ground up, Go.


No bias against Python, but it's an odd suggestion in a thread about performance. I'd imagine devs springing for Go aren't picking it because of its massive ecosystem, but because they want the performance it offers. Especially devs commenting on a thread about its performance benefits.


EdgeDB uses Python; you can have performant C code wrapped in Python.


Yeah, but then it is C code wrapped in Python, not Python.


I don't have a ton of Go experience (intentionally), but I don't find it very good for ease of code or safety.

It's a simple language to a fault, but I would not call it easy. There are so many footguns in the language itself and the standard library.

I just reviewed a PR that added a bunch of `defer response.Body.Close()` calls after http requests to fix leaks. I don't know why Go is allergic to RAII, but I hate it.
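
To make the pattern concrete, here's a minimal, hypothetical sketch of the idiom (not the code from that PR): every HTTP response body has to be closed explicitly, and forgetting the defer leaks the connection.

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // fetch returns the body of a GET request. The response body must be
    // closed explicitly; there's no RAII-style destructor to do it for you.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close() // easy to forget, and the compiler won't complain

        return io.ReadAll(resp.Body)
    }

    func main() {
        body, err := fetch("https://example.com")
        if err != nil {
            panic(err)
        }
        fmt.Println(len(body), "bytes")
    }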

I also find it very hard to read. There's so much boilerplate sprinkled throughout everything, it's hard to follow what code is actually doing.

While I am very in favor of a good static type system, I think dynamically typed languages like Ruby or Python win over Go by any metric except speed.


I think those are worthwhile criticisms ... right up until you said "Ruby or Python win over Go by any metric except speed".

As someone who has written a ton of Python and a ton of Go (and likes both languages), I still prefer Python for small scripts due to its terseness and beautiful syntax.

Side note: Go's static typing is so much better than Python's type annotations. Python typing is difficult to use and a mess: it's constantly changing ("improving"), Pyright and MyPy never agree, Pyright doesn't agree with itself between versions, and much more. The bolted-on, late-in-the-game nature of Python's typing really shows.

One place where I have a real side-by-side comparison of the two languages solving the same problem is: I wrote a streaming websocket client with both Go and Python. Go's was so much easier to reason about due to the simplicity and ubiquity of io.Reader and io.Writer. Compare that to the mess that is Python "file-like objects" (https://docs.python.org/3/library/io.html). Should I use BufferedIOBase, or TextIO, or maybe RawIOBase, and what do I override in my subclass? It was quite painful.
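
As a rough illustration of that point (a made-up sketch, not the actual client): a function written against io.Writer works the same whether the destination is a socket, a file, an in-memory buffer, or a test double, with no decisions about which base class to subclass.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"
    )

    // writeFrames only depends on io.Writer, so any writer will do.
    func writeFrames(w io.Writer, frames []string) error {
        for _, f := range frames {
            if _, err := io.WriteString(w, f+"\n"); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        var buf bytes.Buffer
        _ = writeFrames(&buf, []string{"oh hi", "bye then"}) // in-memory buffer
        _ = writeFrames(os.Stdout, []string{"to stdout"})    // standard output
        fmt.Print(buf.String())
    }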

My take is that, in addition to its speed, Go wins on its type system, I/O interfaces, concurrency, package management, and test/benchmark tooling. Python wins in its beautiful syntax, [list|dict|set|generator] comprehensions, and quantity of 3rd party libraries.


If you don't count learning curve (as it's a static "ease of use" cost), Haskell also fits the bill.

I've used both Haskell and Go extensively and tbh they are more similar than different. They each could afford to absorb the other's strengths.


Which Go strengths would you recommend Haskell absorb?


There's nothing quite like interfaces in Haskell.

Or rather, there's a bunch of ways to encode them. But even so, the Go structural typing there is nice.

In industry, I have found this means Haskellers (especially beginners who just do that big Haskell book) don't program against interfaces. That isn't a scalable way to write a large software project: it results in a lot of unnecessary coupling, which in turn means compile times don't stay fast.
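
(For readers who haven't used Go, a tiny hypothetical sketch of what its structural typing buys you: the consumer declares the interface it needs, and any type with matching methods satisfies it implicitly, with no instance or implements declaration.)

    package main

    import "fmt"

    // Store is defined by the consumer; nothing has to declare that it implements it.
    type Store interface {
        Get(key string) (string, bool)
    }

    // mapStore satisfies Store purely by having a matching method set.
    type mapStore map[string]string

    func (m mapStore) Get(key string) (string, bool) {
        v, ok := m[key]
        return v, ok
    }

    func greet(s Store, key string) string {
        if name, ok := s.Get(key); ok {
            return "hello, " + name
        }
        return "hello, stranger"
    }

    func main() {
        fmt.Println(greet(mapStore{"id1": "gopher"}, "id1"))
    }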

The Go GC is also pretty nice as far as latency-focused ones go. But Haskell already has a latency-oriented GC.


> There's nothing quite like interfaces in Haskell.

What are the differences you care about between programming against an interface in go and programming against a typeclass in Haskell?

From my industrial Haskell experience, programming against a typeclass is natural and common, and the differences that come to mind (no need to declare instances, pervasive ability to fallibly downcast) don't seem to support a claim that Haskellers "don't program against interfaces", except in the useless sense (which I don't think you intend) that what they program against aren't exactly Go interfaces.


Hm well there just isn't a simple way to do it? Your equivalent options are

- Record of functions

- mtl classes

- extensible effects

- other weird hand-rolled things

I work in a normal boring "ReaderT IO" backend-y codebase and it's like pulling teeth to write neat code against interfaces. Doubly so to get less experienced Haskellers to do it.

Tbh if we used proper extensible effects (any of them), that would qualify and accomplish what I'm talking about. But even seasoned "simple" Haskellers push back on that for being too "fancy."


I think there's a somewhat common failing in Haskell that because a larger vision is possible, smaller solutions get ignored.

Extensible effects (or doing tricky things with a monad stack, or...) promise to let you intercept (approximately) everything your function might do. Awesome. That's probably a thing you'd like in Go when you say a function accepts an io.Writer.

It's not what you get in Go when you say your function accepts an io.Writer.

If your code mostly operates in `ReaderT ... IO`, then the simplest typeclass that's equivalent to io.Writer is a typeclass with a function that operates in that same `ReaderT ... IO`.


The problem with the TC approach is that substituting implementations for testing ends up being annoying compared to Go. And substituting multiple different interfaces doesn't compose well (this is essentially the mtl problem).

I tend to just record-of-functions everything, which is essentially Go interfaces. It does work very well! As function args usually do :)

But those aren't espoused heavily enough by "authorities" so I find less experienced Haskellers don't lean into them or feel comfortable working with them.


> And substituting multiple different interfaces doesn't compose well (this is essentially the mtl problem).

I think you're talking here about abstracting over the underlying monad, with a different implementation for testing than for production. That can work well[1], but it's not what I was talking about.

With my Writer (nb: io., not Control.Monad.) example, how does Haskell make it harder? I'm not talking about replacing the underlying monad, but about defining an interface in terms of that underlying monad.

Fleshing it out, something like:

    {-# LANGUAGE FlexibleInstances #-}

    import Control.Concurrent.MVar (MVar, modifyMVar_, newMVar, takeMVar)
    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Reader (ReaderT, runReaderT)
    import Data.ByteString (ByteString, hPut)
    import Data.ByteString.Char8 (pack)
    import System.IO (Handle, stdout)

    main :: IO ()
    main = flip runReaderT () $ do
        -- with Handle
        writeStrings stdout
  
        -- with IOVar
        var <- liftIO $ newMVar ([] :: [ByteString])
        writeStrings var
        liftIO $ print =<< takeMVar var

    writeStrings :: Writer w => w -> ReaderT r IO ()
    writeStrings w = do
        write w $ pack "oh hi\n"
        write w $ pack "uh\n"
        write w $ pack "bye then\n"

    class Writer w where
        write :: w -> ByteString -> ReaderT r IO ()

    instance Writer Handle where
        write w bs = liftIO $ hPut w bs
    
    instance Writer (MVar [ByteString]) where
        write w bs = liftIO $ modifyMVar_ w $ \ bss -> pure (bs:bss)



[1] A digression: Taking that approach, I think it's generally most comfortable to use a unique base type for each test (or natural clusters of them) and implement just the interfaces you need with an eye to the particular tests. The biggest problem I've seen is a tendency to push too much into that environment and thereby make too much implicit, which can make it hard to tell what a given piece of code actually has access to, and lead to things like the confused deputy problem.


If you like record-of-functions then you might like my new effect library Bluefin, where effects are passed around at the value level. Here's the documentation about how you use record-of-functions:

https://hackage.haskell.org/package/bluefin-0.0.4.2/docs/Blu...


Surprised the recent versions (1.18+) didn't see a jump in improvement, but that might explain the sumloop improvement for 1.19.

I checked: they use a switch on opcodes in vm.go, so I would expect a recent improvement, but probably only <5%. I didn't look closely enough to see whether AWK is one of those languages where instruction dispatch matters less (like how APL tends to avoid the issue since array ops keep dispatch out of tight loops, or how Python avoids instruction-dispatch overhead when using numpy).

Indeed, they have a comment saying an array of functions is faster for older versions of Go: https://github.com/benhoyt/goawk/blob/158232a76856e12d680a53...

For VMs, Go had a problem with large switch statements: it would always use binary search instead of a jump table. This caused gopher-lua and go-lua to both take the route of dispatching through an array of functions instead.

A couple years ago this was fixed: https://go-review.googlesource.com/c/go/+/357330

I measured a small perf improvement switching gopher-lua to switch: https://github.com/yuin/gopher-lua/pull/479

edit: looking closer, I'm not sure `compiler.CallBuiltin` is doing them a favor anymore. They seem to have a lot of double-dispatch operations (like `compiler.AugAssignField`), which would be less impactful with a binary-search switch.
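
For anyone who hasn't written a bytecode loop, here's a toy, hypothetical sketch of the two dispatch styles being discussed (not GoAWK's or gopher-lua's actual code): a switch over opcodes, which the compiler can now turn into a jump table when the switch is large and dense, versus an explicit array of handler functions.

    package main

    import "fmt"

    type op byte

    const (
        opPush op = iota // push the constant 1
        opAdd            // add the top two stack values
        opPrint          // print the top of the stack
    )

    // runSwitch dispatches with a switch statement. Since the compiler change
    // linked above, large dense switches (unlike this toy one) can compile to
    // a jump table rather than a binary search.
    func runSwitch(code []op, stack []int) {
        for _, o := range code {
            switch o {
            case opPush:
                stack = append(stack, 1)
            case opAdd:
                n := len(stack)
                stack = append(stack[:n-2], stack[n-2]+stack[n-1])
            case opPrint:
                fmt.Println(stack[len(stack)-1])
            }
        }
    }

    // handlers dispatches through an array of functions, the workaround
    // gopher-lua and go-lua used while switches were binary-searched.
    var handlers = [...]func([]int) []int{
        opPush:  func(s []int) []int { return append(s, 1) },
        opAdd:   func(s []int) []int { n := len(s); return append(s[:n-2], s[n-2]+s[n-1]) },
        opPrint: func(s []int) []int { fmt.Println(s[len(s)-1]); return s },
    }

    func runTable(code []op, stack []int) {
        for _, o := range code {
            stack = handlers[o](stack)
        }
    }

    func main() {
        prog := []op{opPush, opPush, opAdd, opPrint} // computes and prints 1+1
        runSwitch(prog, nil)
        runTable(prog, nil)
    }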


Performance of the reference implementation, not the language.

It would have been interesting to have gccgo as a comparison point, even though it only supports up to 1.18.



