
I don't understand how you're getting <2s for that awk result. I'm testing on slightly older hardware; for example, I get 4.6s and 11.9s for the optimized and simple Go versions taken from the git repo.

But then when I run the shell pipeline, I get:

  # time cat kjvbible_x100.txt | tr "[:upper:] " "[:lower:]\n" | awk '{count[$1]++} END {for (word in count) print count[word], word}' | sort -hr > results2.txt

  real    0m23.174s
  user    0m23.309s
  sys     0m1.234s

So my result is 10x slower than yours.
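For what it's worth, one variant I haven't tried yet: locale-aware [:upper:]/[:lower:] classes and locale collation can dominate the tr and sort time, so pinning LC_ALL=C is often a big win. A sketch against a made-up sample file (not the real kjvbible_x100.txt data), using sort -nr since the counts here are plain numbers:

```shell
# Hypothetical sample input just to show the pipeline's shape;
# the real benchmark uses kjvbible_x100.txt.
printf 'The quick brown fox the QUICK fox\n' > sample.txt

# Same pipeline, but with the C locale pinned so tr and sort skip
# locale-aware character classes and collation, and reading the
# file directly instead of piping it through cat.
LC_ALL=C tr '[:upper:] ' '[:lower:]\n' < sample.txt \
  | awk '{count[$1]++} END {for (word in count) print count[word], word}' \
  | LC_ALL=C sort -nr
```

No idea how much it helps on your machine, but on glibc systems the C locale alone has sped up sort severalfold for me.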

What are you running this on and where do I get one?



Oh gosh, sorry to burst your bubble but it's a 2011 Mac Mini :-P

  2.3 GHz Intel Core i5
  8 GB 1333 MHz DDR3
  Intel HD Graphics 3000 512 MB
  macOS High Sierra 10.13.6 (17G14042)
  512 GB PLEXTOR PX-512M5Pro SSD (Get Info says I installed it July 2, 2011 but it might be a clone of another drive)
<rant>

I really like it, but I'll probably have to sell it because of various software failures: sometimes one of my displays won't turn on, or goes black, and I have to restart. That bug seems to be fixed on newer macOS versions, like the one on the Intel MacBook Pro I use for work, but Apple artificially sunsets its hardware by preventing newer versions of macOS from being installed and by not back-porting bug fixes to previous releases. Since pretty much all computers today are Turing-complete, that feels... disingenuous.

Computers haven't gotten appreciably faster for roughly 15 years, since R&D funding shifted to mobile in 2007 and Moore's Law ended. All that matters today is whether you're on an SSD and how wide the memory bus is, since raw speed there hasn't changed much either, just latency. And Apple's not the only one treading water. PCs often suffer from mismatched hardware, so maybe an Intel i9 gets installed on a logic board with a memory bus too slow to feed it. I built a gaming PC a few years back and may have inadvertently underpowered it by putting most of the budget into the RTX 2070. Since video cards can't run the everyday workloads we're discussing, I mostly consider them a waste of time and mourn what might have been had CPUs kept improving instead.

Apple's Arm M1 is a logical progression from Intel, but I can't really endorse it, since they chose a relatively complex architecture where a big dumb array of cores would have been more scalable. If some indie brand comes along and builds one of the 1000+-core CPUs I've blabbered on about, I can't say I'll have much sympathy for the current big players.

Due to all of that, I perceived computers in 2010 as roughly 1,000 times slower than they could/should have been had they kept up with Moore's Law, and computers in 2020 as roughly 1,000,000 times slower (the ratio of GPU to CPU FLOPS, for example). It doesn't help that stuff like Spotlight and Safari eagerly takes 100+% CPU, or that basically all PCs are bogged down with either spyware or the daemons that supposedly find and remove spyware (thank you, M$). Or that we don't have the network computing SPARC had in the 1990s, where all of the computers on the LAN were seamlessly available as additional cores. Just slow on top of slow on top of slow under surveillance capitalism, yay!

</rant>



