newman314's comments | Hacker News

It appears that Gurman has been the one pumping the timeline. He's now clarified that Apple has only ever said 2026 with nothing beyond that.

You can use one of these as a local personal WWVB station, among other things.

See https://github.com/hzeller/txtempus and https://github.com/GOTO-GOSUB/Txtempus-Passive-Antenna
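
(Rough sketch of how it's run once built per the txtempus README on a Raspberry Pi; the flag is from memory, so check --help:)

  # Emit a WWVB signal so nearby radio-controlled clocks/watches can sync
  # (-s selects the time service: WWVB, DCF77, JJY, MSF)
  sudo ./txtempus -v -s WWVB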


Very neat project! Is it yours?

Nope, but a cool project nonetheless.

Quite. I love the aesthetic of the watch stand. The exposed lacquered wire makes it look really exotic, and in general the execution is top notch. It is really tempting to throw stuff like this together and call it a day when it works (that's more my style...), so I have a lot of respect for people who can finish these things to the point where they not only work but also look good.

There is no support for ed25519 host keys (confirmed using ssh-audit). Would be nice to have though.

As an aside, you should use ssh-audit to get recommendations on which less-than-ideal options/configs to disable.
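
For example (hostname is a placeholder; ssh-audit is a pip-installable tool):

  # Audit a host's SSH setup and print hardening recommendations
  pip install ssh-audit
  ssh-audit example.com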



I currently use Banktivity which is OK. Would love to hear from any others that have used Banktivity and migrated to something else. Ideally, there should be OFX support.


Missed a chance to call this swiftamp instead and avoid namespace collision.


Or swamp to be shorter


machamp


Isn't that a Pokémon?


Seems fishy


Agreed. I also wonder why they chose to test against a Mac Studio with only 64GB instead of 128GB.


Hi, author here. I crowd-sourced the devices for benchmarking from my friends. It just happened that one of my friends has this device.


FYI you should have used llama.cpp to do the benchmarks. It performs almost 20x faster than ollama for the gpt-oss-120b model. Here are some sample results on my Spark:

  ggml_cuda_init: found 1 CUDA devices:
    Device 0: NVIDIA GB10, compute capability 12.1, VMM: yes
  | model                          |       size |     params | backend    | ngl | n_ubatch | fa |            test |                  t/s |
  | ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
  | gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |          pp4096 |       3564.31 ± 9.91 |
  | gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |            tg32 |         53.93 ± 1.71 |
  | gpt-oss 120B MXFP4 MoE         |  59.02 GiB |   116.83 B | CUDA       |  99 |     2048 |  1 |          pp4096 |      1792.32 ± 34.74 |
  | gpt-oss 120B MXFP4 MoE         |  59.02 GiB |   116.83 B | CUDA       |  99 |     2048 |  1 |            tg32 |         38.54 ± 3.10 |
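
(For anyone reproducing this: it's standard llama-bench output, and an invocation roughly like the one below should give a comparable table. Treat it as a sketch; the model path is a placeholder and the flags mirror the columns above rather than being an exact copy of the command used.)

  # -ngl/-ub/-fa/-p/-n correspond to the ngl, n_ubatch, fa, pp4096 and tg32 columns
  ./llama-bench -m gpt-oss-120b-mxfp4.gguf -ngl 99 -ub 2048 -fa 1 -p 4096 -n 32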


Is this the full-weight model or a quantized version? The GGUFs distributed on Hugging Face labeled as MXFP4 quantization have layers quantized to int8 (q8_0) instead of bf16, as suggested by OpenAI.

Example looking at blk.0.attn_k.weight, it's q8_0 amongst other layers:

https://huggingface.co/ggml-org/gpt-oss-20b-GGUF/tree/main?s...

Example looking at the same weight on Ollama is BF16:

https://ollama.com/library/gpt-oss:20b/blobs/e7b273f96360
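
(If you want to check this locally instead of through the web viewers, something like the following should list per-tensor dtypes; I'm going from memory on the tooling, so treat it as a sketch and the file name as a placeholder:)

  # gguf-dump ships with the `gguf` pip package; grep for the layer in question
  pip install gguf
  gguf-dump gpt-oss-20b-mxfp4.gguf | grep attn_k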


I see! Do you know what's causing the slowdown for ollama? They should be using the same backend...


Dude, ggerganov is the creator of llama.cpp. Kind of a legend. And of course he is right, you should've used llama.cpp.

Or you can just ask the ollama people about the ollama problems. Ollama is (or was) just a Go wrapper around llama.cpp.


Was. They've been diverging.


Now this looks much more interesting! Is the top one input tokens and the second one output tokens?

So 38.54 t/s on 120B? Have you tested filling the context too?


Yes, I provided detailed numbers here: https://github.com/ggml-org/llama.cpp/discussions/16578
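
(Roughly, recent llama-bench builds can also benchmark at a given KV-cache depth; the flag name here is from memory and the model path is a placeholder, so check --help on a recent build:)

  # Measure generation speed with the context pre-filled to ~32k tokens
  ./llama-bench -m gpt-oss-120b-mxfp4.gguf -ngl 99 -fa 1 -n 32 -d 32768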


Makes sense you have one of the boxes. What's your take on it? [Respecting any NDAs/etc/etc of course]


Curious how this compares to running on a Mac.


TTFT on a Mac is terrible and only increases as the context grows; that's why many are selling their M3 Ultra 512GB.


So so many… eBay search shows only 15 results, 6 of them being ads for new systems…

https://www.ebay.com/sch/i.html?_nkw=mac+studio+m3+ultra+512...


Does anyone know what was used to produce the graphs?


Do you mean charts? If so, it's Datawrapper: https://www.datawrapper.de/charts

One of the quite expensive paid plans, as the free one has to have "Created with Datawrapper" attribution at the bottom. I would guess they've vibe-coded their way to a premium version without paying, as the alternative is definitely outside individual people's budgets (>$500/month).


Inspecting the page, I can see some classes "dw-chart" so I looked it up and got to this: https://www.datawrapper.de/charts. Looks a bit different on the page, but I think that's it.


It is indeed Datawrapper, as other posters have said. It works well for adding interactivity to a static blog built with Hugo.


I saw a TikTok of someone saying that farmers are not stupid (due to the wide variety of skills to successfully farm) and were just betting on Trump not actually going through with tariffs.

It's hard to have any sympathy for such cynical behavior while simultaneously asking for handouts. Especially since the same people probably voted against others getting social services.


It also hurts when I drop the iPad mini on my face. In fact, I was considering getting a Pro Max to replace both an iPhone Pro and iPad mini combo but figured it might be too big of a compromise.

I wonder if anyone has successfully gone down this path.

