Hacker News | tgrowazay's comments

> Their program offers far less money for the old phone than selling it used on ebay. Why would anyone use it?

It sets the price floor and provides liquidity, so the phone doesn’t go into a trash bin instead.


> You think the billionaires are going to allow themselves to be taxed heavily enough to support UBI

They will have no choice. The proletariat must not be hungry and agitated. Free legal MJ for everyone!


A gramme a day...

Keeps the doctor away?

It’s from a book called Brave New World.

If you want some light reading: 1984, Brave New World, and Atlas Shrugged will mostly get you caught up on current events.


Something is up globally.

VAG sold 71 Audi Q4 e-trons in the whole of Q4 in the US. Only three Q8 e-trons. 220 Q6s and 248 VW ID.4s.

The best-selling VAG EV for Q4 was the Porsche Taycan at 1,672 cars.

Total US EV sales for Q4 across all manufacturers were 234,171.


Audi stopped Q8 e-tron production in early 2025. I don't know how much allocation the US has had of the semi-replacement (S)Q6, and the A6 was not launched at all.

Q4 is a bit weird, since it's just a more expensive ID.4, and not exactly more premium. It actually has a less premium feel than its sister car, the Skoda Enyaq, but that's not available in the US.

They're a bit out-of-phase with BMW and Mercedes right now, who just opened the books on their new platform cars. Perhaps you could argue it was bad timing with the Q6 being a bit of an "inbetweener", but the PPE platform was delayed, to be fair.


The US market is extremely regressive due to the changing regulatory environment. I fully expect new ICE cars without catalytic converters in the near future.

This is not representative of the rest of the world.


They might be happy that they can keep making V8s, but they have to know any future administration could easily outlaw any design that goes too far backwards. Such a car also couldn't be sold anywhere else in the world. Heck, by the time they design, tool, and produce such a beast, it could already be too late.

Well, just leave the catalytic converter out in the US variants of cars and pocket the difference. That's a far simpler change.

I don't want that, but given everything else that's going on, it wouldn't be a surprise.


The demand for EVs is crashing across the board. Porsche, for example, is now in dire straits because they had promised to make the 718 only as an EV; with demand going down, they'll revamp the platform and bring ICE 718s back.

Unless the feds can take California out of the regulatory picture, I don't expect major steps backward. Almost half the country adopts California's emissions standards.

Now we need a usable website that uses this API to show the latest layoffs in, for example, CA.

LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.

DDR4 tops out at about 27 GB/s.

DDR5 can do around 40 GB/s.

So for a 70B model at 8-bit quant, you will get around 0.3-0.5 tokens per second using RAM alone.
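The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope estimate only (function names are mine, not from any library): each generated token has to stream the full weight set from memory once, so decode speed is roughly bandwidth divided by model size in bytes.

```python
def tokens_per_second(bandwidth_gb_s: float, params_b: float,
                      bytes_per_param: float = 1.0) -> float:
    """Rough decode-speed ceiling: bandwidth / model size.

    bytes_per_param = 1.0 corresponds to 8-bit quantization.
    """
    model_gb = params_b * bytes_per_param
    return bandwidth_gb_s / model_gb

# 70B model at 8-bit quant:
print(round(tokens_per_second(27, 70), 2))  # DDR4 -> 0.39 tok/s
print(round(tokens_per_second(40, 70), 2))  # DDR5 -> 0.57 tok/s
```

Real throughput lands a bit below this ceiling (KV cache reads, compute overhead), which is why the range quoted above is 0.3-0.5 rather than the raw quotient.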


DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster.

Yes.

In general, systems usually have a PCIe version with bandwidth better than that system's RAM.

For example, a system with DDR4 (27 GB/s) usually has at least PCIe 4.0 (32 GB/s at x16).

But you can bottleneck that by building a DDR5 (40 GB/s) system with a PCIe 4.0 card.


yeah, actually, I'm bottlenecked af since my mobo has PCIe 3 only :(

Channels matter a lot; quad-channel DDR4 is going to beat dual-channel DDR5 most of the time.

Four channels of DDR4-3200 vs two channels of DDR5-6400 (four subchannels) should come out pretty close. I don't see any reason why the DDR4 configuration would be consistently faster; you might have more bank groups on DDR4, but I'm not sure that would outweigh other factors like the topology and bandwidth of the interconnects between the memory controller and the CPU cores.
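The per-configuration numbers are easy to check: theoretical peak is transfer rate (MT/s) times bus width times channel count. Both DDR4 and DDR5 present 64 data bits per DIMM channel (DDR5 splits it into two 32-bit subchannels, but the total width is unchanged), so the two configurations above tie exactly. A quick sketch (helper name is mine):

```python
def channel_bandwidth_gb_s(mt_s: int, channels: int,
                           width_bytes: int = 8) -> float:
    """Theoretical peak bandwidth: MT/s * 8 bytes per channel / 1000."""
    return mt_s * width_bytes * channels / 1000

print(channel_bandwidth_gb_s(3200, 4))  # quad-channel DDR4-3200 -> 102.4 GB/s
print(channel_bandwidth_gb_s(6400, 2))  # dual-channel DDR5-6400 -> 102.4 GB/s
```

Which one wins in practice then comes down to the secondary factors mentioned above (bank groups, interconnect topology), not raw bandwidth.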

Faster than the 0.2 tok/s this approach manages.

Should be active param size, not model size.

Yes, you’re right.

Llama 3.1, however, is not MoE, so all params are active.

For MoE it is tricky, because for each token you only use a subset of params (an "expert"), but you don't know which one in advance, so you have to keep them all in memory or wait until it loads from slower storage, potentially a different expert for each token.


From wiki:

> Rebecca Beth Bauer-Kahan (née Bauer; born October 28, 1978) is an American attorney and politician who has served as a member of the California State Assembly from the 16th district since 2018. A member of the Democratic Party, her district extends from Lamorinda to the Tri-Valley region of the San Francisco Bay Area. She has been described as a women's rights advocate.


It is a safety, regulatory, and noise nightmare.

As opposed to what, nuclear energy, airplane traffic, people directing 2-ton vehicles?

Get over your bs.


You could have bought a house and a private jet with these if you had held just one more year.

On the Lex podcast, it seemed that Peter liked Mark Zuckerberg more than Sam Altman.

Maybe it was a move to make Sam come with an overwhelming offer.


According to the Artificial Analysis ranking, GLM-5 is at #4, after Claude Opus 4.5, GPT-5.2-xhigh, and Claude Opus 4.6.

