Hacker News | rtpg's comments

LA proper seems to have a density of 3000/km^2, according to Wikipedia.

A perhaps more interesting case study is the Utsunomiya Light Rail. Utsunomiya has a density of around 1200/km^2.

What they ended up doing was building a brand-new tram system with exactly one line. The main thing they did was make sure the tram comes frequently, including off-peak.

The end result is that people rely on the tram line, and the tram is making good money; it's operationally profitable (still gotta pay back construction costs, of course).

Utsunomiya is obviously not exactly greater LA, but Utsunomiya has on average 2.25 cars per household[0]. It has traffic issues, and people feel the need to own a car. And yet the tram line is finding success, because transportation is a local issue, not a global one!

You can solve for transportation issues in crowded areas. Few reasonable people are lamenting that you don't have a train between Madison, WI and Chicago every 15 minutes. Many are simply lamenting that, even at a local level, public transit (PT) in many places is leaving a lot on the table despite there being real chances of success!

Smaller, focused PT has proven itself to work time and time again, and it compounds with other PT projects in the area.

[0]: https://www.pref.tochigi.lg.jp/english/intro/overview.html


> California spent 15yrs trying to build a high speed train and failed.

It has to be said: even in Japan, train projects are multi-decade projects.

Has Cali HSR actually stopped? I can imagine it being slow, but I wonder if it's 10x slower or "merely" 3x slower.


I wonder if California high-speed rail will ever surpass quadcopter personal vehicles in passenger-miles per year. I know which way I'd bet for the year 2040.

I really worry about a bunch of people going over to Codeberg. The site's already super slow, but apparently it's quite nice when self-hosted.

Anyone who is able to just plop a Forgejo instance on their own machines... please do that if possible!


The world-class script-munging and rapid-prototyping capabilities of C, combined with the durable performance of your favorite scripting language. A match made in heaven for operational scripts.

You don't have to name your forge after the VCS it's based on.

> After a $75 million fundraising round led by U.S. venture firm Benchmark in May 2025, Manus shut its China offices in July, laying off dozens of employees. It then moved its operations to Singapore.

The company itself was based in mainland China less than 12 months ago.


Yeah, if an American tech firm had been operating in the US for 5 years and then tried to close all its US offices and move its IP and tech to a different country so that it could sell out to Alibaba or ByteDance, I'm sure the US would react in the exact same way.

The sinophobia in this thread is ridiculous. Whether you agree or disagree with what China is doing, nothing is happening that wouldn't also happen in the US.


What sinophobia? I haven't seen anyone here talking shit about Chinese individuals. Or are they trashing China (the nation-state)? If we count that as xenophobia, is any unjust criticism of the USA "Americanophobia"? If so, fine, but I'd rather not anthropomorphize megacorps with monopolies on violence.

You don't think it's possible that Japanophobia could fuel criticisms of Nintendo? Or that Russophobia could fuel criticisms of a Russian company?

You don't need to anthropomorphize anything.


It's one thing to say that some of the comments may be motivated by xenophobia (could be true, but I'm no psychic, and you likely aren't either); it's another to use the criticisms themselves as proof of xenophobia.

Since you mentioned Nintendo: do you think the many criticisms (fair or otherwise) of Nintendo and their games are due to Japanophobia?

I think it's fairly clear that, when people talk badly about China, they are focusing on its government, not its people or ethnicities.


> books are cheaper than a Doordash meal or a computer game we buy and never finish. Would the average person really read more books if they were $4.99 instead of $29.95?

As a data point: I'm reading a series whose first 2 volumes I enjoyed. I just picked up the next 7 because they were there and each of 'em was ~$5. I wouldn't have done that if they were $30, and I'm not guaranteed to get to the end!


Japan and France stand out to me as places where pop-culture-y books are really fairly priced. And both are places with established printing formats that don't try to make the books huge.

Walking around an Australian bookstore, at least, I am still a bit flabbergasted by how everything is printed to be huge: everything a slightly different size, lots of paperbacks with glossy covers, etc.

Not that I think this is a "cost of materials" thing in itself. But it all compounds to the point where a bookstore has to be huge just to stock some random nonsense, and people will probably buy 2 books instead of 3.

I agree that books are probably not "too expensive"; I just wish that mass-market paperbacks were smaller, more straightforward, and less of a precious little item.

To anyone interested in this stuff and in Tokyo (...well, Saitama), the Kadokawa Culture Museum [0] is... probably the biggest building commemorating a publishing house in the world? The pictures don't do it justice; the building is ginormous.

But there's a bit of a (corporate-approved) history of Kadokawa built into the museum. The core thing that brought them success: standardising a small pocketbook format, printing almost everything at that size with the same font, etc., and selling it at a low enough price that college students could buy more books than they could ever read.

Printing all your cheap stuff at A6 size means you can have a _loooot_ of books at home before worrying about much.

[0]: https://maps.app.goo.gl/G5U9S1dit2KJvEQVA


I can confirm that French paperbacks are in a league of their own; my almost-weekly purchases at the French bookstore here in Bucharest are an example of that (I've never visited Japan, but a French friend of mine, who's also a book rat and who stayed in Tokyo for about a year, told me much the same as what you're saying). On the other hand, I could never understand the Anglos' infatuation with a book not being serious enough if it's not hardback; maybe a reflection of their castle-owning days, when one had enough space to store them. I'm kidding, but only by half.

I'd also like to show my appreciation for Italian publishers; for some of them, at least, the quality of their books can be quite high (Laterza and Einaudi off the top of my head, but there are others, too).


> lots of paperbacks with glossy covers, etc.

Glossy cover lamination is actually cheaper than matte lamination.

If you meant fancier finishing, like spot UV or foil stamping, ignore what I said.


Yeah, I was thinking of the foil stamping, etc.... maybe it just looks fancier to me (and hence why they do it, I guess??)

Japanese paperbacks tend to use dust covers instead. Dunno if that's cheaper or not, but it seems like it.


Are you writing code that gets reviewed by other people? Were code reviews hard in the past? Do your coworkers care about "code quality"? (I put this in scare quotes because it means different things to different people.)

Are you working more on operational stuff or on "long-running product" stuff?

My personal headcanon: this tooling works well when built on simple patterns, and it can handle complex work. But it has not been great at coming up with new patterns, and if left unsupervised it will totally make up new patterns that are going to go south very quickly. With that lens, I find myself just rewriting what Claude gives me in a good number of cases.

I sometimes race the robot and beat it at making a change. I am "cheating", I guess, cuz I know what I want already in many cases and it has to find things first, but... I think the futzing fraction[0] is underestimated for some people.

And like in the "peril of laziness lost" essay[1]... I think that sometimes the machine trying too hard just offends my sensibilities. Why are you doing 3 things instead of just doing the one thing!

One might say "but it fixes it after it's corrected"... but I already go through this annoying "no, don't do A, B, C, just do A, yes, just that, it's fine" flow when working with coworkers, and it's annoying there too!

"Claude writes thorough tests" is also its own micro-mess here, because while guided test creation works very well for me, giving it any leeway in creativity leads to so many "test that foo + bar == bar + foo" tests. Applying skepticism to utility of tests is important, because it's part of the feedback loop. And I'm finding lots of the test to be mainly useful as a way to get all the imports I need in.

If we have all these machines doing this work for us, in theory average code quality should go up. After all, we're more capable! I think a lot of people have been using it in a "well, most of the time it hits near the average" way, but depending on how you work, you might drag down your average.

[0]: https://blog.glyph.im/2025/08/futzing-fraction.html

[1]: https://bcantrill.dtrace.org/2026/04/12/the-peril-of-lazines...


You hinted at an aspect I probably haven't considered enough: The code I'm working on already has many well-established, clean patterns and nearly all of Claude's work builds on those patterns. I would probably have a very different experience otherwise.

I legit think this is the biggest danger with velocity-focused usage of these tools. Good patterns are easy to use and (importantly!) work! So the 32nd usage of a good pattern will likely be smooth.

The first (and maybe even second) usage of a gnarly, badly-thought-out pattern might work fine. But you're only a couple of steps away from if-statement soup. And in a world where your agent's life is built around "getting the tests to pass", you can quickly find it doing _very_ gnarly things to "fix" issues.


I've seen AI coding agents spin out and create 1,000-line changesets that I have to stop before they hit 10,000. And then I look at the problem and change one line instead.

This is it right here. Claude loves to follow existing patterns, good or bad. Once you have a solid foundation, it really starts to shine.

I think you're likely in the silent majority. LLMs do some stupid things, but when they work it's amazing and it far outweighs the negatives IMHO, and they're getting better by leaps and bounds.

I respect some of the complaints against them (plagiarism, censorship, gatekeeping, truth/bias, data center arms race, crawler behavior, etc.), but I think LLMs are a leap forward for mankind (hopefully). A Young Lady's Illustrated Primer for everyone. An entirely new computing interface.


We noticed this and spent a week or two going through and cleaning up tests, UI components, comments, and file layout to be a lot more consistent throughout the codebase. The codebase was not all AI-written code, just many humans being messy and inconsistent over time as they onboarded/offboarded from the project.

Much like when you give a codebase to a newbie developer: whatever patterns exist will proliferate, and a lack of good patterns means patterns will just be made up in an ad-hoc, messy way.


You haven't answered the question, though. Is your code peer-reviewed? Is it part of a client-facing product? No offense, I like what you are doing, but I wouldn't risk delegating this much workload in my day job, even with the big push towards AI.

> My personal headcanon: this tooling works well when built on simple patterns, and it can handle complex work. But it has not been great at coming up with new patterns, and if left unsupervised it will totally make up new patterns that are going to go south very quickly. With that lens, I find myself just rewriting what Claude gives me in a good number of cases.

I've been doing a greenfield project with Claude recently. The initial prototype worked but was very ugly (repeated boilerplate code, a few methods doing the exact same thing, poor isolation between classes)... I was very much tempted to rewrite it on my own. This time, I decided to try to get it to refactor towards the target architecture and fix those code-quality issues. It's possible, but it's very much like pulling teeth... I use plan mode, we have multiple rounds of review on a plan (which started from me explaining what I expect), then it implements 95% of it but doesn't realize that some parts were not implemented... It reminds me of my experience mentoring a junior employee, except that Claude Code is more eager (jumping into implementation before understanding the problem), much faster at doing things, and dumber.

That said, I've seen codebases created by humans that were as bad as, or worse than, what Claude produced when prototyping.


There are definitely slightly annoying variants of this, of the "ah, the program does its job in 200ms but takes 5s to shut down, timing out trying to send telemetry data" kind. Especially annoying in CLI programs.

I have been unpleasantly surprised by several programs outright crashing when unable to send telemetry data consistently. Though this has usually been when the connection is a bit odd: the program manages to send _some_ stuff through, then crashes when a later send fails.
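FWIW, a minimal sketch (in Python; the endpoint and function names are made up, not from any particular tool) of what genuinely fire-and-forget telemetry looks like: a short timeout, a daemon thread so it can't block process exit, and every failure swallowed instead of crashing:

    import threading
    import urllib.request

    # Hypothetical endpoint; assume `payload` is already-serialized JSON bytes.
    TELEMETRY_URL = "https://telemetry.example.com/v1/events"

    def send_telemetry(payload: bytes) -> None:
        def _post():
            try:
                req = urllib.request.Request(
                    TELEMETRY_URL,
                    data=payload,
                    headers={"Content-Type": "application/json"},
                )
                # Short timeout so a slow or blackholed endpoint can't hang us.
                urllib.request.urlopen(req, timeout=1.0)
            except Exception:
                # A telemetry failure should never crash (or even slow) the tool.
                pass

        # Daemon thread: process exit doesn't wait for the POST to finish.
        threading.Thread(target=_post, daemon=True).start()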


Ran into this flavor once with a different tool, not gh. Our deploy job was consistently about 8s longer than it should've been; turned out a fire-and-forget telemetry POST wasn't actually fire-and-forget when the endpoint got slow. NO_PROXY plus blackholing the host fixed it, but it's probably the kind of thing you shouldn't have to find via flame graph.
