maxnevermind's comments

Any publicly available evidence to back that up? There have been post-exit blog posts from OpenAI employees on HN before, and it sounded like the only black magic they use there is that many employees work 16 hrs a day during launches of new features. I know some current Claude Code devs are doing interviews where they claim they use Claude Code extensively, but they clearly have a conflict of interest while still employed at Anthropic, so it would be like asking a barber whether you need a haircut.

Look at the number of features (PRs) being pushed by these companies.

I wonder if this ends up like Tesla: China copies it and makes it cheaper, and GG if it's not protected by huge tariffs/bans. It seems like the US these days is just a testing ground for new tech that is later scaled further and optimized in China. Are there any hard moats protecting SpaceX from that?

> I wonder if this ends up like Tesla

Tesla ended up here because Musk got bored with it and was distracted by his shiny new toy: Twitter. And running a large car manufacturer requires skill, competence and careful attention, all of which Musk lacks.

Also, on any given day Musk is deep in the K-hole.


Maybe the rocket science?

Does anybody know what their Enterprise offering actually offers? I read through it but still don't get it.


Yep, I recently came to the realization that it is useful to think about LLMs as assumption engines. They have trillions of assumptions and fill in the gaps where they see the need. As I understand it, those assumptions are based on industry standards, and if those deviate from what you are trying to build, you might start having problems. When you try to implement a solution that is not "googlable", the LLM will assume some standard way to do it and keep pushing it. Then you have to provide more context, but if you have to spend too much time providing that context, you might not save much time in the end.
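To make the "assumption engine" point concrete, here is a minimal sketch (assuming the Anthropic Python SDK; the model name and prompt text are illustrative, not from the thread) of how spelling out a non-standard constraint up front steers the model away from its default assumptions:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Without extra context, the model falls back on its default,
    # "industry standard" assumptions about how the task is usually done.
    vague = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model choice
        max_tokens=1024,
        messages=[{"role": "user", "content": "Add retry logic to our HTTP client."}],
    )

    # Stating the non-standard constraint up front overrides that default,
    # at the cost of the time spent writing the context.
    grounded = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system=(
            "Our HTTP client must NOT use exponential backoff: the upstream "
            "service rate-limits per calendar minute, so retries have to "
            "align to minute boundaries."
        ),
        messages=[{"role": "user", "content": "Add retry logic to our HTTP client."}],
    )

If every task needs that much steering, the context-writing overhead can eat the time the LLM saves, which is exactly the trade-off described above.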


> verified with human use

The quality of that verification matters; people who use AI tend to cut corners. This does not completely solve the problem of AI slop and solution quality, imo. Ask Claude Code to implement a new feature in a complex code base and it will; the code might even work, but the implementation might have subtle issues and miss the broader vision of the repo.


> people who might use AI tend to cut corners

People do this all the time too, and it is one source of the phrase "tech debt".

It's also a biased statement. I use AI and I cut fewer corners now, because the AI can spam out that boring stuff for me.


I was trying to find some more context on this, but all I could find is that Rob Pike seems to care a lot about the efficiency of software/hardware and to be against bloat, which comes through in his work on Golang and in related talks.


Were you flexible about relocation, or were you looking just at your current region, or is it that bad no matter your flexibility?


I was also surprised by that. It is relatively cheap to measure, as you can just buy a BP monitor and do it yourself at home. High BP is very often asymptomatic (I, for example, even feel better with high BP), so many people walk around accumulating damage for years. Not to mention it comes with a baggage of other consequences, like increased chances of a stroke and kidney failure. For some reason it hits differently when you eat something salty, drink coffee, or get all stressed out for no reason, and then see the increased BP with your own eyes. That was what motivated me to stick to a better diet, cut caffeine, and chill out.


NIMBY is in the past; now it is BANANA: Build Absolutely Nothing Anywhere Near Anyone.


Does a very large context significantly increase response time? Are there any benchmarks/leaderboards evaluating different models in that regard?

