Hacker News | raducu's comments

>high level of motivation and self discipline to go 3 times a week.

I don't really have the time for the gym, but going feels so good, so I can see why someone who does have the time might go 7 days a week.


People really differ, don't we?

I can only imagine what it would feel like to enjoy working out!


> Without weighing on the validity of their hypothesis that one or both sides found the other “especially attractive”

I get that it's survivorship bias and all, but modern racial preferences also paint a clear picture. I don't understand why we are so against this hypothesis that male homo sapiens did not particularly like the female neanderthal (I can clearly see why as any modern male would).

We found neanderthal fossils with sapiens DNA (as far as I remember it was something like 7%, so not sterile hybrids, but a few generations after the hybridisation). I don't think we have ANY evidence for the non-viability of male sapiens + female neanderthal pairings; we just don't like the fact that this viability proves the asymmetry.

Perhaps because the modern psyche loves to picture males as sexual brutes and women as these higher, wonderful, rosy elves, and this "shocking" neanderthal (i.e. "beastly") preference goes strongly against that meme?

Why would it be so inconceivable that the male part of homo sapiens drove the sexual selection for the more "refined" features of the species, and that women's preference for intelligence was not intrinsic but partially "forced" -- i.e. war brides and all? It would then make perfect sense that some homo sapiens women would be attracted to the physical strength cues of male neanderthals, just like... gasp... modern women are.


Ancestral Neanderthal Y-DNA was completely replaced by an incursion of Sapiens Y-DNA long before they (/we?) went extinct, so your whole theory of "we ain't hitting that" is not very convincing, to say the least.

"DNA deserts" likely indicate spots where there were issues with hybrid viability and not some half-disguised fantasy.


> don't understand why we are so against this hypothesis that male homo sapiens did not particularly like the female neanderthal

Maybe the documentary 101 sexual accidents might enlighten you.

> (I can clearly see why as any modern male would).

What is a "modern male" ?


> All other things being equal, if your opponent engages actively in hiding among medical and press workers as a type of guerrilla warfare, then the reality does become this.

So let me check this reasoning: if there was a single US soldier in the WTC towers, the 9/11 attacks were justified because the soldiers were hiding among the civilians?

Or if Hamas killed a single Israeli soldier in its horrendous attacks on private homes, then is it justified because there were soldiers in those houses?

Or if Israeli reservists have their weapons at home and can be called to action directly from home, does that mean Iran or Hamas would be justified in flattening residential buildings in Israel because those could host soldiers?


You've collapsed two meaningfully different things into one: 'soldiers exist near civilians' and 'soldiers deliberately operate from within protected populations as a systematic tactic.' Your three examples all illustrate the first. I was describing the second. These are not the same argument, and treating them as equivalent doesn't advance the discussion.


I have conflated those two, but my main point is that the monstrous, one-sided destruction Israel has caused in Gaza is clear proof that Israel has gone way, way, way into genocide territory, and not just into "Hamas fighters were hiding among the civilians, and after considering international law for such cases SOME civilians were killed" territory.

Israel demonstrated complete disregard for human life for the sake of expediency, to put it gently; put more harshly, the aftermath and the details that are emerging point to malicious collective punishment.


The scale of the destruction doesn't retroactively validate the tactics that made it more likely. 'It got very bad' is not a justification for abandoning the framework that might have contained it.

If anything it's an argument against it.


> Don't get me started on the thinking tokens.

Claude provides nicer explanations, but when it comes to CoT tokens or just prompting the LLM to explain itself, I'm very skeptical of the truthfulness of it.

Not because the LLM lies, but because humans do this too -- when asked how they figured something out, they'll provide a reasonable-sounding chain of thought, but it's not how they actually figured it out.


> Gemini also frequently gets twisted around, stuck in loops, and unable to make forward progress.

Yes, Gemini loops, but I've found that almost always it's just a matter of interrupting it and telling it to continue.

Claude is very good until it tries something 2-3 times and can't figure it out; then it tries to trick you by changing your tests instead of your code (if you explicitly tell it not to, maybe it will decide to ask) OR it introduces hyper-fine-tuned IFs to fit your tests, EVEN if you tell it NOT to.


I haven't used 3.1 yet, but 3.0 Pro has been frustrating for two reasons:

- it is "lazy": I keep having to tell it to finish or continue; it wants to stop the task early.

- it hallucinates: I keep arguing with it about made-up API functions for well-known libraries, functions which simply do not exist.


> let myself atrophy, run on a treadmill forever, for something

You're lucky to afford the luxury not to atrophy.

It's been almost 4 years since my last software job interview, and I know the drill about preparing for one.

Long before LLMs, my skills were naturally atrophying in my day job.

I remember the good old days of J2ME, of writing everything from scratch. Or writing some graph editor for university, or a speculative Huffman coding algorithm.

That kept me sharp.

But today I feel like I'm living in that Netflix series about people being in Hell and the Devil tricking them into believing they're in Heaven while tormenting them: how on planet Earth do I keep sharp with java, streams, virtual threads, rxjava, tuning the jvm, react, kafka, kafka streams, aws, k8s, helm, jenkins pipelines, CI-CD, ECR, istio issues, in-house service discovery, hierarchical multi-regions, metrics and monitoring, autoscaling, spot instances and multi-arch images, multi-az, reliable and scalable yet as cheap as possible, yet as cloud native as possible, hazelcast and distributed systems, low level postgresql performance tuning, apache iceberg, trino, various in-house frameworks and idioms over all of this? Oh, and let's not forget the business domain, coding standards, code reviews, mentorships and organizing technical events. Also, it's 2026 so nobody hires QA or scrum masters anymore, so take on those hats as well.

So LLMs it is, the new reality.


This is a very good point. Years ago working in a LAMP stack, the term LAMP could fully describe your software engineering, database setup and infrastructure. I shudder to think of the acronyms for today's tech stacks.


And yet many of the same people who lament the tooling bloat of today will, in a heartbeat, make lame jokes about PHP. Most of them aren't even old enough to have ever done anything serious with it, or to have seen it in action beyond Wordpress or some spaghetti-code one-pager they had to refactor at their first job. Then they show up on HN with a vibe-coded side project or a blog post about how they achieved a 15x performance boost by inventing server-side rendering.


Highly relevant username!


I try :)


Ya I agree it's totally crazy.... but, do most app deployments need even half that stuff? I feel like most apps at most companies can just build an app and deploy it using some modern paas-like thing.


> I feel like most apps at most companies can just build an app and deploy it using some modern paas-like thing.

Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.

But obviously that’s not Serious Business™ and won’t give you zero downtime and high availability.

Though tbh most mid-size companies would also be okay with Docker Swarm or Nomad and the same software clustered and running behind HAProxy.

But that wouldn’t pad your CV so yeah.


> Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.

That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC. How did we get to the point where applications by default came with all of this shit?


When I was in primary school, the librarian used a computer this way, and it worked fine. However, she had to back it up daily or weekly onto a stack of floppy disks, and if she wanted to serve the students from the other computer on the other side of the room, she had to restore the backup on there, and remember which computer had the latest data, and only use that one. When doing a stock-take (scanning every book on the shelves to identify lost books), she had to bring that specific computer around the room in a cart. Such inconveniences are not insurmountable, but they're nice to get rid of. You don't need to back up a cloud service and it's available everywhere, even on smaller devices like your phone.

There's an intermediate level of convenience. The school did have an IT staff (of one person), a server, and a network. It would have been possible to run the library database locally within the school, accessed remotely from the library terminals. It would then require the knowledge of the IT person to administer, but for the librarian it would be just as convenient as a cloud solution.


I think the 'more than one user' alternative to a 'single EXE on a single computer' isn't the multilayered pie of things that KronisLV mentioned, but a PHP script[0] on an apache server[0] you access via a web browser. You don't even need a dedicated DB server as SQLite will do perfectly fine.

[0] or similarly easy to get running equivalent
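The "SQLite will do perfectly fine" point deserves a tiny illustration. A minimal sketch (in Python rather than PHP purely for brevity; the table and data are made up) of how an embedded database needs no server process at all:

```python
import sqlite3

# SQLite is serverless: the "database" is just a file (or, as here,
# an in-memory one for the demo). This is why the simple
# "script + web server + SQLite" setup above needs no dedicated DB server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO books (title) VALUES (?)", ("Moby Dick",))
conn.commit()
print([row[0] for row in conn.execute("SELECT title FROM books")])
```

In a real deployment you'd point `sqlite3.connect` (or PHP's PDO equivalent) at a file next to the script, and backups are just file copies.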


> but a PHP script[0] on an apache server[0] you access via a web browser

I've seen plenty of those as well - nobody knows exactly how things are set up, sometimes dependencies are quite outdated, and people are afraid to touch the cPanel config (or however it's set up). Not that you can't do good engineering with enough discipline; it's just that Docker (or most methods of containerization) limits the blast radius when things inevitably go wrong and at least tries to give you some reproducibility.

At the same time, I think that PHP can be delightfully simple and I do use Apache2 myself (mod_php was actually okay, but PHP-FPM also isn't insanely hard to set up); it's just that most of my software lives in little Docker containers with a common base and a set of common tools, so they're decoupled from the updates and config of the underlying OS. I've moved the containers (well, data+images) across servers with no issues when needed, and have also reinstalled OSes and spun everything right back up.

Kubernetes is where dragons be, though.


> That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC

I doubt that.

As software has grown from solving simple personal computing problems (write a document, create a spreadsheet) to solving organizational problems (sharing and communication within and without the organization), it has necessarily spread beyond the .exe file and local storage.

That doesn't give a pass to overly complex applications doing a simple thing - that's a real issue - but to think most modern company problems could be solved with just a local executable program seems off.


It can be like that, but then IT and users complain about having to update this .exe on each computer whenever you add new functionality or fix some errors. When you solve all the major pain points with a simple app, "updating the app" becomes the top pain point, almost by definition.


> How did we get to the point where applications by default came with all of this shit?

Because when you give your clients instructions on how to set up the environment, they will ignore some of them, and then they install OracleJDK while you have tested everything under OpenJDK, and you have no idea why the application is performing so much worse in their environment: https://blog.kronis.dev/blog/oracle-jdk-and-openjdk-compatib...

It's not always trivial to package your entire runtime environment unless you wanna push VM images (which is in many ways worse than Docker), so Docker is like the sweet spot for the real world that we live in - a bit more foolproof, the configuration can be ONE docker-compose.yml file, it lets you manage resource limits without having to think about cgroups, as well as storage and exposed ports, custom hosts records and all the other stuff the human factor in the process inevitably fucks up.
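The "ONE docker-compose.yml file" point can be sketched roughly like this (a hypothetical config; the image names, ports, limits and hosts entries are made up for illustration, not taken from any real project):

```yaml
# Hypothetical docker-compose.yml showing how one file can cover
# exposed ports, storage, resource limits and custom hosts records.
services:
  app:
    image: example/app:1.0          # assumed image name
    ports:
      - "8080:8080"                 # exposed ports
    environment:
      DB_HOST: db
    extra_hosts:
      - "legacy.internal:10.0.0.5"  # custom hosts records
    deploy:
      resources:
        limits:
          memory: 512M              # limits without hand-rolling cgroups
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent storage
volumes:
  db-data:
```

With something like this in place, "docker compose up" is the entire setup instruction you hand to a client.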

And in my experience, shipping a self-contained image that someone can just run with docker compose up is infinitely easier than trying to get a bunch of Ansible playbooks in place.

If your app can be packaged as an AppImage or Flatpak, or even a fully self contained .deb then great... unless someone also wants to run it on Windows or vice versa or any other environment that you didn't anticipate, or it has more dependencies than would be "normal" to include in a single bundle, in which case Docker still works at least somewhat.

Software packaging and dependency management sucks, unless we all want to move over to statically compiled executables (which I'm all for). Desktop GUI software is another can of worms entirely, too.


When I come into a new project and I find all this... "stuff" in use, often what I later find is actually happening with a lot of it is:

- nobody remembers why they're using it

- a lot of it is pinned to old versions or the original configuration because the overhead of maintaining so much tooling is too much for the team and not worth the risk of breaking something

- new team members have a hard time getting the "complete picture" of how the software is built and how it deploys and where to look if something goes wrong.


That was on NBC.


> notice is that there's always been oscillations

There have always been oscillations, true, but the rate of change and the trend of those oscillations is the real issue.


Happy to be shown where I can learn more about this different rate of change and trend which sets our current climate change apart from the rest of Earth's history.


Almost anywhere the measurements behind climate science are being discussed. Just pay attention to the x axis on the plots.


It seems like you won't have any trouble finding that yourself if you really wanted to. This "I'm just asking questions" mode you're in can be considered a type of trolling called "sealioning".

More details here: https://en.wikipedia.org/wiki/Sealioning


More bad faith interpretations.

Here is at least something tangible. https://en.wikipedia.org/wiki/Global_surface_temperature#Glo...

On this page can be found the following graphic https://upload.wikimedia.org/wikipedia/commons/thumb/c/ca/EP...

On that graphic -- under the heading 'Ice cores (from 800,000 years before present)', in case the link gets truncated -- one can observe regular peaks in temperature that took place before the current one. I'm happy to have it explained to me what caused them, as it could not have been human industrial activity.

That's it. I'm open to dialogue but won't entertain any more lazy dismissals and unfair characterization.


> Once you use up the entire internets worth of stack overflow responses and public github repositories you run into the fact that these things aren't good at doing things outside their training dataset.

I think the models reached that human-training-data limit a few generations ago, yet they still clearly improve through various other techniques.


> Claude is still just like that once you’re deep enough in the valley of the conversation

My experience is that Claude (but probably other models as well) does indeed resort to all sorts of hacks once the conversation has gone on for too long.

Not sure if it's an emergent behavior or something done in later stages of training to prevent it from wasting too many tokens when things are clearly not going well.


> That's just a different bias purposefully baked into GPT-5's engineered personality on post-training.

I want to highlight this realization! Just because a model says something cool doesn't mean it's an emergent behavior/realization; more likely it's post-training.

My recent experience with claude code cli was exactly this.

It was so hyped here and elsewhere that I gave it a try, and I'd say it's almost arrogant/petulant.

When I pointed out bugs in long sessions, it tried to gaslight me that everything was alright, and it faked tests to prove its point.

