
> dig in your heels when confronted with overwhelming dissent

Of course, the majority is always right and we should yield to it right away /s


One heuristic for spotting when you might be wrong is that you hold a very uncommon belief.

It COULD be that you are correct and the world is crazy, but it's far more likely that you are the one who is missing something. It's always worth stopping to double check when this happens.

Perhaps more importantly, if you do happen to be right when everyone else is wrong, it's important to determine your goals.

Is it more important to be right, or to be happy? If the answer is the latter, then it's sometimes best to just let people continue being wrong for the sake of being social. Nobody likes to be told they're wrong, so is "correctness" worth more than that person's feelings? Very often it is not.


> Nobody likes to be told they're wrong

I like to be told I'm wrong. While it is true that I am a nobody, it means I'm about to learn something.


I don't really think you like it, but maybe you will like this.

> I like to be told I'm wrong.

I believe you, but in my own experience I've met more people who say this than who mean this.

Usually it's situational. People might genuinely like to be wrong when the novelty is fun or useful, for example in lab work or in low stakes classwork. However, they despise it with politics, their job, or anything else that might have actual consequences in their lives.


A lot of people believe this about themselves, but yes, like you suspect, they don't mean it when it counts.

> sometimes best to just let people continue being wrong for the sake of being social

There's almost no time when it's better to try to convince somebody they're wrong. It won't help you, and it won't work anyway, so it won't help them either.

Sure if you're somebody's doctor, and even then you have to pick your battles.


Ever heard of tribalism and echo chambers? Wrongness as a function of the number of dissenters is a terrible heuristic, in contrast to judging lies and falsehoods by the soundness of the argument or logic.

Also, when a population group is large enough (e.g. entire world), it's quite likely a crazily-held belief is shared by other people, or people who would at least nod in agreement.


The thing with uncommon beliefs is not that they are likely wrong, but that digging in your heels is surely going to fail, regardless of who is actually right.

So your suggested response is the right approach, but it doesn't end there. You can try to find a common belief and build up your argument from there. People's opinions can be changed if you take the time to learn how their opinions are formed and present them with the opportunity to consider alternative ideas, ideally in such a way that they discover the truth on their own.

A key component is that unity enables change. It is better to be wrong but united than right and divided. If we are united (and thus stay friends), then we can learn from being wrong and change direction. If we are divided, then changing direction is difficult.


I think there are also different definitions of "digging in your heels".

What most people do is just whine and repeat themselves because they don't understand all the ways they're being misunderstood. They lack self-awareness because they lack sufficient experience hearing and digesting the arguments from the other sides. This is a missed opportunity.

What people should do instead is leverage their self-awareness once they have the spotlight and "magically know" which concerns to address when they are given that brief window of rebuttal. It's hard to get attention, so they must strike when the iron is hot. It takes a lot of experience, and most never get to that level. Repetition signals to everyone else they don't really know what they're talking about.

The majority of the audience may actually be on your side agreeing with you, but they won't stick their neck out for the truth if they know they're less informed and less experienced than you, yet even you still failed. They have no chance to do any better, so they just shut the fuck up. Everyone languishes. Your point is noted, but not winning. All you did was paint a target on your back for the next time you say anything. People would rather be winning than right. Agreeing with you once doesn't mean they side with you.


> Is it more important to be right, or to be happy?

I'm going out on a limb here, but I'd say intelligent people will tell you, without a doubt: being right. Being happy is a perception and always a transient state. There's nothing stopping you from being both right and happy.

> Nobody likes to be told they're wrong

That's actually a southern European way of looking at things; it's a cultural trait that varies a lot by region. Pointing out flaws in plans is something I saw treated as praiseworthy in Germany.

Also, I always tell people when I think they are wrong. I no longer insist or argue, just point out what led me to that conclusion; you don't want to be in the blast radius of a deaf manager, an incompetent colleague, or a delusional partner. Win-win.


I’m going to ignore the whole socially-agreeable aspect.

Take a thousand subjects. I'm going to be wrong about 990 of them, because I know just enough about each to think I might have a clue.

You could probably read up on something for five hours and have a better opinion on it than most people that you meet.

How many things are just passively received opinion? And what kind of signal is that? Oh no, all the Jacks and Jones disagree with me.

On the other hand there are some cases where you can go down some dark rabbit hole and gain false knowledge and education. Maybe studying political science or something.


>One heuristic for spotting when you might be wrong is that you hold a very uncommon belief.

This is only the case in a 'wisdom of the crowd' world where people hold uncorrelated, authentic, self-formed opinions. If you're in a world of mass opinion and mania where ideas spread virally, it ceases to be an indicator. In that environment it's not truth that determines the popularity of a belief, but how transmissible it is. In a world where gigantic companies manufacture sociality, being anti-social in the most literal sense is a very real survival and truth-finding strategy.

And of course it's more important to be right than happy. Happiness decoupled from truth is nihilism. If that's the goal, start doing heroin at ten in the morning and retreat into the VR world of your choice.

As Cormac McCarthy said in his last book: “You would give up your dreams in order to escape your nightmares and I would not. I think it's a bad bargain.”


> If you're in a world of mass opinion and mania where ideas spread virally it ceases to be an indicator.

Not really. It continues to be an indicator, just a less reliable one. As I said, it's one heuristic. It increases your probability of being right more than it decreases it, but it isn't an absolute rule.

Fundamentally, science itself relies on this heuristic to some extent. The idea that an experiment be reproducible is essentially the idea that the majority of testers should agree on observed reality. You just have to be careful not to conflate opinion with observed fact, or to treat it as more than a heuristic evaluator.
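The claim that majority disagreement is "one heuristic" rather than a rule can be made concrete with a toy Bayesian update. This is a sketch with illustrative numbers of my own choosing, not anything from the thread:

```python
# Toy Bayesian update: how much should mass disagreement lower your
# confidence? All probabilities here are hypothetical illustrations.

def posterior_you_are_right(prior, p_dissent_if_right, p_dissent_if_wrong):
    """P(you're right | most people disagree), via Bayes' rule."""
    numerator = p_dissent_if_right * prior
    denominator = numerator + p_dissent_if_wrong * (1 - prior)
    return numerator / denominator

# Suppose you start 80% confident, and widespread disagreement is
# twice as likely when you are actually wrong. The update lowers
# your confidence noticeably -- reason to double check your work --
# but nowhere near to zero, so it is not proof you are wrong.
p = posterior_you_are_right(prior=0.8,
                            p_dissent_if_right=0.3,
                            p_dissent_if_wrong=0.6)
print(round(p, 3))  # 0.667
```

The numbers are made up, but the shape of the argument is the point: disagreement shifts the probability without settling it.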

> Happiness decoupled from truth is nihilism.

Not at all.

You do not need to be correct to be happy, and there is no correlation at all between your ability to correctly understand the world and your capacity (or worthiness) to experience joy or to help others experience it. You are allowed to be wrong and happy, or apathetic and happy, or ignorant and happy, or even nihilistic and happy.

> If that's the goal start doing heroin at ten in the morning and retreat into the VR world of your choice.

There's more than one type of happiness. The kind you describe is hedonic. The other type is referred to as eudaimonic, and it comes from connection, service, and a sense of purpose.

You'll never get to experience the second type if other people don't want to be around you because you've decided that your own narrow perspective is the One True Perspective (TM).

Don't get me wrong, I reject post-modernity and the horrifying idea that there is no objective truth. I just also reject the idea that any of us are valid arbiters of that truth, or that we must know the truth before being allowed to experience happiness.


> You are allowed to be wrong and happy

Nobody said you can't. They said the happiness is "decoupled from truth", which isn't ideal if we care about objective health of a society.

Your position seems to imply support for society-level submission to religious dogma. There's no point ignoring actual examples of all these ideas.

Hold an "uncommon belief"? According to you, it's a sign you're wrong. "the world isn't crazy, it's you who's missing something"... and you even say "let people continue being wrong for the sake of being social."

I don't think you meant to express support for strict religious rule and population submission, but that's how I'm reading it.

Your argument supports those who seek submission from the population. You don't require objective truth to play a role in happiness. You have found value in submission that serves to neutralise dissent. Dissent when coming from the few, isn't worth your time. Peg those few dissenters as "probably wrong" and call it a day.


A lot of folks seem to be interpreting my heuristic as if it were a hard-and-fast rule here. That's not what a heuristic is.

A heuristic is a mental shortcut that allows for quick decisions based on "most likely" outcomes. It's a statistical tool.

In my case I said it should be enough of an indicator for you to double check your work, not that you were automatically wrong. I stand by that.

That said, you're getting into epistemology now, and it's important to differentiate between the observed facts and the biased interpretation of them.

I mentioned before that reproducibility in this way is important to science, and the reason it works with science is that we decouple the observation from the interpretation.

When most people observe that 2+2=4 and you get 5 it is likely that you're wrong. You should invest the time to double check your work.

If 50% of the world then tortures that observation through convoluted and error filled reasoning until they can interpret it to mean magic sky daddy wants you to let the priest touch your special no-no place you should ignore them.

Observed reality being in agreement is a much more reliable heuristic than agreement of interpretation, which is often filled with bias.

> They said the happiness is "decoupled from truth", which isn't ideal if we care about objective health of a society...

Happiness is an emotion. Imagine if I'd claimed society should only be allowed to feel lust on Tuesdays. This is no different. You are allowed to feel however you'd like to feel whenever you'd like to feel it.

Making up arbitrary rules about when you're "allowed" or "deserve" to feel good will make you miserable, and you don't have the right to tell the rest of us we have to abide by your misery-making emotion rules.

You might consider asking yourself where this idea that you must meet specific prerequisites before being allowed to feel specific feelings came from, and then seriously considering whether it has merit, or if it even actually works.

Are you also restricting when you're "allowed" to feel negative emotions? How does that work? Are you really able to just... not be sad if you don't deserve it? Do you always feel happy when you DO deserve it?

I think the Protestant ethos is so deeply embedded in you that you might not even know it's there.


I think I can speak for most people with niche subjects of interest when I say that the commonly held beliefs on said niche subject tend to be pretty bad.

Ever heard the phrase "pick your battles"?

May the bridges I burn light the hills I die on.

You don't have to accept their conclusions, but they don't have to accept yours either. You can walk away.

Sometimes you just have to implement them :)

Can I be your friend, please?

Also this document is basically just how I act, or how I would still act if I was less self-aware; some combination of the two.

I suspect the author may have written this partly as a self-critique.


I think that would also have busted cache all the time, and uncached requests consume usage limits rapidly.

> Today, in ~5 minutes I can do a literature review that would have taken me easily 10+ hours five years ago.

And it will not yield the same outcome you would have had. Your own taste in clicking links and pre-filtering as you do your research is no longer applied if you outsource this. I'm guilty of this myself. But let's not kid ourselves.

I've had GPT Pro think for 40 minutes about the ideal reverse osmosis setup for my home. It came up with something that could have supported 10 houses and cost 20k, even though I told it all about my water consumers and that it should research their peak usage. It just failed to observe that you can buffer water in a tank.

There's a reason they now let you steer GPT Pro as it goes.


I don't claim using AI is the same as doing it yourself. My point is that AI capabilities are much more extensive than "fancy search". By giving a metric and an example I hoped to make that point without getting into hair-splitting.

I wouldn’t call that hair-splitting. I’m saying, it’s not a real literature review, but even fancier search.

Words hint at concept space, which is messy and interconnected. I think a charitable reading can understand the difference between "powerful search, kind of like Google as of 2020, or LexisNexis" and LLM-AI chatbot interfaces... I would hope. But I've been developing software since the 1980s, so I can't speak for the newer generations who might not have a four-decade view. I've been in meetups in San Francisco around 2018 where people were excited to find multimodal reasoning in early proto-language models. There have been qualitatively noticeable historical shifts. We don't have to agree on the exact labels used, but what LLMs enable is different enough from e.g. ElasticSearch of 2020 to call out.

You might be surprised how well 5.3-codex follows your instructions. When it hits a wall with your request, it usually emits the final turn and says it can’t do it.

Why is it not useful? Input token pricing is the same for 4.7. The same prompt costs roughly 30% more now, for input.

The idea is that smarter models might use fewer turns to accomplish the same task, reducing the overall token usage.

Though, from my limited testing, the new model is far more token-hungry overall.


Well, you'll need the same prompt for input tokens?

Only the first one. Ideally now there is no second prompt.

Are you aware that every tool call produces output which also counts as input to the LLM?

Are you aware that a lot of model tool calls are useless and a smarter model could avoid those?

Are you aware that output tokens are priced 5x higher than input tokens?
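The back-and-forth above reduces to simple arithmetic: a pricier-per-token model can still be cheaper per task if it wastes fewer tool calls, since every tool call's output is fed back in as input. A sketch with entirely hypothetical prices and token counts (the thread only states the rough ratios: input ~30% more expensive, output ~5x input):

```python
# Illustrative session-cost comparison. All prices (per million
# tokens) and token counts below are made up for the sake of the
# arithmetic; only the ratios echo the discussion above.

def session_cost(input_tokens, output_tokens, in_price, out_price):
    """Total cost in dollars; prices are per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1e6

# Cheaper model: $2/M input, $10/M output (5x), but many wasteful
# tool-call round trips inflate the input side of the ledger.
old = session_cost(input_tokens=400_000, output_tokens=20_000,
                   in_price=2.0, out_price=10.0)

# Smarter model: ~30% higher prices, but fewer, more correct tool
# calls mean far fewer tokens cycled back through the context.
new = session_cost(input_tokens=150_000, output_tokens=15_000,
                   in_price=2.6, out_price=13.0)

print(f"old: ${old:.2f}, new: ${new:.2f}")
```

Under these made-up numbers the pricier model ends up cheaper for the session; whether that holds in practice depends entirely on how many tool calls it actually saves.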


> a lot of model tool calls are useless

That's just wrong. File reads, searches, and compiler output are the top input token consumers in my workflow. None of them can be removed, and they are the majority of my input tokens. That's also why labs are trying to make 1M input work, and why compaction is so important to get right.

Regarding output - yes, but that wasn't the topic in this thread. It's just easier to argue, using input tokens, that the price has gone up. I have a hunch the price for output will go up similarly, but can't prove it. The jury's out IMO: https://news.ycombinator.com/item?id=47816960


This has no bearing on my comment. The point is that a better model avoids dozens of prompts and tool calls by making fewer CORRECT tool calls, with the user needing no more prompts.

I’m surprised this is even a question; obviously a better prompter has the same properties and it’s not in dispute?


That's valid, but it's also worth knowing it's only one part of the puzzle. The submission title doesn't say "input".

Common sense isn’t a language pattern. I doubt this will ever work w/ LLMs.

The models that we are paying to generate tokens are already not really just LLMs, as anyone studying language models ten years ago (or someone who describes them as "next token predictors") would understand them. Doing a bunch of reinforcement learning so that a model performs better at ssh'ing into my server and debugging my app is already really stretching the definition of "language pattern".

I think when we do get AI that can perform as well as a human at functionally all tasks, they will be multi-paradigm systems; some components will not resemble anything in any commercial system today, but one component will be recognizably LLM-like, and act as an essential communication layer.


Why don’t you do the planning yourself? It’s very likely to be a better plan.

Why don’t you switch to codex? The grass is greener here. Do use 5.3-codex though, 5.4 is not for coding, despite what many say.

Anthropic in general is miles ahead in "getting work done", and it's not just me on the team who thinks so. There are a lot of paper cuts to work through to be truly provider-agnostic.

I did try out codex before claude went to shit, and it was good, even uniquely good in some ways, but it wasn't good enough to choose over claude. Absolutely, when claude was bad again it would have been better, but that's hindsight; I should have moved over temporarily.


You can also rent a cloud GPU which is relatively affordable.

Or an autoresearch minimizing render times.
