Hacker News | Ms-J's comments

People have absolute freedom of expression.

"If you went to a restaurant and it had Confederate flags and pro-slavery memorabilia on the walls, would you think: 'Well, that's just their political view, I don't have to share it to eat here'?"

Yes? If you go to the southern part of the United States, there are many restaurants with Confederate memorabilia and Confederate flags on the back of truck windows.

Some trucks even have hairy testicles hanging off the hitch haha!


If people get gender-affirming care for their trucks, that's their own business, but no, no I will not eat in a place with a Confederate flag.

I find the idea of venerating an ideology that held that it was ok to hold human beings in bondage from the moment of their birth to their death to be abhorrent.


It is absolutely your right to express yourself by not going to these places.

That is the beauty of freedom. You make the choice.


> People have absolute freedom of expression.

And that includes not using X. And it includes criticising, mocking, or talking about what X's owner does.


Yes, exactly.

I'm looking for an agent, but thanks.

I forget the exact issue with GPT4All, as a few of these tools have blended together after they turned out not to be suitable for me.


My bad, I misread your post. If GPT4All didn't work out, go with Aider. It's a CLI tool with no UI trying to proxy requests to a dev's server. You just point it at your local model (via Ollama or vLLM) and it stays in its lane. Since it's Python-based, you can grep the source code to confirm there are no hidden update pings. If that's not for you and you need the IDE experience, pick Continue. It's the only one that handles air-gapped setups properly: you can manually install the .vsix file and kill all telemetry in the config.json. Unlike OpenCode, it doesn't try to be a platform; it's just a bridge between your code and your model server. OpenCode failed because it's basically "cloud-first" pretending to be local. Aider and Continue are actually built for what you want.
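For what it's worth, pointing Aider at a local Ollama server is just an environment variable plus a model flag. A minimal sketch (the model name is only an example; substitute whatever you have pulled locally):

```shell
# Tell Aider where the local Ollama server lives (default port shown).
export OLLAMA_API_BASE=http://127.0.0.1:11434

# Run Aider against a locally pulled model; the "ollama_chat/" prefix
# routes requests to the local Ollama endpoint instead of a hosted API.
aider --model ollama_chat/qwen2.5-coder:7b --no-auto-commits
```

Nothing in that invocation phones home to a hosted provider, which you can confirm by watching outbound traffic while it runs.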

This was such a useful post, thank you, have an upvote! I forgot about Aider.

I was looking into it but got distracted with other work, do you know if it does have any update checks or telemetry? I will check the source but I could miss something so I definitely want to ask people who have used it.

I think I also looked into Continue very briefly. I'm glad you put those notes about it being more the IDE experience. Also, for this one, does it come with instructions on a GitHub page or something on how to kill all spying/telemetry?

Thanks again!


Thanks for the recommendation. I took a look, and maybe you can answer a few questions I couldn't find a clear answer to when doing some quick searching.

Regarding local models, can it use them? I found this discussion:

https://github.com/charmbracelet/crush/discussions/775

I didn't appreciate the maintainer's attitude, converting it into a discussion and ignoring the issue even to this day.

"It does have internal telemetry and such (including updating its list of external models it can use) that can be turned off in the crush.json configuration file."

Is there a page or guide which explains the telemetry and any internet connected settings?

Forgot to add, I use Linux.


My google-fu is failing me at the moment to cite sources, but here's an example ~/.config/crush/crush.json file (based on my own) showing the options to remove telemetry and provider auto updates, and the connection info to connect to a localhost model on an OpenAI-compatible endpoint:

    {
      "$schema": "https://charm.land/crush.json",
      "options": {
        "disable_provider_auto_update": true,
        "disable_metrics": true
      },
      "providers": {
        "ollama": {
          "name": "Local Models",
          "base_url": "http://localhost:11434/v1",
          "api_key": "nunya",
          "type": "openai-compat",
          "models": [
            {
              "name": "Qwen 3.5 Local",
              "id": "qwen-3.5-35b-planning",
              "cost_per_1m_in": 0.01,
              "cost_per_1m_out": 0.01,
              "context_window": 131072,
              "think": true,
              "default_max_tokens": 5120,
              "supports_attachments": true
            }
          ]
        }
      }
    }

...or not, thanks to formatting. I can't even search for help formatting this text box, because of HN's nature haha


That helps a lot being able to see an example, thanks!

I don't know why all of these tools make it so hard to find the info to disable the telemetry/spying; it's not just this one.

Regarding the formatting, I have no idea haha but there is a small "help" button on the bottom right next to the comment. Yes yes, I'm sure it won't help much.

Alternatively, asking an LLM might help. The other day one was able to link me to an exchange between a user and a mod about the posting cooldown period, which is how I learned it can be disabled per account.


Session was Australian-based, which meant they could be required to do all sorts of horrible things when asked by the government, such as even letting police impersonate users...

I just checked and they claim to have moved their infra to Switzerland.

There are many other issues, some I've forgotten about since I would never trust it in the first place. They also require a phone number even!

Seeing them go, I feel neutral. It's always good to have more anonymity software, just not this for me.


https://www.theguardian.com/australia-news/2024/nov/05/sessi... they moved more than their infra

> They also require a phone number even!

"You don’t need a mobile number or an email to make an account with Session." - https://getsession.org/faq#identity-protection


From your link, it explains that they moved to Switzerland:

"The developer of Session, an encrypted messaging app, has moved operations to Switzerland as ‘being in Australia just threatened our credibility as a privacy tool’."

What else in particular are you talking about?

With the phone number, I may have not remembered correctly for this particular software. If I could edit my comment, I would add a note.

But when going to the FAQ link I remembered how bad this piece of software was, especially its promotion of cryptocurrency. I would never want a messenger promoting crypto, as "Signal" does.

Edit: used different quote from the article


> They also require a phone number even!

No? Where did you get this from? I have used the app and was never asked anything. I was given an id I could share with others and that's it. Very simple. I wish more apps had this easy onboarding process.


No legal mechanism with such breadth exists in Australia. There was a great deal of overblown media reporting but the law [0] makes it explicitly clear that any request that requires a "systemic weakness", "systemic vulnerability" or anything of the like is null and void. Those terms are defined [1]. Note that it doesn't say the government can't request such a thing, it says that such a request "has no effect". It's simply dead on arrival.

My understanding is that the government could compel Facebook to publish a version of WhatsApp with a special mode that sends all messages to the police if the user ID is 1234567. This introduces a vulnerability but it is limited to one specific person. If your user ID is not 1234567, you're completely unaffected.

However, my understanding is that the government cannot compel Facebook to publish a version of WhatsApp that, when it receives a special message, silently starts sending plaintext copies of every other message it receives to the police. Such a mechanism would be a systemic weakness that affects people other than those for whom a warrant has been issued, so the notice would "have no effect".

The government could also not compel a source-available app with verifiable builds to stop distributing them so that it can add a secret user ID branch like the one I mentioned above for WhatsApp.

[0]: https://classic.austlii.edu.au/au/legis/cth/consol_act/ta199...

[1]: https://classic.austlii.edu.au/au/legis/cth/consol_act/ta199...


"No legal mechanism with such breadth exists in Australia." No.

See: https://lowendbox.com/blog/australian-police-will-soon-have-...

"These new warrant powers include:

1. Data disruption
2. Expansion of targeted devices to include all devices a suspect uses or might use
3. Account takeovers"

Australia is extremely draconian.


And a Five Eyes member.

Posted this earlier from a throwaway since my account wasn't able to reply for some odd reason and it was marked as dead:

Hello Jason!

I want to first thank you for all of your hard work developing Wireguard.

If I can find someone who is willing to put their name on it to help I definitely will, the problem is the spy agencies don't want your project to exist. It makes it harder to put resources to this. I've worked in security departments of certain companies and saw everything you could imagine.

Same for Mounir over at Veracrypt.

Both of you are developing some of the most important software that exists today.

Keep doing what you are doing by keeping everything in the open. User trust almost doesn't exist for these types of projects; any hint of an issue would wipe it out in seconds.

This leads me to one question I do have for you zx2c4:

Why does WireGuard attempt to contact your servers and auto-update on Android, with no toggle to turn this off? It's a threat to everyone. Maybe it also does this on other platforms, but I haven't tested them all.

I can think of reasons as to why you did this, none nefarious, but still it would be nice if you included that option so I don't have to patch each update to turn this off.

Thanks.


Any Americans I've spoken to are either so sick of wars that they of course don't want this, or they actively oppose it.

The only people you find wanting this war are Israelis and their kind. They sit back and relax while having their blackmail-controlled, ancient American politicians do all of the dirty work while sending their sons and daughters to die for Israel.


Sorry, but I just don't buy this argument.

All Americans I have met had the same discourse: "I am ashamed, it's a pity Trump is in power, it's hard for us too, we don't support him", etc. I am rather sick of it.

A democracy is not an "us versus them" system, it's a closed loop. One cannot hide behind "these imbeciles voted for him and I am held hostage by their ignorance". Pro- and anti-Trump voters are equally responsible for his election.

Maybe if the US was not such an individualistic country, with growing educational and wealth inequality, half the population wouldn't have voted for exploding the status quo.

Politicians are no more corrupt than the population that fails to impeach them.

The US is basically on a streak of blatantly stealing other countries' resources, mafia style, and we are long past the point where the population can argue "we didn't know, we thought they had weapons of mass destruction, I am so against it".


Why isn't Iran doing more? It seems like they are pandering to the USA when they have the moral high ground.

Moral high ground? They lost it long ago when they were hanging people for being gay and sponsoring terrorist groups.

The first is something the US wants to do too, and it has done the other a lot.

Anthropic and ClosedAI are some of the biggest bullshitters in the industry.

There is no moat, no special "capability", and when the time comes that we can run these models on our own, they will be cheap SaaS gimmicks marketed to corporations and used to make more slop pictures for social media.


Z.ai and their GLM models are pretty low quality.

I've been testing it for a while now, since it seemed to have potential as a local model.

With this new update it still cannot parse simple test PDFs correctly. It inconsistently tells me that the value in the name field of the document is incorrect, and it reverses the name to put the last name first. Or it claims a date is wrong because it's in the past/future, when it is not. Tons of fundamental errors like that.

Even when looking at the thinking process there are issues:

I used a test website for it to analyze, and it said the site's copyright year of 2026 is in the future and should be investigated as a possible attack, but right after it printed today's correct date.

I'm in the process of trying to get it uncensored. Hopefully that will create some use out of z.ai

Edit: by the way, which is the best uncensored model at the moment?


I've been using their models pretty much daily for the past 2 months to work on the codebase of a very complex B2B2C platform written in an unusual functional language (F#) with an Angular frontend.

I also use Claude premium daily for another client, and I use Codex, and I can tell you that GLM5 is at this point much more capable than Claude and Codex for complex backend work, complex feature planning, and long-horizon tasks. One thing I've noticed is that it is particularly good at following instructions and guidelines, even deep into the execution of a plan.

To me the only problem is that z.ai have had trouble with inference: the performance of their API has been pretty poor at times. It looks like this is a hardware issue related to the Huawei chips they use rather than an issue with the model itself. The situation has been substantially improving over the past few weeks.

GLM5.1, GLM5-Turbo and GLM5v are at this point better than Opus, Codex, Gemini and other closed-source models. We have reached a major turning point. To me, the only closed-source model still in the game is Codex, as it is much faster at executing simple tasks and implementing already-created plans.

Try GLM5v for your PDF work; it's their latest-generation vision model, released a couple of days ago.


Does anyone have inside info on what these Huawei chips look like? I know Google has a torus architecture, unlike Nvidia's fully connected one. Maybe it's a similar architectural decision on the Huawei chips that leads to bottlenecks in serving?

https://www.huawei.com/en/news/2026/3/mwc-superpod-ai

>For AI computing, the Atlas 950 SuperPoD, powered by UnifiedBus, integrates 64 NPUs per cabinet and can scale up to 8,192 NPUs, delivering superior performance for large-scale AI training and high-concurrency inference.


Plenty of other providers that offer much faster inference on GLM-5.1. Friendli, GMICloud, Venice, Fireworks, etc. And can be deployed through Bedrock already as well. Will probably be available generally in Bedrock soon, I would guess.

better than Opus? not even close. after struggling thru server overload for the past couple hours i finally put 5.1 thru the paces and it's....okay. failed some simple stuff that Sonnet/Opus/Gemini didn't. failed it badly and repeatedly actually. this was in typescript, btw. not sure if i'll keep the subscription or not

[flagged]


I appreciate that it's not working for your use case, but it's unfortunate that you dismiss the experience of others. And I am not Chinese, I am European. Thanks for your feedback anyway.

I tried Gemini 3.1 Pro once to implement a previously designed 7-phase plan. It only implemented a quarter of the plan before stopping; the code didn't even compile because half of the scaffolding was missing. It then confidently said everything was done.

Codex and GLM didn't have any issue following the exact same plan and producing a working app. So I would argue Gemini is the failure here.


Sounds like you two are talking past each other. PDF work is a specific niche that, according to you, it fails at; the other person says it's good at coding.

Scroll down to my other comment, I've used it specifically for coding as well.

"It couldn't even debug some moderately complicated python scripts reliably."


“GLM5…better than Opus, Codex, Gemini…”

What a wild claim to make. Unsupported by benchmarks, unsupported by the consensus of the community, no evidence provided.

Sounds like in another comment here even the GLM5 team concedes they are behind the frontier wrt tool calling, do you know something they don’t?


I know my use case and my personal experience :) I am not trying to pretend that it is the best in benchmarks, just sharing my experience so people know that some folks are having a very good experience with GLM models compared to the competition.

My only goal is to encourage people to try it out so they can see if it moves the needle for them, because there are fair chances that it will. I am not trying to start a flamewar or something.


It’s not a flame war, and you’re not just sharing your experience and encouraging others to try it out.

You’re making a claim, and I’m pointing out that it’s unsubstantiated and not consistent with any other source of data, including that internal to the company that makes the model.

I hope you can see that that's different from saying "it's worked well for me".


Sometimes we STEM folks are way too rigid, I obviously meant "IN MY OPINION, GLM models are at this point superior to...".

I do not think that anyone who read my comment understood it differently. But I grant you this point, this is just my opinion based on my personal experience not the result of a scientific study.

That said, I wasn't submitting a scientific paper for preprint, just posting my opinion on an internet forum.

Not sure why you are making such a big deal out of it, especially for something for which people can decide within minutes if it works for them or not. And I haven't seen you nitpick on other people saying that all Chinese models are garbage incapable of doing even the most basic task, without quoting any study. This kind of scrutiny tends to be one-sided.

Edit: and regarding what the z.ai team is saying about their models, just check their Discord and the articles they link there. They themselves say that their latest models have leading performance on a number of aspects. It is misleading to suggest that the authors of the model are not proudly saying that their models have best in class performance.


FWIW, my experience is the same. Paired with opencode it has been excellent to me.

So you're saying it's pretty low quality because it failed specifically to parse PDFs?

I do not know if it is good, because I have not tested it yet, but the most recent uncensored model is:

https://huggingface.co/trohrbaugh/gemma-4-31b-it-heretic-ara...

which was produced immediately after Google released their new Gemma 4 model.


Completely agree with the statement "Z.ai and their GLM models are pretty low quality." I have been trying it out and it's kind of useless compared to SOTA models.

I do not doubt your experience, but such statements should always be qualified by specifying the kind of tasks for which you have tried the models.

For all existing models, including for all SOTA models, you can find contradictory statements, that they suck and that they are great.

It is very likely that all these statements are true simultaneously, because each model may succeed for some tasks and fail for others, so without specifying the tested tasks any claim that a model was good or bad is worthless.


I still use GLM 4.7 for well-defined coding tasks. I never got 5.0 to work satisfactorily; it felt like a hosting problem (z.ai) where it would work for a while and then, for whatever reason, couldn't respond to the context any more - but that's just a hunch.

I had no such trouble with 4.7 and find it fast and productive. Haven't tried 5.1; am using openAI models for coding most of the time.


Same here.

Z.ai seem to promote 4.7 for smaller tasks, 5.1 for larger tasks (similar to Anthropic's recommendation for usage of Haiku and Sonnet/Opus models).

5.1 works for me already in the most economical basic paid tier ("lite coding plan"), unlike the first release of v5 (5.0?).


I hit this as well. It just seems to hang and process for ages.

Try lowering thinking level with GLM-5.1, to me that seems to have an impact on mitigating the blocking behaviour.

Hmm I'll try that, but OpenCode shows me the thinking and it's not even doing that. I'm just getting no tokens from it at all.

I don't agree. I think their models are pretty good. The company's infrastructure, though, seems to be so-so.

>by the way, which is the best uncensored model at the moment?

There are no such models, depending on your definition of censorship. If you're referring to abliteration and similar automated techniques, they're snake oil.


That is absolutely not the case. Try HauHauCS's Qwen 3.5 models. They don't refuse anything, and they don't lose a noticeable amount of capability.

Refusal training is only one part of censorship (hence "depending on your definition"). Most permanent biases baked in by the devs are impossible to correct automatically.

Thanks for the tip.

That user has produced other interesting models, including aggressively uncensored variants which claim 0 refusals.

I will definitely try it as there can never be too many uncensored AI implementations.


From what I gather qwen is currently the undisputed local LLM king.

It is very true. Look at the Ukraine war for a current example.
