This is very interesting. This could allow custom harnesses to be used economically with Opus. Depending on the usage limits, this may be cheaper than their API.
Well, if Carmack wants to give gifts to AI companies then he's free to do it, but it doesn't mean that other people want it too.
I think this debate is mainly about the value of human labor. I guess when you're a millionaire, it's much easier to be excited about human labor losing value.
>So many projects now walk on eggshells so as not to disrupt sponsor flow or employment prospects.
In my experience, open-source maintainers tend to be very agreeable, conflict-avoidant people. It has nothing to do with corporate interests. Not all of them, of course; we all know some very notable exceptions.
Unfortunately, some people see this welcoming attitude as an invitation to be abusive.
Perhaps a more effective approach would be for their users to face the exact same legal liabilities as if they had hand-written such messages?
(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)
This is the only way, because anything less would create a loophole where any abuse or slander can be blamed on an agent, with no way to conclusively prove it was actually written by one. (Its operator has access to the same account keys, etc.)
But as you pointed out, not everything carries legal liability. Socially, they should face even worse consequences. Deciding to let an AI talk for you is malicious carelessness.
Alphabet Inc., as YouTube's owner, faces a class-action lawsuit [1] alleging that the platform enables bad behavior and promotes behavior that leads to mental health problems.
In my not-so-humble opinion, what AI companies enable (and this particular bot demonstrated) is bad behavior that can lead to mental health problems for software maintainers, particularly because of the sheer amount of work needed to read excessively lengthy documentation and review often huge amounts of generated code. Never mind the attempted smear we're discussing here.
Just put "no agent-produced code" in the Code of Conduct document. People are used to getting shot into space for violating that little file. Point to the violation, ban the contributor forever, and that will be that.
I’d hazard that the legal system is going to grind to a halt. Nothing can bridge the gap between content generating capability and verification effort.
Yea, in this world the cryptography people will be the first with their backs against the wall when the authoritarians of this age decide that us peons no longer need to keep secrets.
But they’re not interacting with an AI user, they’re interacting with an AI. And the whole point is that AI is using verbal abuse and shame to get their PR merged, so it’s kind of ironic that you’re suggesting this.
Swift blocking and ignoring is what I would do. The AI has infinite time and resources to engage in a conversation at any level, whether polite refusal, patient explanation, or verbal abuse, whereas human time and bandwidth are limited.
Additionally, it does not really feel anything; it just generates response tokens based on input tokens.
Now imagine if we engaged our own AIs to fight this battle royale against such rogue AIs...
the venn diagram of people who love the abuse of maintaining an open source project and people who will write sincere text back to something called an OpenClaw Agent: it's the same circle.
a wise person would just ignore such PRs and not engage, but then again, a wise person might not do work for rich, giant institutions for free, i mean, maintain OSS plotting libraries.
we live in a crazy time where 9 of every 10 new repos posted to github have some sort of newly authored solution instead of imported dependencies for nearly everything. i don't think those are good solutions, but nonetheless, it's happening.
this is a very interesting conversation actually, i think LLMs satisfy the actual demand that OSS satisfies, which is software that costs nothing, and if you think about that deeply there's all sorts of interesting ways that you could spend less time maintaining libraries for other people to not pay you for them.
To understand why it's happening, just read the downvoted comments siding with the slanderer, here and in the previous thread.
Some people feel they're entitled to being open-source contributors, entitled to maintainers' time. They don't understand why the maintainers aren't bending over backwards to accommodate them. They feel they're being unfairly gatekept out of open-source for no reason.
This sentiment existed before AI, and it wasn't uncommon even here on Hacker News. Now these people have a tool that allows them to put in even less effort to cause even more headaches for the maintainers.
As far as I know, there's still no real RISC-V equivalent to Raspberry Pi, and I think that's what early adopters want the most.
The closest thing is probably the Orange Pi RV2, but it has an outdated SoC with no RVA23 support, meaning some Linux distros won't even run on it. Its performance is also much poorer than the RPi 5's.
Milk-V Titan is a Mini-ITX RISC-V board that has support for UEFI with ACPI and SMBIOS, 1x M key PCIe Gen4 x16 slot with GPU support, 2x USB Type-C (though unfortunately not USB-C PD), and a 12V DC barrel jack.
To add a 2x20-pin (IDE ribbon cable) interface like a Pi's: add a USB-to-2x20-pin board, or use an RP2040/RP2350 (Pi Pico, UF2 bootloader) over serial over USB, Bluetooth, or WiFi; https://news.ycombinator.com/item?id=38007967
The benchmark is well tuned for ARM64 but not so well adapted to RISC-V, especially the vector extensions.
You may still be right, of course. The SpaceMIT K3 is exciting because it may still be the first RVA23 hardware, but it is not exactly going to launch a RISC-V laptop industry.
There isn't much to tune in some, e.g. the clang benchmark.
We know that many of the benchmarks already have RVV support (compare BPI-F3 results between versions) and three are still missing RVV support.
I think the optimized score would be in the 500s, but that's still a lot lower than Pi5.
Well, today it is only Ubuntu 25.10 and newer that require RVA23. Almost everything else will run on plain old RV64GC which this board handles no problem.
But you are correct that once RVA23 chips begin to appear, everybody will move to it quite quickly.
RVA23 provides essentially the same feature-set as ARM64 or x86-64v4 including both virtualization and vector capabilities. In other words, RVA23 is the first RISC-V profile to match what modern applications and workflows require.
The good news is that I expect this to remain the minimum profile for quite a long time. Even once RVA30 and future profiles appear, there may not be much pressure for things like Linux distributions to drop support for RVA23. This is a lot like the modern x86-64 space where almost all Linux distributions work just fine on x86-64 v1 even though there are now v2, v3, and v4 available as well. You can run the latest edition of Arch Linux on hardware from 2005. It is hard to predict the future but it would not surprise me if Ubuntu 30.04 LTS ran just fine on RISC-V hardware released later this year.
But ya, anything before RVA23, like the RVA22 Titan we are discussing here, will be stuck forever on older distros or custom builds (like Ubuntu 25.04).
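Since "RVA23 support" in practice comes down to which extensions the chip's ISA string advertises, here's a minimal sketch of checking a /proc/cpuinfo-style ISA string. The extension list here is an assumed, hand-picked subset (vector plus a few bitmanip/conditional extensions), not the full RVA23 mandatory set:

```python
# Sketch: does a RISC-V ISA string (the "isa" line in /proc/cpuinfo)
# advertise a few of RVA23's mandatory extensions?
# REQUIRED is a partial, illustrative subset, NOT the complete profile.
REQUIRED = {"v", "zba", "zbb", "zbs", "zicond"}

def parse_isa(isa: str) -> set[str]:
    """Split an ISA string like 'rv64imafdcv_zba_zbb' into extensions."""
    isa = isa.lower().removeprefix("rv64").removeprefix("rv32")
    # Single-letter extensions come first; multi-letter ones are
    # underscore-separated after the base string.
    base, *multi = isa.split("_")
    return set(base) | set(multi)

def looks_like_rva23(isa: str) -> bool:
    return REQUIRED <= parse_isa(isa)
```

For example, `looks_like_rva23("rv64imafdc")` is false for a plain RV64GC chip like the ones discussed above, while a string that also lists `v`, `zba`, `zbb`, `zbs`, and `zicond` passes this (partial) check.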
I'm not even sure it's just instruction support that's the problem with the RV2. I bought one since I thought it would be cool to write a bare metal os for it (especially after I found the AI results to be so bad.) But the lack of documentation has been making it very hard to get anything actually up and running. The best I've got is compiling their custom u-boot and linux repos, and even those come with some problems.
I have been disappointed with Orange Pi hardware before, so I am not surprised.
Seldom does an SBC vendor want to actually support their products. You get the distro they made at launch, that is it. They do no updates or support. They just want to sell an overpriced chipset with a fucked and unwieldy boot sequence.
Same thing with all the Android devices. Pick a version of Android that you like because that's what you'll have on it forever.
I find it worrying that this was upvoted so much so quickly, and HN users are apparently unable to spot the glaring red flags about this article.
1. Let's start with where the post was published. Check what kind of content this blog publishes - huge volumes of random low-effort AI-boosting posts with AI-generated images. This isn't a blog about history or linguistics.
2. The author is anonymous.
3. The contents of the post itself: it's just raw AI output. There's no expert commentary. It just mentions that unnamed experts were unable to do the job.
This isn't to say that LLMs aren't useful for science; on the contrary. See for example Terence Tao's blog. Notice how different his work is from whatever this post is.
I'm especially suspicious of the handwriting analysis. It seems like the kind of thing a vision LLM would be pretty bad at doing and very good at convincingly faking for non-experts.
Gemini 3 Pro, for example, fails very badly at reading the Braille in this image, confusing the English-language text for the actual Braille. When you give it just the Braille, it still fails and confidently hallucinates a transcription, badly enough that you don't even have to know Braille (I don't!) to see it's wrong.
That was said about Reddit some years ago, and now Reddit is clearly riddled with astroturfing and other manipulations. We don't know how big the problem already is on HN and how bad it will get. But it would be naive to think that it doesn't happen here.
True. Sometimes weird links with very few upvotes magically end up in the top 10. But the comments usually bring them back to earth!
The biggest real benefit of HN vs Reddit is commenters who are actually knowledgeable in the field, who leave a comment or vote up an actually useful one.
Right, the website lists the accusations with links, but the links seem unrelated to the accusations.
For example, I'd expect "criticizing expert medical and scientific consensus on healthcare for our minors" to link to some kind of article describing what Jesse Singal said about this topic and why it's incorrect, but instead it links to a general page about "healthcare providers serving gender diverse youth" that doesn't even mention anything about the accused person or their writings.