>So because some projects can absorb some PRs of a certain size, all projects should be able to absorb PRs of that same size?
Your argument has nothing to do with AI and more to do with PR size and 'fire and forget' feature merges. That's what the commenter you're responding to is pointing out.
And my entire point is that LLM-generated feature requests are strongly correlated with high-risk merge requests / pull requests, a point the commenter made no meaningful argument against. Instead the commenter chose to focus on the size of the PR and say “well I’ve seen it in the wild”.
The way to get around this without getting all the LLM influencer bros in an uproar is to come up with a system that allows open source libraries to evaluate the risk of a PR (including the author’s ability to explain wtf the code does) without referencing AI because apparently it’s an easily-triggered community.
I think that's the only shot at progress since it can address the general problem instead of trying to special-case unenforceable rules that you hope the lowest quality people follow.
For example, a 3000+ line PR with no communication beforehand is already a low quality PR before AI. And it's one of the most annoying contributions to deal with since you have to basically tell them "sorry but all that work you did isn't acceptable". Yet they probably did all of it in earnest.
Presumably you already have a policy where you accept random PRs for small tweaks like doc fixes, but you don't want unsolicited PRs that make substantial changes. So a rule against AI doesn't change anything there.
And if you saw an uptick in large unsolicited PRs, then surely the solution is to update the process, e.g. disallow PRs that don't link to an issue.
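As a minimal sketch of what such a check could look like as a CI script (the file name, the `PR_BODY` environment variable, and the issue-reference pattern are all assumptions, not any real project's convention):

```typescript
// check-issue-link.ts: hypothetical CI gate that fails the build
// when a PR description doesn't reference an issue.
// PR_BODY is an assumed env var; adapt to whatever your CI exposes.

const body = process.env.PR_BODY ?? "";

// Accept either a "#123" shorthand or a full GitHub issue URL.
const issuePattern = /#\d+|github\.com\/[^\s/]+\/[^\s/]+\/issues\/\d+/;

if (!issuePattern.test(body)) {
  console.error("Rejected: PR must link to an existing issue before review.");
  process.exit(1);
}

console.log("Issue reference found.");
```

It doesn't stop anyone determined, but it forces the "discuss before you dump 3000 lines on us" step without ever mentioning AI.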
>Or you can complain to the void that open source maintainers don't want to deal with low effort vibe coded bullshit PRs.
If you look back and think about what you're saying for a minute, it's that low effort PRs are bad.
Using an LLM to assist in development does not instantly make the whole work 'low effort'.
It's also unenforceable and will create AI witch hunts. Someone used an em-dash in a 500 line PR? Oh the horror, that's a reject and a ban from the project.
A 2000 line PR where the user launched multiple agents to scrub the PR of 'AI patterns'? Perfectly acceptable, no AI here.
> Using an LLM to assist in development does not instantly make the whole work 'low effort'.
Instantly? No, of course not.
I do use LLMs for development, and I am very careful with how I use them. I thoroughly review the code they generate (unless I am asking for throwaway scripts, because then I only care about the immediate output).
But I am not naive. We both know that a lot of people just vibe code the way through, results be damned.
I am not going to fault people devoting their free time to Open Source for not wanting to deal with bullshit. A blanket ban is perfectly acceptable.
The logic is fine, but hit and runs just became a lot easier to get away with then, no? Especially with tinted windows being so prevalent, you very well might not be able to give any description of the driver at all, and they can just later say they found their car like that.
Probably a lot of other issues arise from that. If your car gets towed for being illegally parked, what if you just say you didn't park it there? Seems like a similar violation to a red light ticket.
Hit and run is different; the car is insured, regardless of the driver. If criminal, they will interview to see if the owner was driving, who else had access to the car, and so on.
>I ask because I feel like if we don't do something, the trajectory is that ~every website and app is going to either voluntarily or compulsorily do face scans, AI behavior analysis, and ID checks for their users, and I really don't want to live in that world.
The only reason they'd _have_ to do that is government laws making them do so. When the law is vague about what counts as age verification, then once one company decides to do ID verification, any site that doesn't might not be doing 'enough' in the eyes of the law (it'd come down to a court case if not specifically defined).
Though it may seem more convenient to just do it at the OS level (though really the browser level would make more sense, with a required header/cookie, no?), I'd be shocked if you don't see it expanded in the future to be more than a checkbox.
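To illustrate the browser-level idea, here's a rough sketch of a server trusting a browser-asserted header instead of collecting IDs itself. Everything here is hypothetical: the `Sec-Age-Verified` header name is invented, no such standard exists, and the Express setup is just for demonstration.

```typescript
import express from "express";

const app = express();

// Hypothetical middleware: trust an age assertion attached by the
// browser/OS rather than doing our own ID checks. "Sec-Age-Verified"
// is an invented header name, not a real standard.
app.use((req, res, next) => {
  if (req.header("Sec-Age-Verified") === "adult") {
    return next(); // browser claims the user passed an age check
  }
  // No assertion: refuse the age-gated content.
  res.status(403).send("Age verification required");
});

app.get("/", (_req, res) => res.send("Restricted content"));

app.listen(3000);
```

The obvious catch is that the server has to trust whoever sets the header, which is exactly why governments would push it down to the OS or browser vendor rather than leave it as a checkbox.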
The 9/9 is actually crazy, and then they posted about it as if they found something? What they did was find a major issue in their own process and then told the world about it; that just doesn't seem right.
It would seem their service identifies only phishing sites as legitimate ones. It would seem 100% of sites they deem legitimate are phishing sites. Incredible.
The deep scan detected all phishing sites correctly with the unfortunate tagging of legit sites as phishing too. I imagine their code looks something like isPhishing = true.
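The underlying point is recall versus precision: a classifier that flags everything gets perfect recall while telling you nothing. A toy illustration, with all numbers invented for the example:

```typescript
// Toy confusion-matrix arithmetic for a "flag everything" classifier.
// Counts are made up for illustration, not from the article.
const truePositives = 9;   // every phishing site flagged (the 9/9)
const falsePositives = 91; // but every legitimate site flagged too
const falseNegatives = 0;

const recall = truePositives / (truePositives + falseNegatives);    // 1.0
const precision = truePositives / (truePositives + falsePositives); // 0.09

console.log({ recall, precision });
```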
I don't really think this is ever possible to stop fully; you're essentially trying to jailbreak the LLM, and once jailbroken, you can convince it of anything.
The user was given a bunch of warnings before successfully getting it into this state, it's not as if the opening message was "Should I do it?" followed by a "Yes".
This just seems like something anti-AI people will use as ammunition to try and kill AI. Logically, though, it falls into the same tool-misuse category as cars/knives/guns.
Why is the Finder the way it is? Is it actually easier to use than the standard file browsers on Windows and Linux (whatever they're called) if all you ever use is Macs?
Most of the other quirks I can work around (though the default alt-tab behavior not picking up windows of the same app is an insane default), but the Finder is just unusable.
As much as this saddens me, I think it's because most computer users these days never think about files. Everything we do on a day-to-day basis exists as database records, either in sqlite databases hidden away in application data directories, or in the databases behind a million SaaS products. Music is done in Apple Music, photos are managed in iPhoto, and so on and so forth.
In which way are other GUI “finder-equivalents” better? I’m not invested either way, but I’m quite curious. It would be a great biz opportunity to make an aftermarket replacement if there is a huge gap.
>Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"?
>How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need?
The Department of Defense in particular has a law on the books allowing them to force a company to sell them something. They are generally more than willing to pay a pretty penny for what they want, so it hardly ever needs to be used, but I'd be shocked if any country with a serious military didn't have similar laws.
So you're right when it comes to private citizens, but the DoD literally has a special carve-out on the books.
A lawsuit challenging it would have actually been insane from Anthropic, because they would have had to argue "we're not that special, you can just use someone else" in court.
A clearer example: what would you expect to happen if Intel and AMD said their chips can't be used in computers that are used in war?
but it's not a national emergency. it's not a time of war. and there is a difference between demanding to be a customer, and demanding that you change your products because they would like them to be a different way. that is actual conscription.
for many decades, the DoD has used a carrot to get what they want. this is a stick.
Hence banning AI contributions is meaningless: you literally only punish 'good' actors.