Not necessarily. Would you respond the same if the previous person said, "Was this built using an IDE" or "What qualifications do you have to write this software"?
Shit code can be written with AI. Good code can also be written with AI. The question was only really asked to confirm biases.
As someone who has worked in projects with hundreds of seemingly trivial dependencies which still manage to produce a steady stream of security notices, "What qualifications do you have to write this software" seems like an entirely reasonable, far too seldom asked question to me.
I don't automatically dismiss AI-written code, but when it's obvious it was barely reviewed and sloppily committed, with broken links 404ing or files missing from git, then it is slop.
Using an LLM as a tool and guiding it with care is different from tossing it a one-sentence prompt to copy LocalStack and expecting the bot to rewrite it for you, then pushing a thousand files in one go with typos in half the commit messages.
Longevity of products comes from the effort and care put into them. If you barely invest enough of either to even look at the output, look at the graveyard of "Show HN" slop: just temporary projects that fade away quickly.
There are no code commits; the commits are all attempts to fix CI.
The release page (changelog) links only to invalid, wrong, or otherwise unrelated code changes.
It doesn't clearly state that it was AI-written, and it tries to hide the claude.md file.
The feature table was clearly not reviewed, e.g. "Native binary" = "Yes" while LocalStack's is "No". There is no "native" binary; it's a packaged JVM app, so LocalStack is just as "native". "Security updates: Yes" is entirely unproven.
I'll have a much harder time convincing my company to try out such a tool if it's AI slop than if there's a group of people behind it.
I'll happily use it for personal development stuff if I ever decide to try cloud stuff in my free time, but it's hardly an alternative to established projects like LocalStack for serious business needs.
Not that any of it should matter to the people behind this project of course, they can run and make it in whatever way they want. They stand to lose nothing if I can't convince my boss and they probably shouldn't care.
I pay $25 for my backup 5G internet, but unlike a mobile plan, it's actually unlimited at 300 Mbps, and I don't have to resort to TTL shenanigans and such to use it for my whole network. It's just plugged into one of the ports on my router and provides it with a real public IPv4. Ran it for a few days when the fiber dropped out and consumed 200GB without complaint from either myself or the ISP.
It's much safer to export a key one time and import it into a new machine, or store it in a secure backup, than to keep it just hanging out on disk for eternity, and potentially get scooped up by whatever malware happens to run on your machine.
Any malware capable of exfiltrating a file from your home folder is also capable of calling the export command and tricking you into providing biometrics.
As someone who actively maintains old RHEL, the development environment is something you can drag forward.
The biggest problem is fixing security flaws with patches that don't have 'simple' fixes. I imagine that they are going to have problems with accurately determining vulnerability in older code bases where code is similar, but not the same.
> I imagine that they are going to have problems with accurately determining vulnerability in older code bases where code is similar, but not the same.
If you can find the patches, it's fun to tweak them in the most conservative way possible to apply to the old code base.
However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer and no isolated patch. What do you do then? Rebase to get the alleged fix? You can't even tell if the vulnerability was present in the previous version.
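The "most conservative way possible" can be made mechanical: `patch -F0` disables fuzzy matching so a hunk only applies if its context matches exactly, and a `--dry-run` confirms the backport lands before touching the tree. A minimal sketch, where the file and the hunk are entirely made up for illustration:

```shell
# Hypothetical old tree: a single file that predates the upstream fix.
printf 'int check(int n) {\n    return n > 0;\n}\n' > util.c

# A made-up upstream fix, with its context tweaked to match the old code.
cat > fix.patch <<'EOF'
--- a/util.c
+++ b/util.c
@@ -1,3 +1,3 @@
 int check(int n) {
-    return n > 0;
+    return n >= 0;
 }
EOF

# -F0: fuzz factor zero, the most conservative setting; every context
# line must match exactly or the hunk is rejected.
patch -p1 --dry-run -F0 < fix.patch   # verify before modifying anything
patch -p1 -F0 < fix.patch
```

If the dry run rejects a hunk, you adjust the patch's context to the old code by hand rather than raising the fuzz factor, so nothing applies in the wrong place silently.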
> However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer
There are known exploited vulnerabilities without PoC? TIL and that doesn't sound fun at all indeed.
Distribution maintainers who do the backports do not necessarily have access to this kind of information. My impression is that open sharing of in-the-wild exploits isn't something that happens regularly anymore (if it ever did), but I'm very much out of the loop these days.
And access to a reproducer is merely a substitute for the lack of a public vulnerability-to-commit mapping for software that has a public version control repository.
It used to happen; I'd say less than 5% of all vuln reproducers are 'shared' at this point.
At last count I'd written close to 2000 reproducers, and approximately 400 of those were local privesc for product security.
Security teams are usually highly discouraged from sharing exploits/reproducers because they have leaked in the past. My Spectre/Meltdown reproducer ended up on the web and someone else took credit; sad.
The unfortunate problem is that the more popular software is, the more it gets looked at and its code worked on. But forked branches, as they age, become less and less likely to get that attention.
Imagine a piece of software that is on some LTS, but it's not that popular. Bash is going to be used extensively, but what about a library used by one package? And the package is used by 10k people worldwide?
Well, many of those people have moved on to a newer version of the distro. So now you're left with 18 people in the world using a 10-year-old LTS, and who finds the security vulnerabilities? The distro sure doesn't; distros typically just wait for CVEs.
And after a decade, the codebase has often diverged enough that vulnerability researchers looking at the newer code won't be much help for the older code. They're basically unique codebases at that point. Who's going through that unique codebase?
I'd say that a forked, LTS apache2 (just an example) on a 15-year-old LTS is likely used by 17 people and someone's dog. So one might ask: would you use security-sensitive software, say an HTTP server, if only 18 people in the world had looked at the codebase? Used it?
And are around to find CVEs?
This is a problem with any rarely used software. Fewer hands on means less chance of finding vulnerabilities, and a 15-year-old LTS means all of its software is rare.
And even though the software is rare, if an adversary finds that out, they can play to their heart's content looking for a vulnerability.
> I'd say that a forked, LTS apache2 (just an example) on a 15 year old LTS is likely used by 17 people and someone's dog.
Likewise, the number of black hats searching for vulnerabilities in these versions is probably zero, since there isn't a deployment base worth farming.
Unless you're facing a targeted attack, where an adversary is willing to go to huge expense to find fresh vulnerabilities specifically in the stack you're using, you're probably fine.
I agree with your sentiment that no known vulnerabilities doesn't mean no vulnerabilities, but my point is that the risk scales down with the deployment numbers as well.
And always keeping up with the newest thing can be more dangerous in this regard: new vulnerabilities are being introduced all the time, so your total exposure window could well be larger.
If no one is posting CVEs that affect these old Ubuntu versions, then Canonical doesn't have to fix them. I realize that's not your point, but it's almost certainly part of Canonical's business plan for setting the cost of this feature.
The Pro subscription isn't free, and clearly Canonical thinks there will be enough uptake on old versions to justify the engineering spend. The market will soon tell them if they're right; it will be interesting to watch. So far it seems clear they have enough Pro customers to think expanding it is profitable.
You typically need to maintain much newer C++ compilers because things from the browser world can only be maintained through periodic rebases. Chances are that you end up building a contemporary Rust toolchain as well, and possibly more.
(Lucky for you if you excluded anything close to browsers and GUIs from your LTS offering.)
The referenced Bluetooth bug on the Github readme seems like a pretty good reason. "We don't want to work around or deal with the bugs on other platforms" seems like a reasonable position.
This bug prevents injecting the magic handshake that enables all the features. It wouldn't be relevant if Apple didn't block those features in the first place.
Btw, it's not some magic feature set they spent years researching; sub-$60 Soundcores have most of them, if not all.
How was USB-C on Apple a "mess" when it came out on the iPhone 15? It supported all of the standard USB protocols: video, networking, mass storage, audio, etc.
Yeah, I don't think the connector criticism of Apple really stands up to any scrutiny. 30-pin was strictly better than USB-based solutions when it came out, as was Lightning. They supported both of those for a very long time and kept tons of iDevice accessories around the world functioning.
Yes because Apple has a monopoly on - Bluetooth headphones you can use with Android devices??
Do console makers have to make sure that their accessories work with other consoles? Do TV manufacturers have to ensure their remotes work with other TVs?
And no you never had to buy Apple branded or licensed charging cables.
Finance apps will soon automate that kind of thing, and most probably the phone app will have it before the web version.
There's resource rot, it seems: desktop/web get fewer resources than the phone, and it shows.
The HP printer/scanner app is way leaner than anything they've ever released on Windows (not saying much, but still). Same for my bank's app: it's a bit faster and better designed (features and UI).