
Does it matter?

It does to the person who asked the question.

Whether their concerns are driven by curiosity, ethics, philosophy, or something else entirely is really immaterial to the question itself.


Not necessarily. Would you respond the same way if the previous person had asked, "Was this built using an IDE" or "What qualifications do you have to write this software"?

Shit code can be written with AI. Good code can also be written with AI. The question was only really asked to confirm biases.


As someone who has worked on projects with hundreds of seemingly trivial dependencies which still manage to produce a steady stream of security notices, "What qualifications do you have to write this software" seems like an entirely reasonable, far too seldom asked question to me.

Sure, but that seems like quite a high gatekeeping bar for a test suite.

What test suite?

I don't automatically dismiss AI slop, but when it's obvious it was barely reviewed and sloppily committed, with broken links 404ing or files missing from git, then it is slop.

Using an LLM as a tool and guiding it with care is different from tossing a one-sentence prompt to copy LocalStack and expecting the bot to rewrite it for you, then pushing a thousand files in one go with typos in half the commit messages.

Longevity of products comes from the effort and care put into them. If you barely invest enough to even look at the output, you end up in the graveyard of "Show HN" slop: a temporary project that fades away quickly.

The commits are sloppy and careless and the commit messages are worthless and zero-effort (and often wrong): https://github.com/hectorvent/floci/commit/1ebaa6205c2e1aa9f...

There are no code commits. The commits are all trying to fix CI.

The release page (changelog) links to invalid, wrong, useless, or otherwise unrelated code changes.

Not clearly stating that it was AI-written, and trying to hide the claude.md file.

The feature table is clearly not reviewed: e.g. "Native binary" = "Yes" while LocalStack is "No". There is no "native" binary; it is a packed JVM app, so LocalStack is just as "native". And "Security updates: Yes" is entirely unproven.


I'll have a much harder time convincing my company to try out such a tool if it's AI slop than if there's a group of people behind it.

I'll happily use it for personal development stuff if I ever decide to try cloud stuff in my free time, but it's hardly an alternative to established projects like LocalStack for serious business needs.

Not that any of it should matter to the people behind this project, of course; they can build and run it in whatever way they want. They stand to lose nothing if I can't convince my boss, and they probably shouldn't care.


I pay $25 for my backup 5G internet - but unlike a mobile plan, it's actually unlimited at 300 Mbps, and I don't have to resort to TTL shenanigans and such to use it for my whole network. It's just plugged into one of the ports on my router, and provides it with a real public IPv4 address. Ran it for a few days when the fiber dropped out and consumed 200 GB without complaint from either myself or the ISP.

How long until “first principles” is a meme like “considered harmful”? Or are we there already?


"from first principles" has been a common phrase in science and philosophy for a long time: https://en.wikipedia.org/wiki/First_principle


Sure, but that's not the way it's being used by your daily twitter/X poster.


Ok, we've removed first principles from the title above.


It's good that MLT did cancel them, but there's still a ton up that way. Mill Creek, Lynnwood, Marysville, just for a few examples.


For me, s/FreeBSD/Debian/. Same reason.


Who is the "we" here? I've poked around a few pages on this fellow's site, and apparently haven't found the right one to answer that.


Where he works... or, perhaps more accurately, his co-workers. Which is the domain: the University of Toronto.


It's much safer to export a key one time and import it into a new machine, or store it in a secure backup, than to keep it just hanging out on disk for eternity, where it can potentially get scooped up by whatever malware happens to run on your machine.
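
As a rough sketch of that "export once, keep only an encrypted backup" idea (the thread doesn't name the tool involved, so Python's cryptography package stands in here): the private key only ever leaves the machine in passphrase-encrypted form.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()

    # Export once, encrypted under a passphrase, for offline backup;
    # the plaintext private key never sits on disk.
    backup = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"a long passphrase"),
    )

    # On the new machine, import from the backup and discard the file.
    restored = serialization.load_pem_private_key(backup, password=b"a long passphrase")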


Any malware capable of exfiltrating a file from your home folder is also capable of calling the export command and tricking you into providing biometrics.


Not necessarily; "read file" is very different from "execute command." The biometrics part is a substantial lift as well.


You're not adding new features or anything like that. Just patching security vulnerabilities in a forked branch.

Sure, you won't get the niceties of modern developments, but at least you have access to all of the source code and a working development environment.


As someone who actively maintains old RHEL, the development environment is something you can drag forward.

The biggest problem is fixing security flaws that don't have 'simple' patches. I imagine that they are going to have problems with accurately determining vulnerability in older code bases where code is similar, but not the same.


> I imagine that they are going to have problems with accurately determining vulnerability in older code bases where code is similar, but not the same.

That sounds like a fun job actually.


If you can find the patches, it's fun to tweak them in the most conservative way possible to apply to the old code base.
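
For illustration, a hypothetical "conservative" backport sketched in Python (the function and names are invented): suppose upstream fixed a path-traversal bug inside a refactored helper that the old branch doesn't have, so only the actual check is carried over, leaving everything around it untouched.

    import os

    # Old-branch function, unchanged except for the backported check.
    def read_attachment(base_dir, name):
        base = os.path.abspath(base_dir)
        path = os.path.abspath(os.path.join(base, name))
        # --- begin hunk carried over from the upstream fix ---
        if os.path.commonpath([base, path]) != base:
            raise ValueError("path escapes base directory")
        # --- end hunk; the surrounding upstream refactor is not pulled in ---
        with open(path, "rb") as f:
            return f.read()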

However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer and no isolated patch. What do you do then? Rebase to get the alleged fix? You can't even tell if the vulnerability was present in the previous version.


> However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer

There are known exploited vulnerabilities without a PoC? TIL, and that doesn't sound fun at all.


Distribution maintainers who do the backports do not necessarily have access to this kind of information. My impression is that open sharing of in-the-wild exploits isn't something that happens regularly anymore (if it ever did), but I'm very much out of the loop these days.

And access to the reproducer is merely a replacement for the lack of a public vulnerability-to-commit mapping for software that has a public version control repository.


It used to happen; I'd say probably less than 5% of the total vuln reproducers are 'shared' at this point.

At last count I'd written close to 2000 reproducers, and approx. 400 of those were local privesc for product security.

Security teams are usually highly discouraged from sharing exploits/reproducers, as they have leaked in the past. My Spectre/Meltdown one ended up on the web and someone else took credit. Sad.


This guy backports.


The unfortunate problem is that the more popular software is, the more it gets looked at and its code worked on. But forked branches, as they age, become less and less likely to get any attention.

Imagine a piece of software that is on some LTS, but it's not that popular. Bash is going to be used extensively, but what about a library used by one package? And that package is used by 10k people worldwide?

Well, many of those people have moved on to a newer version of the distro. So now you're left with 18 people in the world using a 10-year-old LTS, so who finds the security vulnerabilities? The distro sure doesn't; distros typically just wait for CVEs.

And after a decade, the codebase has often diverged enough that vulnerability researchers looking at newer code won't be helpful for older code. They're basically unique codebases at that point. Who's going through that unique codebase?

I'd say that a forked, LTS apache2 (just an example) on a 15-year-old LTS is likely used by 17 people and someone's dog. So one might ask: would you use software that is a security concern, say an HTTP server or whatnot, if only 18 people in the world looked at the codebase? Used it?

And are around to find CVEs?

This is a problem with any rarely used software. Fewer hands on means less chance of finding vulnerabilities. A 15-year-old LTS means all of its software is rare.

And even though the software is rare, if an adversary finds out it is so, they can then play to their heart's content, looking for a vulnerability.


> I'd say that a forked, LTS apache2 (just an example) on a 15-year-old LTS is likely used by 17 people and someone's dog.

Likewise, the number of black hats searching for vulnerabilities in these versions is probably zero, since there isn't a deployment base worth farming.

Unless you're facing a targeted attack, where an adversary is going to go to huge expense to find fresh vulnerabilities specifically in the stack you're using, you're probably fine.

I agree with your sentiment that no known vulnerabilities doesn't mean no vulnerabilities, but my point is that the risk scales down with the deployment numbers as well.

And always keeping up with the newest thing can be more dangerous in this regard: new vulnerabilities are being introduced all the time, so your total exposure window could well be larger.


If no one is posting CVEs that affect these old Ubuntu versions, then Canonical doesn’t have to fix them. I realize that’s not your point, but it almost certainly is a part of Canonical’s business plan for setting the cost of this feature.

The Pro subscription isn’t free and clearly Canonical think they will have enough uptake on old versions to justify the engineering spend. The market will tell them if they’re right soon. It will be interesting to watch. So far it seems clear they have enough Pro customers to think expanding it is profitable.


You typically need to maintain much newer C++ compilers because things from the browser world can only be maintained through periodic rebases. Chances are that you end up building a contemporary Rust toolchain as well, and possibly more.

(Lucky for you if you excluded anything close to browsers and GUIs from your LTS offering.)


The Bluetooth bug referenced in the GitHub README seems like a pretty good reason. "We don't want to work around or deal with the bugs on other platforms" seems like a reasonable position.


This bug prevents injecting the magic handshake that enables all the features. It wouldn't be relevant if Apple didn't block these features in the first place.

Btw, it's not some magic feature set they spent years researching. Sub-$60 Soundcores have most of them, if not all.


These are anti-competitive measures, and it's not like it's the only thing they do like this. The charging cables were the same.


If this is about lightning, what connector do you think apple should have used? USB-C came out long after lightning.


> USB-C [Aug 2014] came out long after lightning [Sep 2012].

1 year 11 months :)

The old 30-pin connector before Lightning came from 2003.

Meanwhile it took until 2023 for iPhones to use USB-C.

    30-pin     2003 - 2012 (2014)
    Lightning  2012 - 2023 (2025)
    USB-C AAPL utter mess -
               iPhone 2023 -
    USB-C RoW  2014 -


How was USB-C on Apple a “mess” when it came out on the iPhone 15? It supported all of the standard USB protocols - video, networking, mass storage, audio, etc.


The transition to USB-C was spread out across product lines over many years, hence "mess". The iPhone is on a separate line in the table.


Yeah, I don't think the connector criticism of Apple really stands up to any scrutiny. 30-pin was strictly better than USB-based solutions when it came out, as was Lightning. They supported both of those for a very long time and kept tons of iDevice accessories around the world functioning.


Yes, because Apple has a monopoly on... Bluetooth headphones you can use with Android devices??

Do console makers have to make sure that their accessories work with other consoles? Do TV manufacturers have to ensure their remotes work with other TVs?

And no you never had to buy Apple branded or licensed charging cables.


The camera on the back of the phone actually helps quite a bit with said data entry.


I normally ingest CSV files exported from my bank - then have to manually tag and relate them (like internal transfers).

I have a bunch of scripts to help, and wrote a custom web scraper to pull the data, automating much of this, but much is still quite manual.
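
The scripts in question aren't shown here, but as a rough sketch of the idea, with invented column names ("date", "amount", "description") since every bank's CSV export differs: rule-based tagging, plus pairing equal-and-opposite amounts a few days apart as internal transfers.

    import csv
    from datetime import date, timedelta

    # Hypothetical substring -> tag rules.
    RULES = {"GROCERY": "food", "PAYROLL": "income"}

    def load(path):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        for r in rows:
            r["amount"] = float(r["amount"])
            r["tag"] = next((tag for needle, tag in RULES.items()
                             if needle in r["description"].upper()), "untagged")
        return rows

    def match_transfers(rows, window=timedelta(days=2)):
        # Pair equal-and-opposite amounts close in time as internal transfers.
        for i, a in enumerate(rows):
            for b in rows[i + 1:]:
                close = abs(date.fromisoformat(a["date"])
                            - date.fromisoformat(b["date"])) <= window
                if close and a["amount"] == -b["amount"] and b["tag"] != "internal transfer":
                    a["tag"] = b["tag"] = "internal transfer"
                    break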


Finance apps will soon automate that kind of thing, and most probably the phone app will get it before the web version.

There's resource rot, it seems; desktop/web get less attention than the phone, and it shows.

The HP printer/scanner app is way leaner than anything they've ever released on Windows (not saying much, but still). Same for my bank's app: it's a bit faster and better designed (features and UI).

