& I have thus far made a large portion of my living off of fixing bad code "later".
… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".
So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.
I think we'll see companies increasingly adopting the X approach: charged tiers for 'fewer' ads. With no actual guarantee as to the absolute quantity of ads, just 'fewer, relative to the people who aren't paying as much'. We're basically on a downward slope where not seeing ads is going to get steadily more and more expensive over time.
PyPI only supports 2FA for sign-in. 2FA is not a factor at all with publishing. To top it off, the PyPA's recommended solution, the half-assed trusted publishing, does nothing to prevent publishing compromised repos either.
No. I was one of the "lucky" ones forced to use 2FA from the beginning.
I also wrote the twine manpage (in Debian) because at the time there was no documentation at all on how to publish.
Basically you enable 2FA on your account, go on the website, generate a token, store it in a .txt file and use that for the rest of your life without having to use 2FA ever again.
I had originally thought you'd need 2FA for every upload, but that's not how it works.
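For reference, the token workflow described above boils down to a one-time `.pypirc` entry; this is the standard twine/PyPI convention, with the actual token value elided:

```ini
[distutils]
index-servers = pypi

[pypi]
; literal username "__token__" tells PyPI the password is an API token
username = __token__
password = pypi-<your-token-here>
```

After that, `twine upload dist/*` authenticates with the stored token and 2FA is never prompted for again.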
Then they have the trusted publisher thing (which doesn't and won't work with Codeberg) where they just upload whatever comes from GitHub's runners. Of course, if the developer's token.txt got compromised, there's a chance their private SSH key for pushing to GitHub got compromised as well, and the attackers can push something that will end up on PyPI anyway.
Remember that trusted publishing replaces GPG signatures, so the one thing that required unlocking the private key with a passphrase is no longer used.
python.org has also stopped signing their releases with GPG in favour of Sigstore, which is another third-party signing scheme somewhat similar to trusted publishing.
edit: They deny this but my suspicion is that eventually tokens won't be supported and trusted publishing will be the only way to publish on pypi, locking projects out of using codeberg and whatever other non-major forge they might wish to use.
A stolen PyPI token was used for the compromised litellm package. I wouldn't be surprised if tokens are decommissioned in the aftermath of these recent hijackings. That wouldn't prevent these attacks: as you mentioned, SSH keys were stolen (and a GitHub token in the case of litellm). It would be a way for the PyPA to brush off liability without securing anything.
I’ll set aside the technical inaccuracies in this comment to focus on the most important thing.
> Then they have the trusted publisher thing (which doesn't and won't work with codeberg) where they just upload whatever comes from github's runners.
There’s no particular reason it wouldn’t work; it’s just OIDC and Codeberg could easily stand up an IdP. If you’re willing to put the effort into making this happen, I’d be happy (as I’ve said before) to review any contributions towards this end.
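For context, the GitHub side of trusted publishing is just a workflow that is allowed to mint an OIDC token; the sketch below follows the documented PyPA pattern, with the environment name being a project-specific placeholder:

```yaml
# .github/workflows/publish.yml
name: Publish to PyPI
on:
  release:
    types: [published]

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: pypi        # optional GitHub environment, matched in the PyPI publisher config
    permissions:
      id-token: write        # required: lets the job request an OIDC identity token
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Nothing here is GitHub-specific in principle: any forge whose IdP issues OIDC tokens that PyPI agrees to trust could fill the same role.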
(The only thing that won’t work here is imputing malicious intent; that makes it seem like you have a score to settle rather than a genuine technical interest in the community.)
> The era of purposefully frustrating humans is over. The Chinese open source model running on the box under my desk can pass the Turing Test. When you call, e-mail, text, or show me an ad, you’ll never know if it’s me or my model seeing it.
But at some point, you're going to want to do something, like, e.g., buy something. Then you're right back to the problem in the opening quote:
> things take time, patience runs out, brand familiarity substitutes for diligence, and most people are willing to accept a bad price to avoid more clicks.
& we're already seeing AI used to do this. E.g., Amazon listings where product photos are AI generated. (… not that many product photos weren't "bad photoshop of product onto hot sexy model who is obviously not using our product" before … but now it's AI!) Whereas before someone would have had to spend a modicum of time badly using Photoshop, now AI can just churn out the same fraudulent result in a fraction of the time.
Now, if I have a problem with a product, instead of just calling a number, browsing a phone tree, getting put on hold, and finally having to struggle to get some human to understand the basic logistics of "I paid for X, I did not get X, I demand X or refund", I get to do all that but with the extra step of "forced engagement with an AI that is incapable of actually solving my problem". (This somehow still manages to apply even when the problem is seemingly trivial enough that I find myself thinking "… this actually should be something an AI can do" but inevitably, no, the AI is "sorry", it cannot do that.)
And besides, calls, emails, etc. are already handled without AI: I (and everyone I really care about) have either allowlisted all inbound comms or abandoned the medium altogether. Moreover, any communications medium is only useful so long as it is not infested with spam, and an unprotected one will eventually be destroyed by spam. At least until we grow laws for mediums like phone/email, maybe named things like "Do Not Call" or "CAN-SPAM", and those laws are enforced. But the GOP has no interest in enforcing any level of consumer protection, so here we are.
No, it wasn't just the display. My example elsewhere in this thread notes that every device running Zork I-III or any z-machine v3 compatible game is actually hosting both the interpreter and the game itself, from the Game Boy to a smartphone, a PC, an old PDA...
I am confused; did you ever actually email anyone about the vuln? The AI suggests emailing security emails multiple times, but as I'm reading the timeline, none of the points seem to suggest this was ever done, only that a blog post was made, shared on Reddit, and then indirectly, the relevant parties took action.
I've updated the timeline to clarify I did in fact email them. I’m not yet at the point of having Claude write my emails for me, in fact it was my first one sent since joining the company 10 months ago!
While I'd love to take your tack, unfortunately, I find that if I actually want the fix, I have to become their unpaid engineer.
Which is ridiculous, because at the same time my company is paying a separate support fee, large enough to literally employ a dedicated engineer for my company!
I will do the work for them (typically paid for by my employer) iff I can expect them to fix it.
Blackbox debugging is a PITA, which is part of why I prefer open source, but it is what it is... If something is broken, and I can get it fixed by putting in the time to write a good report, etc., and they actually fix the thing, then I'll do it.
But if they don't fix the stuff, I have no shortage of things to fix myself.
I mean, yes, I am bored. The emperor has no clothes.
I "tried" Claude the other day. It gave me 3 options for choosing, effectively, an API to call an AI. The first options were sort of off limits, b/c my company… while I think we have a Claude Pro Max Ultra+ XL Premium S account, it's Conway's Law'd. But, oh, I can give it a Vertex API key! "I can probably get one of those" — I thought to myself. The CLI even links you to docs, and … oh, the docs page is a 404. But the 404's prose misrepresents it as a 500.
Maybe Claude could take a bit of its own medicine before trying to peddle it on me?
We're on like our 8th? 9th? GitHub review bot. Absolutely none of them (Claude included) seems capable of writing an actual suggestion. Instead of "your code is trash, here's a patch", I get a long-form prose explanation of what I need to change, which I must then translate into code. That's if it's even correct. The most recent comment was attached to the wrong line number: "f-string that does not need to be an f-string on line 130" — this comment, mind you, the AI put on line 50. Line 130? "else:" — no f-strings in sight.
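To illustrate the kind of change the bot describes in prose rather than patching (an illustrative snippet, not the actual code under review):

```python
# An f-string with no placeholders is just a plain string with extra noise.
label = f"else branch reached"   # the kind of line such a bot flags
label = "else branch reached"    # the one-character fix it never suggests as a patch

# The two are identical at runtime, so the fix is purely mechanical:
assert f"else branch reached" == "else branch reached"
```

A mechanical, single-character diff like this is exactly what a review bot ought to attach as a suggested patch on the correct line.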