Hacker News | koolba's comments

So it reads the packets and replaces the byte sequences at the kernel level? How does that work across packet boundaries?

Secrets are detected before encryption, in the user buffer, but rewrites happen post-encryption, in the kernel buffer that goes out on the wire.

Packet boundaries are not an issue because detection happens at the SSL write, where we have the full secret in the buffer along with its position, so at rewrite time we know the secret crosses two packets and can rewrite it in two separate operations. We also have to update the TLS session hash at the end so as not to corrupt the TLS frame.
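For illustration, here is a minimal Python sketch (hypothetical function and variable names, not the project's actual code) of just the offset arithmetic described above: given the secret's absolute stream position recorded at SSL-write time, each packet rewrites only the slice of the secret that overlaps its payload. It masks plaintext bytes and deliberately ignores encryption and the TLS hash fixup.

```python
def split_rewrite(secret_offset, secret_len, packets):
    """Mask a secret that may span packet boundaries.

    `secret_offset` is the secret's absolute offset in the byte stream,
    recorded at detection time; `packets` is a list of
    (packet_start_offset, payload bytearray) pairs. Returns one rewrite
    op per packet the secret overlaps. Illustrative only.
    """
    secret_end = secret_offset + secret_len
    ops = []
    for pkt_start, payload in packets:
        pkt_end = pkt_start + len(payload)
        # Intersect [secret_offset, secret_end) with this packet's range.
        lo = max(secret_offset, pkt_start)
        hi = min(secret_end, pkt_end)
        if lo < hi:
            i, j = lo - pkt_start, hi - pkt_start
            payload[i:j] = b"*" * (j - i)
            ops.append((pkt_start, i, j))
    return ops
```

A secret at stream offset 17 with length 9 that straddles two packets comes back as two rewrite operations, one per packet, matching the "rewrite it in two separate operations" description.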


> GP is referring to Canal+ who'd play that one weekly porn movie on Saturday evening.

As an Anglophone it counts as taking a 1-credit foreign language class.


> … NJ diners because one saw the birth of Unicode

While it’s possible that Unicode was also conceived at a diner, you’re likely thinking of UTF-8. Unicode was from a decade earlier.

https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt


Yup! That's what I was thinking about. In fact I did read this right before posting (though I had found it at https://doc.cat-v.org/bell_labs/utf-8_history) but only to validate that it had been in a NJ diner, so I missed my confusion of UTF-8 with Unicode.

I would not make a good fact-checker :(


What’s wrong with express?

> Old server nginx converted to reverse proxy: We wrote a Python script that parsed every server {} block across all 34 Nginx site configs, backed up the originals, and replaced them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still hitting the old IP was silently forwarded. No user would see a disruption.

What was the config on the receiving side to support this? Did you whitelist the old server IP to trust the forwarding headers? Otherwise you’d get the old server IP in your app logs. Not a huge deal for an hour but if something went wrong it can get confusing.
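For a sense of what such a conversion script might look like, here is a hedged Python sketch (hypothetical upstream address and naive regex-based parsing; the original script's details aren't given, and this assumes flat, non-nested server blocks). It keeps the listen/server_name lines so the right vhost still matches, and swaps the rest of each block for a proxy_pass stanza that forwards the client IP via X-Forwarded-For:

```python
import re

# Hypothetical new server address; the real one isn't given in the post.
NEW_UPSTREAM = "http://203.0.113.10"

PROXY_BODY = (
    "    location / {\n"
    f"        proxy_pass {NEW_UPSTREAM};\n"
    "        proxy_set_header Host $host;\n"
    "        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n"
    "    }\n"
)

def convert(config_text):
    """Replace the body of each server {} block with a reverse-proxy stanza,
    keeping listen/server_name so the right vhost still answers."""
    def rewrite(match):
        kept = "\n".join(
            line for line in match.group(1).splitlines()
            if line.strip().startswith(("listen", "server_name"))
        )
        return "server {\n" + kept + "\n" + PROXY_BODY + "}"
    # Naive: assumes no nested braces inside the server block.
    return re.sub(r"server\s*\{(.*?)\}", rewrite, config_text, flags=re.S)
```

On the receiving side, the usual nginx answer to the question above is the real_ip module: `set_real_ip_from <old-server-ip>;` plus `real_ip_header X-Forwarded-For;`, so the app logs show the original client rather than the old server's IP.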


> i presume they wont let you “manage all your AI spend in one place” for free.

Of course they will. In return they get to control who they’re routing requests to. I wouldn’t be surprised if this turns into the LLM equivalent of “paying for order flow”.


I got shivers thinking about a future of AI dynamic pricing and an automatic gateway choosing the cheapest provider available.

Shivers? As in it frightens you? I believe there is no way around tokens being priced like gasoline at the gas station: the price changes every hour. Any other system means you are either over- or underspending.

Openrouter already does this, unless I've misunderstood the premise.

They can route between models, but you pay the standard rate for whichever model is selected (plus a 5% fee). AFAIK all current model providers have fixed prices per token, which don't vary depending on, say, demand or hardware availability.
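Taking the comment's numbers at face value (flat per-token provider prices plus a 5% router fee; the prices below are invented for illustration), "choosing the cheapest provider" reduces to a trivial cost comparison, precisely because nothing varies with demand:

```python
# Invented per-million-token prices; not real provider rates.
PRICES = {"model-a": 3.00, "model-b": 15.00}
ROUTER_FEE = 0.05  # the 5% markup mentioned above

def cost(model, tokens):
    """Dollar cost of `tokens` tokens through the router."""
    return tokens / 1_000_000 * PRICES[model] * (1 + ROUTER_FEE)

def cheapest(tokens):
    """With fixed prices, the cheapest choice never changes."""
    return min(PRICES, key=lambda m: cost(m, tokens))
```

Dynamic, gasoline-style pricing would make PRICES a function of time, at which point the gateway's choice could genuinely change hour to hour.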

And it's also completely meaningless as a credit rating, since creditworthiness in this context specifically means the ability to repay. And they can always print dollar bills to do so.

Now whether that $1 in 20 years will buy anything is an entirely different story.


Because that’s what has traditionally allowed Western countries to have wide availability and inventory of goods vs. communist economies.

But why does the availability have to be wide? Maybe those stores can do a few things, but do them well. Sell staple foods and healthy choices.

Because then people won’t come to your store. People buy where they can purchase the maximum of their shopping cart in a single place.

That is why you have loss-leader grocers, which pull people in with dramatic discounts on specific items while the total cart costs the same.


That's not how it used to work. That's still not how it works in my country. I buy my bread from a specialized shop, my cheese from another, and my fresh produce from yet another. I know people who only buy their meat from a butcher (I do it sometimes, but not always).

It really depends on the country's culture.

I understand your point of view. But in cities of all sizes, it's easier to not have to do that. For example in NYC, a medium-size city, you can easily go do your shopping in multiple places, and not at the same time.

Yes, and some people do that.

Some consumers go to specific stores to purchase specific qualities of brands.

But most do not, especially for convenience products. You get it where you can.


Counter point: China.

Economic viability isn't what led to "wide availability and inventory". No, it's imperialism. It's exploitation of the Global South. It's paying slave wages through subsidiaries in West Africa to cocoa farmers while making sure those countries stay poor, for example.

We also wage economic war on our anointed enemies like Cuba and then use the inevitable result of that economic warfare as a reason why our system is good.


> The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try and skip that step - which leads to an inevitable disaster

It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Yes, you could look them up or maybe even memorize them. But there’s no way you can make wholesale changes to a layout faster than a machine.

It lowers the cost for experimentation. A whole series of “what if this was…” can be answered with an implementation in minutes. Not a whole afternoon on one idea that you feel a sunk cost to keep.


> It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.


That's a bold assertion without any proof.

It also means you're so helpless as a developer that you could never debug another person's code; after all, how would you recognize errors you haven't made yourself?


imo a question is, do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?

> What if that process changes and the language you’re reading is a natural one instead of code?

Okay, when that happens, then sure, you don't need to understand the codebase.

I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.

When the situation changes, then we can ask if it is really that important to understand the code. Until that happens, you still need to understand the code.


The same logic applies to your statement:

> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

Okay, when that happens, then sure, you'll have a problem.

I have not seen any evidence that that is currently the case i.e. I have no problems correcting LLM output when needed.

When the situation changes, then we can talk about pulling back on LLM usage.

And the crucial point is: me.

I'm not saying that no one who uses LLMs to generate code will fall into "not being able to spot errors in LLM-generated code".

I now generate 90% of the code with LLM and I see no issues so far. Just implementing features faster. Fixing bugs faster.


> The same logic applies to your statement:

>> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

> Okay, when that happens, then sure, you'll have a problem.

It's not exactly the same: how will you know that you are missing errors due to lack of knowledge?

> I now generate 90% of the code with LLM and I see no issues so far.

Well, that's my point, innit? "I see no errors" is exactly the same outcome as "missing the errors that are generated".


You do have a point but as the sibling comment pointed out, the negative eventuality you are describing also has not happened for many devs.

I quite enjoy being much more of an architect than I could be for 90% of my career so far (24 years in total). I have coded my fingers and eyes out, and I spot idiocies in LLM output, from the trivially easy to those needing an hour of careful review.

So, I don't see the "soon" in your statement happening, ahem, anytime soon for me, and for many others.


> I have coded my fingers and eyes out, and I spot idiocies in LLM output, from the trivially easy to those needing an hour of careful review.

This is exactly the opposite experience of sibling, who reports not seeing any issues in the generated code.

You report spotting idiocies, he reports seeing nothing, and you are both making the same argument :-/


Not really. We are both reporting our own anecdotal evidence.

There are common threads though. LLMs do terribly in certain areas. They also do terribly when not supervised well.


What happens when your LLM of choice goes on an infinite loop failing to solve a problem?

What happens when your LLM provider goes down during an incident?

What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?

What happens when the LLM providers stop offering loss leader subscriptions?


AFAIK everything I use has timeouts, retries, and some way of throwing up its hands and turning things back to me.

I use several providers interchangeably.

I stay away from overly complex distributed systems and use the simplest thing possible.

I plan to wait for some guys in China to train a model on traces that I can run locally, benefitting from their national “diffusion” strategy and lack of access to bleeding-edge chips.

I’m not worried.


> What if that process changes and the language you’re reading is a natural one instead of code?

Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a historical problem in software engineering: the people asking for stuff use plain language, and there's a need to convert it to a formal spec, and this takes time and is error prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open eyed because fuck the lessons of software engineering.

I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.


> But it seems we are introducing a whole new layer of lossy interpretation to the whole mess (...)

I recommend you get acquainted with LLMs and code assistants, because a few of your assertions are outright wrong. Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring usecases, functional requirements, and refined tasks in the form of an actionable plan.

Then you feed that plan to a LLM assistant and your feature is implemented.

I seriously recommend you check it out. This process is far more structured and thought through than any feature work that your average SDE ever does.


> I recommend you get acquainted with LLMs and code assistants

I use them daily, thanks for your condescension.

> I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.

Did you read this part of my comment?

> Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring usecases, functional requirements, and refined tasks in the form of an actionable plan.

I'm not criticizing spec-driven development frameworks, but how battle-tested are they? Does it remove the inherent ambiguity in natural language? And do you believe this is how most people are vibe-coding, anyway?


> Did you read this part of my comment?

Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.

I repeat: LLM assistants have been used to walk users through software requirements specification processes that not only document exactly what usecases and functional requirements your project must adhere to, but also create tasks and implement them.

The deliverable is both a thorough documentation of all requirements considered up until that point and the actual features being delivered.

To drive the point home, even Microsoft of all companies provides this sort of framework. This isn't an arcane, obscure tool. This is as mainstream as it can be.

> I'm not criticizing spec-driven development frameworks, but how battle-tested are they?

I really recommend you get acquainted with this class of tools, because your question is in the "not even wrong" territory. Again, the purpose of these tools is to walk developers through a software requirements specification process. All these frameworks do is put together system prompts to help you write down exactly what you want to do, break it down into tasks, and then resume the regular plan+agent execution flow.

What do you think "battle tested" means in this topic? Check if writing requirements specifications is something worth pursuing?

I repeat: LLM assistants lower the cost of formal approaches to the software development lifecycle by orders of magnitude, to the point that you can drive each and every task with a formal SRS doc. This isn't theoretical; it's months-old stuff. The focus right now is to remove human intervention from the SRS process as well, with the help of agents.


> Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation. This is profoundly wrong, even if you use LLMs naively.

Most people, when told they sound condescending, try to reframe their argument in order to remove this and become more convincing.

Sadly, you chose to double down instead. Not worth pursuing.

> This isn't theoretical; it's months-old stuff

Hahaha! "Months old stuff"!

Disengaging from this conversation. Over and out.


This is not correct. CSS is the style rules for all rendering situations of that HTML, not just your single requirement that it "looks about right" in your narrow set of test cases.

Nobody writing production CSS for a serious web page can avoid rewriting it. Nobody is memorizing anything. It's deeply intertwined with the requirements as they change. You will eventually be forced to review every line of it carefully as each new test is added or when the HTML is changed. No AI is doing that level of testing or has the training data to provide those answers.

It sounds like you're better off not using a web page at all if this bothers you. This isn't a deficiency of CSS. It's the main feature. It's designed to provide tools that can cover all cases.

If you only have one rendering case, you want an image. If you want to skip the code, you can just not write code. Create a mockup of images and hand it off to your web devs.


Eh, I've written so much CSS and I hate it so much that I use AI to write it now, not because it's faster or better at it, but just so I don't have to do it.

So AI is good for CSS? That’s fine, I always hated CSS.

> But there’s no way you can make wholesale changes to a layout faster than a machine.

You lost me here. I can make changes very quickly once I understand both the problem and the solution I want to go with. Modifying text is quite easy. I spend very little time doing it as a developer.


> It lowers the cost for experimentation. A whole series of “what if this was…”

Anecdotal, but I've noticed that while this is true, it also adds the danger of not knowing when to stop.

Early on I would take forever trying to get something to exactly match what's in my head. Which meant I would spend more time in one sitting than I previously would have building it by hand.

Now I try to time box with the mindset "good enough".


> Made a second batch of cola syrup without caramel color. It’s much weirder to drink than I expected.

Indeed the 90s were an interesting time: https://youtu.be/2za2IK8FQoM


I wonder if we'd have the same reaction if cola had never been darkened. We wouldn't, right?

I thought of those. I remember drinking some. It tastes like cola but somehow different.

But then again, I liked New Coke. And that weird "OK Soda" that doesn't exist anymore.

https://en.wikipedia.org/wiki/OK_Soda

