> MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code.

My agent writes its own glue code, so the benefit does not seem to really exist in practice. Definitely not for coding agents, and increasingly less for non-coding agents too. Give it a file system and bash in a sandbox and you have a capable system. Give it some skills and it will write itself whatever is needed to connect to an API.

Every time I think I have a use case for MCP I discover that when I ask the agent to just write its own skill it works better, particularly because the agent can fix it up itself.


The skill/CLI argument misses what MCP enables for interactive workflows. Sure, Claude can shell out to psql. But MCP lets you build approval gates, audit logs, and multi-step transactions that pause for human input.

Claude Code's --permission-prompt-tool flag is a good example. You point it at an MCP server, and every permission request goes through that server instead of a local prompt. The server can do whatever: post to Slack, require 2FA, log to an audit trail. Instead of "allow all DB writes" or "deny all," the agent requests approval for each mutation with context about what it's trying to do.
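
A minimal sketch of such a gate, using the official Python MCP SDK (the policy here is toy logic, and the {"behavior": ...} response contract is from memory of the docs, so double check it against a current release):

    # Toy permission gate for Claude Code's --permission-prompt-tool.
    import json
    import logging

    from mcp.server.fastmcp import FastMCP

    logging.basicConfig(filename="audit.log", level=logging.INFO)
    mcp = FastMCP("permission-gate")

    MUTATING = ("INSERT", "UPDATE", "DELETE", "DROP")

    @mcp.tool()
    def approve(tool_name: str, tool_input: dict) -> str:
        """Called before each gated tool use; returns allow/deny as JSON."""
        logging.info("request: %s %s", tool_name, json.dumps(tool_input))
        # Example policy: wave reads through, deny raw mutations.
        if any(kw in json.dumps(tool_input).upper() for kw in MUTATING):
            return json.dumps({"behavior": "deny",
                               "message": "mutations need human sign-off"})
        return json.dumps({"behavior": "allow", "updatedInput": tool_input})

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default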

MCP is overkill for "read a file" but valuable when you need the agent to ask permission, report progress, or hand off to another system mid-task.


You end up wasting tokens on implementation, debugging, execution, and parsing when you could just use the tool (only the tool description gets used instead).

Also, once you give it this general access, it opens up essentially infinite directions for the model to go in. Repeatability and testing become very difficult in that situation. One time it may write a bash script to solve the problem. The next, it may want to use Python and pip install a few libraries to solve that same problem. Yes, both are valid, but if you desire a particular flow, you need to create a prompt for it that you hope it will comply with. It's about shifting certain decisions away from the model so that it has more room for the stuff you need it to do while ensuring that performance is somewhat consistent.

For now, managing the context window still matters, even if you don't care about efficient token usage. So burning 5-10% on rewriting the same API calls makes the model dumber.


> You end up wasting tokens on implementation, debugging, execution, and parsing when you could just use the tool (only the tool description gets used instead).

The tokens are not wasted, because I rewind to before it started building the tool. That it can build and manipulate its own tools is, to me, the benefit, not the downside. The internal work to manipulate the tools does not waste any context because it's a side adventure that does not affect my context.


Maybe I'm not understanding the scenario well. I'm imagining an autonomous agent as a sort of baseline. Are you saying the agent says "I need to write a tool", it takes a snapshot, and once it's done, it rewinds to the snapshot but this time, it has the tool it desired? That's actually a really cool idea to do autonomously!

If you mean manually, that's still interesting, but that kind of feels like the same thing to me. The idea is - don't let the agent burn context writing tools, it should just use them. Isn't that exactly what yours is doing? Instead of rewinding to a snapshot, I have a separate code base for it. As tools get more complex, it seems nice to have them well-tested with standardized input and output. Generating tools on the fly, rewinding, and then using the tools is just the same thing. You even would need to provide some context that says what the tool is and how to use it, which is basically what the MCP server is doing.


> Are you saying the agent says "I need to write a tool", it takes a snapshot, and once it's done, it rewinds to the snapshot but this time, it has the tool it desired? That's actually a really cool idea to do autonomously!

I'm basically saying this, except I don't currently give the agent a tool to do it automatically because it's not really RL'ed to that extent. So I use the branching and compaction functionality of my harness manually when it should do that.

> If you mean manually, that's still interesting, but that kind of feels like the same thing to me.

It's similar, but it retains the context and feels very natural. There are many ways to skin the cat :)


I think the path to dependency on closed publishers was opened wide with the introduction of both attestations and trusted publishing. People now have assigned extra qualities to such releases, and it pushes the ecosystem towards more dependency on closed CI systems such as GitHub and GitLab.

The intention was good, but I don't think the ramifications are great.


> People now have assigned extra qualities to such releases, and it pushes the ecosystem towards more dependency on closed CI systems such as GitHub and GitLab.

I think this is unfortunately true, but it's also a tale as old as time. I think PyPI did a good job of documenting why you shouldn't treat attestations as evidence of security modulo independent trust in an identity[1], but the temptation to verify a signature and call it a day is great for a lot of people.

Still, I don't know what a better solution is -- I think there's general agreement that packaging ecosystems should have some cryptographically sound way for responsible parties to correlate identities to their packages, and that previous techniques don't have a great track record.

(Something that's noteworthy is that PyPI's implementation of attestations uses CI/CD identities because it's easy, but that's not a fundamental limitation: it could also allow email identities with a bit more work. I'd love to see more experimentation in that direction, given that it lifts the dependency on CI/CD platforms.)

[1]: https://docs.pypi.org/attestations/security-model/


> The intention was good, but I don't think the ramifications are great.

as always, the road to hell is paved with good intentions

the term "Trusted Publishing" implies everyone else is untrusted

quite why anyone would consider Microsoft trustworthy, or competent at operating critical systems, I don't know

https://firewalltimes.com/microsoft-data-breach-timeline/


> the term "Trusted Publishing" implies everyone else is untrusted

No, it just means that you're explicitly trusting a specific party to publish for you. This is exactly the same as you'd normally do implicitly by handing a CI/CD system a long-lived API token, except without the long-lived API token.
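
Concretely, the exchange inside a CI job is just a token swap; roughly (the mint-token endpoint path and payload shapes are from memory of how pypa/gh-action-pypi-publish works, so treat them as illustrative):

    # Sketch of the Trusted Publishing exchange from inside GitHub Actions.
    import os
    import requests

    # 1. The CI platform hands the job a signed OIDC token on request.
    resp = requests.get(
        os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"] + "&audience=pypi",
        headers={"Authorization": "bearer " + os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]},
    )
    oidc_token = resp.json()["value"]

    # 2. Trade it at the index for a short-lived, scoped upload token.
    minted = requests.post(
        "https://pypi.org/_/oidc/mint-token",  # path from memory
        json={"token": oidc_token},
    ).json()["token"]

    # 3. Upload with it as usual (twine etc.); no stored secret anywhere.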

(The technique also has nothing to do with Microsoft, and everything to do with the fact that GitHub Actions is the de facto majority user demographic that needs targeting whenever doing anything for large OSS ecosystems. If GitHub Actions was owned by McDonalds instead, nothing would be any different.)


> This is exactly the same as you'd normally do implicitly by handing a CI/CD system a long-lived API token, except without the long-lived API token.

The other difference is being subjected to a whitelisting approach. That wasn't previously the case.

It's frustrating that seemingly every time better authentication schemes get introduced they come with functionality for client and third party service attestation baked in. All we ever really needed was a standardized way to limit the scope of a given credential coupled with a standardized challenge format to prove possession of a private key.


> The other difference is being subjected to a whitelisting approach. That wasn't previously the case.

You are not being subjected to one. Again: you can always use an API token with PyPI, even on a CI/CD platform that PyPI knows how to do Trusted Publishing against. It's purely optional.

> All we ever really needed was a standardized way to limit the scope of a given credential coupled with a standardized challenge format to prove possession of a private key.

That is what OIDC is. Well, not for a private key, but for a set of claims that constitute a machine identity, which the relying party can then do whatever it wants with.
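
For a sense of what that claim set looks like, a GitHub Actions OIDC token decodes to roughly this (illustrative values; the field names are from GitHub's OIDC documentation):

    # Decoded payload of a GitHub Actions OIDC token (values invented).
    # A relying party like PyPI matches these claims against the
    # Trusted Publisher configuration registered for a project.
    claims = {
        "iss": "https://token.actions.githubusercontent.com",
        "aud": "pypi",
        "sub": "repo:example-org/example-pkg:ref:refs/tags/v1.2.3",
        "repository": "example-org/example-pkg",
        "repository_owner": "example-org",
        "workflow_ref": "example-org/example-pkg/.github/workflows/release.yml@refs/tags/v1.2.3",
        "ref": "refs/tags/v1.2.3",
    }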

But standards and interoperability don't mean that any given service will just choose to federate with every other service out there. Federation always has up-front and long-term costs that need to be balanced with actual delivered impact/value; for a single user on their own server, the actual value of OIDC federation versus an API token is nil.


Right, I meant that the new scheme is subject to a whitelist. I didn't mean to imply that you can't use the old scheme anymore.

> Federation always has up-front and long-term costs

Not particularly? For example there's no particular cost if I accept email from outlook today but reverse that decision and ban it tomorrow. I don't immediately see a technical reason to avoid a default accept policy here.

> for a single user on their own server, the actual value of OIDC federation versus an API token is nil.

The value is that you can do away with long lived tokens that are prone to theft. You can MFA with your (self hosted) OIDC service and things should be that much more secure. Of course your (single user) OIDC service could get pwned but that's no different than any other account compromise.

I guess there's some nonzero risk that a bunch of users all decide to use the same insecure OIDC service. But you might as well worry that a bunch of them all decide to use an insecure password manager.

> Well, not for a private key, but for a set of claims that constitute a machine identity

What's the difference between "set of claims" and "private key" here?

That last paragraph in GP was more a tangential rant than directly on topic, BTW. I realize that OIDC makes sense here. The issue is that as an end user I have more flexibility and ease of use with my SSH keys than I do with something like a self hosted OIDC service. I can store my SSH keys on a hardware token, or store them on my computer blinded so that I need a hardware token or TPM to unlock them, or lots of other options. The service I'm connecting to doesn't need to know anything about my workflow. Whereas with a self hosted OIDC service, managing and securing it becomes an entire thing, on top of which many services arbitrarily dictate "thou shalt not self host".

It's a general trend that as new authentication schemes have been introduced, they have included undesirable features from the perspective of user freedom. Adding insult to injury, those unnecessary features tend to increase the complexity of the specification. In contrast, it's interesting to think how things might work if what we had instead was a single widely accepted challenge scheme such as SSH has. You could implement all manner of services such as OIDC on top of such a primitive, while end users would retain the ability to directly use the equivalent of an SSH key.


> Not particularly? For example there's no particular cost if I accept email from outlook today but reverse that decision and ban it tomorrow. I don't immediately see a technical reason to avoid a default accept policy here.

Accepting email isn't really the same thing. I've linked some resources elsewhere in this thread that explain why OIDC federation isn't trivial in the context of machine identities.

> The value is that you can do away with long lived tokens that are prone to theft. You can MFA with your (self hosted) OIDC service and things should be that much more secure. Of course your (single user) OIDC service could get pwned but that's no different than any other account compromise.

You can already do this by self-attenuating your PyPI API token, since it's a Macaroon. We designed PyPI's API tokens with exactly this in mind.

(This isn't documented particularly well, since nobody has clearly articulated a threat model in which a single user runs their own entire attenuation service only to restrict a single or small subset of credentials that they already have access to. But you could do it, I guess.)
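
If you did want to, the mechanics look roughly like this with pymacaroons (the caveat payload below is a placeholder; PyPI's real caveat encoding is handled for you by the pypitoken library):

    # Sketch of self-attenuating a PyPI API token, which is a Macaroon
    # behind a "pypi-" prefix. The caveat string here is a placeholder,
    # not PyPI's actual caveat format.
    from pymacaroons import Macaroon

    raw = "pypi-AgEIcHlwaS5vcmc..."  # long-lived token (truncated)
    m = Macaroon.deserialize(raw.removeprefix("pypi-"))

    # Holders can append caveats but never remove them, so the result
    # is strictly less powerful than the original token.
    m.add_first_party_caveat('{"projects": ["my-package"]}')  # placeholder

    attenuated = "pypi-" + m.serialize()  # hand this to CI instead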

> What's the difference between "set of claims" and "private key" here?

A private key is a cryptographic object; a "set of claims" is (very literally) a JSON object that was signed over as the payload of a JWT. You can't sign (or encrypt, or whatever) with a set of claims naively; it's just data.


Thank you again for taking the time to walk through this stuff in detail. I think what happened (is happening) with this stuff is a slight communication issue. Some of us (such as myself) are quite jaded at this point when we see a "new and improved" solution with "increased security" that appears to even maybe impinge on user freedoms.

I was unaware that macaroons could be used like that. That's really neat and that capability clears up an apparent point of confusion on my part.

Upon reflection, it makes sense that preventing self hosting would be a desirable feature of attested publishing. That way the developer, builder, and distributor are all independent entities. In that case the registry explicitly vetting CI/CD pipelines is a feature, not a bug.

The odd one out is trusted publishing. I had taken it to be an eventual replacement for API tokens (consider the npm situation for why I might have thought this) thus the restrictions on federation seemed like a problem. However if it's simply a temporary middle ground along the path to attested publishing and there's a separate mechanism for restricting self managed API tokens then the overall situation has a much better appearance (at least to my eye).


I mean, if it meant the infrastructure operated under a franchising model with distributed admin like McD, it would look quite different!

There is more than one way to interpret the term "trusted". The average dev will probably take away different implications than someone with your expertise and context.

I don't believe this double meaning is an unfortunate coincidence but part of clever marketing. A semantic or ideological sleight of hand, if you will.

In the same category: "Trusted Computing", "Zero trust" and "Passkeys are phishing-resistant"


> I don't believe this double meaning is an unfortunate coincidence but part of clever marketing. A semantic or ideological sleight of hand, if you will.

I can tell you with absolute certainty that it really is just unfortunate. We just couldn’t come up with a better short name for it at the time; it was going to be either “Trusted Publishing” or “OIDC publishing,” and we determined that the latter would be too confusing to people who don’t know (and don’t care to know) what OIDC is.

There’s nothing nefarious about it, just the assumption that people would understand “trusted” to mean “you’re putting trust in this,” not “you have to use $vendor.” Clearly that assumption was not well founded.


Maybe signed publishing or verified publishing would have been better terms?


It’s neither signed nor verified, though. There’s a signature involved, but that signature is over a JWT, not over the package.

(There’s an overlaid thing called “attestations” on PyPI, which is a form of signing. But Trusted Publishing itself isn’t signing.)


Re signed - that is a fair point, although it raises the question, why is the distributed artifact not cryptographically authenticated?

Maybe I'm misunderstanding but I thought the whole point of the exercise was to avoid token compromise. Framed another way that means the goal is authentication of the CI/CD pipeline itself, right? Wouldn't signing a fingerprint be the default solution for that?

Unless there's some reason to hide the build source from downstream users of the package?

Re verified, doesn't this qualify as verifying that the source of the artifact is the expected CI/CD pipeline? I suppose "authenticated publishing" could also work for the same reason.


> why is the distributed artifact not cryptographically authenticated?

With what key? That’s the layer that “attestations” add on top, but with Trusted Publishing there’s no user/package-associated signature.

> Maybe I'm misunderstanding but I thought the whole point of the exercise was to avoid token compromise. Framed another way that means the goal is authentication of the CI/CD pipeline itself, right? Wouldn't signing a fingerprint be the default solution for that?

Yes, the goal is to authenticate the CI/CD pipeline (what we’d call a “machine identity”). And there is a signature involved, but it only verifies the identity of the pipeline, not the package being uploaded by that pipeline. That’s why we layer attestations on top.

(The reasons for this are unfortunately nuanced but ultimately boil down to it being hard to directly sign arbitrary inputs with just OIDC in a meaningful way. I have some slides from talks I gave in the past that might help clarify Trusted Publishing, the relationship with signatures/attestations, etc.[1][2])

> I suppose "authenticated publishing" could also work for the same reason.

I think this would imply that normal API token publishing is somehow not authenticated, which would be really confusing as well. It’s really not easy to come up with a name that doesn’t have some amount of overlap with existing concepts, unfortunately.

[1]: https://yossarian.net/res/pub/packagingcon-2023.pdf

[2]: https://yossarian.net/res/pub/scored-2023.pdf


> imply that normal API token publishing is somehow not authenticated

Fair enough, although the same reasoning would imply that API token publishing isn't trusted ... well after the recent npm attacks I suppose it might not be at that.

> With what key?

> And there is a signature involved,

So there's already a key involved. I realize its lifetime might not be suitable but presumably the pipeline itself either already possesses or could generate a long lived key to be registered with the central service.

> but it only verifies the identity of the pipeline,

I thought verifying the identity of the pipeline was the entire point? The pipeline signing a fingerprint of the package would enable anyone to verify the provenance of the complete contents (either they'd need a way to look up the key or you could do TOFU, but I digress). There's value in being able to verify the integrity of the artifacts in your local cache.

Also, the more independent layers of authentication there are the fewer options an attacker will have. A hypothetical artifact that carried signatures from the developer, the pipeline, and the registry would have a very clear chain of custody.

> it being hard to directly sign arbitrary inputs with just OIDC in a meaningful way

At the end of the day you just need to somehow end up in a situation where the pipeline holds a key that has been authenticated by the package registry. From that point on I'd think that the particular signature scheme would become a trivial implementation detail; you stuff the output into some json or something similar and get on with life.

Has some key complexity gone over my head here?

BTW please don't take this the wrong way. It's not my intent to imply that I know better. As long as the process works it isn't my intent to critique it. I was just honestly surprised to learn that the package content itself isn't signed by the pipeline to prove provenance for downstream consumers and from there I'm just responding to the reasoning you gave. But if the current process does what it set out to do then I've no grounds to object.


> So there's already a key involved. I realize its lifetime might not be suitable but presumably the pipeline itself either already possesses or could generate a long lived key to be registered with the central service.

The key involved is the OIDC IdP's key, which isn't controlled by the maintainer of the project. I think it would be pretty risky to allow this key to directly sign for packages, because this would imply that any party that can use that key for signing can sign for any package. This would mean that any GitHub Actions workflow anywhere would be one signing bug away from impersonating signatures for every PyPI project, which would be exceedingly not good. It would also make the insider risk from a compromised CI/CD provider much larger.

(Again, I really recommend taking a look at the talks I linked. Both Trusted Publishing and attestations were multi-year projects that involved multiple companies, cryptographers, and implementation engineers, and most of your - very reasonable! - questions came up for us as well while designing and planning this work.)

> I thought verifying the identity of the pipeline was the entire point? The pipeline signing a fingerprint of the package would enable anyone to verify the provenance of the complete contents (either they'd need a way to look up the key or you could do TOFU, but I digress). There's value in being able to verify the integrity of the artifacts in your local cache.

There are two things here:

1. Trusted Publishing provides a verifiable link between a CI/CD provider (the "machine identity") and a packaging index. This verifiable link is used to issue short-lived, self-scoping credentials. Under the hood, Trusted Publishing relies on a signature from the CI/CD provider (which is an OIDC IdP) to verify that link, but that signature is only over a set of claims about the machine identity, not the package identity.

2. Attestations are a separate digital signing scheme that can use a machine identity. In PyPI's case, we bootstrap trust in a given machine identity by seeing if a project is already enrolled against a Trusted Publisher that matches that identity. But other packaging ecosystems may do other things; I don't know how NPM's attestations work, for example. This digital signing scheme uses a different key, one that's short-lived and isn't managed by the IdP, so that signing events can be made transparent (in the "transparency log" sense) and are associated more meaningfully with the machine identity, not the IdP that originally asserted the machine identity.

> At the end of the day you just need to somehow end up in a situation where the pipeline holds a key that has been authenticated by the package registry. From that point on I'd think that the particular signature scheme would become a trivial implementation detail; you stuff the output into some json or something similar and get on with life.

Yep, this is what attestations do. But a key piece of nuance: the pipeline doesn't "hold" a key per se; it generates a new short-lived key on each run and binds that key to the verified identity sourced from the IdP. This achieves the best of both worlds: users don't need to maintain a long-lived key, and the IdP itself is only trusted as an identity source (and is made auditable for issuance behavior via transparency logging). The end result is that clients that verify attestations don't verify using a specific key; they verify using an identity, and ensure that any particular key matches that identity as chained through an X.509 CA. That entire process is called Sigstore[1].
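
To make the verification side concrete, it looks roughly like this with sigstore-python (a sketch from memory; the exact API names may have drifted between versions, so check the current docs):

    # Rough shape of verifying a signed bundle against an identity.
    from pathlib import Path

    from sigstore.models import Bundle
    from sigstore.verify import Verifier
    from sigstore.verify.policy import Identity

    verifier = Verifier.production()
    bundle = Bundle.from_json(Path("pkg-1.0.whl.sigstore").read_bytes())

    # You verify against an *identity* (workflow + IdP), not a fixed key;
    # the short-lived key is chained to that identity via an X.509 CA.
    verifier.verify_artifact(
        Path("pkg-1.0.whl").read_bytes(),
        bundle,
        Identity(
            identity="https://github.com/org/pkg/.github/workflows/release.yml@refs/tags/v1.0",
            issuer="https://token.actions.githubusercontent.com",
        ),
    )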

And no offense taken, these are good questions. It's a very complicated system!

[1]: https://www.sigstore.dev


> I think it would be pretty risky to allow this key to directly sign for packages, because this would imply that any party that can use that key for signing can sign for any package.

There must be some misunderstanding. For trusted publishing a short lived API token is issued that can be used to upload the finished product. You could instead imagine negotiating a key (ephemeral or otherwise) and then verifying the signature on upload.

Obviously the signing key can't be shared between projects any more than the API token is. I think I see where the misunderstanding arose now. Because I said "just verify the pipeline identity" and you interpreted that as "let end users get things signed by a single global provider key" or something to that effect, right?

The only difference I had intended to communicate was the ability of the downstream consumer to verify the same claim (via signature) that the registry currently verifies via token. But it sounds like that's more or less what attestation is? (Hopefully I understood correctly.) But that leaves me wondering why Trusted Publishing exists at all. By the time you've done the OIDC dance why not just sign the package fingerprint and be done with it? ("We didn't feel like it" is of course a perfectly valid answer here. I'm just curious.)

I did see that attestation has some other stuff about sigstore and countersignatures, etc. I'm not saying that additional stuff is bad, I'm asking if Trusted Publishing wouldn't be improved by offering a signature so that downstream could verify for itself. Was there some technical blocker to doing that?

> the IdP itself is only trusted as an identity source

"Only"? Doesn't being an identity source mean it can do pretty much anything if it goes rogue? (We "only" trust AD as an identity source.)


> There must be some misunderstanding. For trusted publishing a short lived API token is issued that can be used to upload the finished product. You could instead imagine negotiating a key (ephemeral or otherwise) and then verifying the signature on upload.

From what authority? Where does that key come from, and why would a verifying party have any reason to trust it?

(I'm not trying to be tendentious, so sorry if it comes across that way. But I think you're asking good questions that lead to the design that we arrived at with attestations.)

> I did see that attestation has some other stuff about sigstore and countersignatures, etc. I'm not saying that additional stuff is bad, I'm asking if Trusted Publishing wouldn't be improved by offering a signature so that downstream could verify for itself. Was there some technical blocker to doing that?

The technical blocker is that there's no obvious way to create a user-originated key that's verifiably associated with a machine identity, as originally verified from the IdP's OIDC credential. You could do something like mash a digest into the audience claim, but this wouldn't be very auditable in practice (since there's no easy way to shoehorn transparency atop that). But some people have done some interesting exploration in that space with OpenPubKey[1], and maybe future changes to OIDC will make something like that more tractable.

> "Only"? Doesn't being an identity source mean it can do pretty much anything if it goes rogue? (We "only" trust AD as an identity source.)

Yes, but that's why PyPI (and everyone else who uses Sigstore) mediates its use of OIDC IdPs through a transparency logging mechanism. This is in effect similar to the situation with CAs on the web: a CA can always go rogue, but doing so would (1) be detectable in transparency logs, and (2) would get them immediately evicted from trust roots. If we observed rogue activity from GitHub's IdP in terms of identity issuance, the response would be similar.

[1]: https://github.com/openpubkey/openpubkey


Okay. I see the lack of benefit now but regardless I'll go ahead and respond to clear up some points of misunderstanding (and because the topic is worthwhile I think).

> From what authority?

The registry. Same as the API token right now.

> The technical blocker is that there's no obvious way to create a user-originated key

I'm not entirely clear on your meaning of "user originated" there. Essentially I was thinking of something with equivalent security: the pipeline generates an ephemeral key and signs { key digest, package name, artifact digest }, the registry auth server signs the digest of that signature (this is what replaces the API token), and the registry bulk data server publishes this alongside the package artifact.
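
In code, the pipeline-side half of that would be something like this (again: my hypothetical scheme, not anything PyPI implements):

    # Sketch of the hypothetical scheme above, NOT how PyPI works:
    # the pipeline makes an ephemeral key and signs
    # {key digest, package name, artifact digest}.
    import hashlib
    import json

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    key = Ed25519PrivateKey.generate()
    pub = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    artifact = open("dist/my_package-1.0-py3-none-any.whl", "rb").read()
    payload = json.dumps({
        "key_digest": hashlib.sha256(pub).hexdigest(),
        "package": "my-package",
        "artifact_digest": hashlib.sha256(artifact).hexdigest(),
    }, sort_keys=True).encode()

    signature = key.sign(payload)
    # The registry auth server would then sign a digest of `signature`
    # (replacing the API token) and publish both next to the artifact.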

But now I'm realizing that the only scenario where this offers additional benefit is in the event that the bulk data server for the registry is compromised but the auth server is not. I do think there's some value in that but the much simpler alternative is for the registry to tie all artifacts back to a single global key. So I guess the benefit is quite minimal. With both schemes downstream assumes that the registry auth server hasn't been compromised. So that's not great (but we already knew that).

That said, you mention IdP transparency logging. Could you not add an arbitrary residue into the log entry? An auth server compromise would still be game over but at least that way any rogue package artifacts would conspicuously be missing a matching entry in the transparency log. But that would require the IdP service to do its own transparency logging as well ... yeah this is quickly adding complexity for only very minimal gain.

Anyway. Hypothetical architectures aside, thanks for taking the time for the detailed explanations. Though it wasn't initially clear to me, the rather minimal benefit is more than enough to explain why this general direction wasn't pursued.

If anything I'm left feeling like maybe the ecosystems should all just switch directly to attested publishing.


Thanks for replying.

I'm certainly not meaning to imply that you are in on some conspiracy or anything - you were already in here clarifying things and setting the record straight in a helpful way. I think you are not representative of industry here (in a good way).

Evangelists are certainly latching on to the ambiguity and using it as an opportunity. Try to pretend you are a caveman dev or pointy-hair and read the first screenful of this. What did you learn?

https://github.blog/changelog/2025-07-31-npm-trusted-publish...

https://learn.microsoft.com/en-us/nuget/nuget-org/trusted-pu...

https://www.techradar.com/pro/security/github-is-finally-tig...

These were the top three results I got when I searched online for "github trusted publishing" (without quotes like a normal person would).

Stepping back, could it be that some stakeholders have a different agenda than you do and are actually quite happy about confusion?

I have sympathy for the fact that naming things is hard. This is Trusted Computing on repeat, but marketed to a generation of laymen that don't have that context. Also similar vibes to the centralization of OpenID/OAuth from the last round.

On that note, looking at past efforts, I think the only way this works out is if it's open for self-managed providers from the start, not by selective global allowlisting of blessed platform partners one by one on the platform side. Just like for email, it should be sufficient with a domain name and following the protocol.


I would probably not build an actual app with HTMX but I found it to be excellent for just making a completely static page feel more dynamic. I'm using it on my two blogs and it makes the whole experience feel much snappier and allows me to carry through an animation from page to page.

The amount of custom stuff I needed to add was minimal (just mostly ensuring that if network is gone, it falls back to native navigation to error out).

Examples: https://lucumr.pocoo.org/ and https://dark.ronacher.eu/

I also found Claude to be excellent at understanding HTMX so that definitely helps.


I moved to an AI-maintained custom site generator and it’s ideal for my uses. I have full control over everything and nothing breaks on me.


I'm not sure if there should be a /s there. AI to me seems to be the antithesis of stable.


The AI only changes things when I want it to, and at my command. It's very stable.


You could also upgrade a static generator when you want to and equally achieve stability.


That's somewhat untrue. Personal software only moves to your constraints. Shared software moves to others' as well. I use Mediawiki for my site (I would like others to be able to edit it) and version changes introduce changes in more than the sections I care about.


They tend to change, and when I want to do something that the generator does not do, I either need to hack it in (which might break) or I need to fork the generator.


An AI-maintained tool is a different thing than using AI to generate the site.


When you say "AI maintained", what do you mean?


The widespread deployment of NAT and VPNs has counteracted the market forces that were assumed to make IPv6 appealing.


> The widespread deployment of NAT and VPNs has counteracted the market forces that were assumed to make IPv6 appealing.

Tell that to everyone who is behind CG-NAT and has issues with (e.g.) video games. Or all the (small(er)) ISPs that have to lay out CapEx for translation boxes.


Honestly, the games issue might be out of date. Game devs have access to great services to punch through NAT at this point.

Tech finds a way…


Which has led to every game needing a central server running, forcing centralization where P2P used to work great. That's also how Skype was able to scale on a budget; that path is now blocked, forcing you to raise money for ideas that previously didn't need it. Running a Matrix(?) node should be as simple as clicking install and it's just there; next time you're with your friends, you NFC tap or whatever and your servers talk to each other directly forever going forward. But nope, there always is a gatekeeper now, and they need money, and that poisons everything.


I don’t think VoIP was a major factor in game centralization. The big one was selling cosmetics (easily unlockable server-side in community servers), and to some extent being able to police voice chat more. Major game publishers didn’t want to be in the news about the game with the most slurs or child grooming or what not.


Central servers are useful for more than just NAT hole-punching. They’re also great as a centralized database of records and statistics as well as a host for anti-cheating services and community standards enforcement.

Peer to Peer games with no central authority would be so rife with cheating that you’d only ever want to play with friends, not strangers. That sucks!


> Peer to Peer games with no central authority would be so rife with cheating that you’d only ever want to play with friends, not strangers. That sucks!

Back in the day RtCW had a server anyone could run and you could give out the address:

* https://en.wikipedia.org/wiki/Return_to_Castle_Wolfenstein

There was a server that an ISP / cable company in the southern US ran that I participated in, and it was a great community with many regulars.

P2P can be awesome with the right peers.


If you can run your own server then that's still a central server. That still lets a community of people work with a central authority. It's just a different authority from the game's publisher.


In that sense, Mastodon is a centralized service because it's on someone's computer. That's not really what people mean by central. They mean we're increasingly reliant on game companies for networking infrastructure.

Is that all IPv4's fault? I don't think so. But it complicates things.


I think you're muddling things up more than they need to be. A peer-to-peer game is one in which players connect directly to each other but neither is the host and there is no dedicated server. Game state is maintained separately on each player's computer and kept in sync by the netcode. Since there is no single source of truth for the game-state, players are free to cheat by modifying the game's code to lie on their behalf. There is also the side issue of bugs in the game code causing the game-states to become irreparably desynchronized.

All of these issues are solved by having a central server for both players to connect to. Whether that server is owned by the game's publisher or by an open-source community is irrelevant from a technology standpoint. However, the prevalence of IPv4 networks and stateful NAT firewalls is relevant because it privileges those central servers over true peer-to-peer connections.


I don't disagree with you, I just read your comment as deriding people who think hosting their own game servers is meaningful, because it's similar to a company server. Sounds like you didn't mean it that way.


Most people can't run their own server, because they aren't on a public IP!


Cool. You decided you don't care about that, but what if I do?


Don't put words into my mouth! I never said I didn't care about peer to peer networking and peer to peer gaming. I said it sucks if your only option to avoid cheating is to play with friends.

If you only care about gaming with friends, then peer to peer is an excellent way to do that (assuming the game doesn't have any synchronization issues, which some peer to peer games do).


So we acknowledge v4 and CG-NAT are a problem but don't want to use the already available solution because game developers took it upon themselves to DEFEAT NAT :)

That just reminded me of a peer protocol I worked on a long time ago that used other hosts to try to figure out which hosts were getting translated. Kind of like a reverse TOR. If that was detected, the better peering hosts would send them each other's local and public addresses so they could start sending UDP packets to each other, because the NAT devices wouldn't expect the TCP handshake first, and so while the first few rounds didn't make it through, it caused the NAT device(s) to create the table entries for themselves.
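
Stripped down, the punching step looked something like this (addresses are placeholders and the rendezvous exchange is omitted):

    # Minimal UDP hole punch sketch. Assumes both peers already learned
    # each other's public and local (ip, port) pairs via a rendezvous peer.
    import socket
    import time

    PEER = [("203.0.113.7", 40000), ("192.168.1.23", 40000)]  # public, local

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 40000))
    sock.settimeout(0.5)

    # Early packets die at the peer's NAT, but they create outbound
    # mappings in OUR NAT, so the peer's packets can start landing.
    for _ in range(20):
        for addr in PEER:
            sock.sendto(b"punch", addr)
        try:
            data, addr = sock.recvfrom(64)
            print("path open via", addr)
            break
        except socket.timeout:
            time.sleep(0.5)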

Was it Hamachi that was the old IPX-over-IP tunneling? I'm fairly sure it used similar tricks. IPX-over-IP is also done on DOSBOX, which incidentally made it possible to play Master of Orion 2 with friends in other continents.


> That just reminded me of a peer protocol I worked on a long time ago that used other hosts to try to figure out which hosts were getting translated. Kind of like a reverse TOR. If that was detected, the better peering hosts would send them each other's local and public addresses so they could start sending UDP packets to each other,

Sounds similar to STUN, really.


If that's the VOIP thing, yes, lots of people came to similar methods. That particular thing was for exchanging state, not VOIP or tunneling, so as long as participant groups overlapped it didn't really need a fixed server to be the middle which was handy for our purposes, although long network interruptions could make reconvergence take a while.

Does make me chuckle that so many people had to be working around NAT for so long and then people are like "NAT is way better than the thing that makes us not have to deal with the problem at all." Just had a bit of NAT PTSD remembering an unrelated but livid argument between some network teams about how a tool defeating their NAT policies was malware. They had overlapping 10.x.y.z blocks, because of course they did :)


I can spin up a NAT puncher today without having to depend on anybody. Can't say the same for IPv6.


NAT hole punching works... most of the time. There are many edge cases and weird/broken networks which you just can't work around in standard ways. You get to see all kinds of broken setups if you work at VoIP providers. That's why everyone will use a central proxy server as the last resort - you'll mostly notice it only because of a higher ping.


Isn't CGNAT due to IPv6 use on mobiles? You could even say that's an IPv6 problem that didn't get solved in the IPv6 engineering.


IPv6 is used on mobile networks since there aren't enough IPv4 addresses. Some of these mobile networks are so big there aren't even enough private IPv4 addresses for their CG-NAT private side to fit, leaving NAT64/DNS64 as the only clean solution.


Why would CGNAT be deployed as a response to IPv6 on mobile? I don't understand the logic there. CGNAT is deployed due to a shortage of publicly routable IPv4 addresses. IPv6 was introduced due to having much larger publicly routable space.


Because the internet as a whole is IPv4. The mobiles are IPv6. The IPv4 internet does not care about any server running on any mobile device.

Thus, CG-NAT was invented so that IPv6 could talk to IPv4 and get information from it.


No, CGNAT (Carrier-Grade NAT - https://en.wikipedia.org/wiki/Carrier-grade_NAT) is an IPv4-only thing. https://www.rfc-editor.org/rfc/rfc6598 specifies they should use 100.64.0.0/10 for it, to avoid conflicting with the pre-existing private-use ranges. IPv6 removes the need for using CGNAT, as each home router is allocated a public IP (rather than a CGNAT IP) on its public link.
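
An easy way to check which side of this you're on: compare the WAN address your router was handed against that RFC 6598 range, e.g. with Python's stdlib:

    # Quick CGNAT check: is the ISP-assigned WAN address inside the
    # RFC 6598 shared space?
    import ipaddress

    wan_ip = ipaddress.ip_address("100.72.13.5")  # your router's WAN address
    print(wan_ip in ipaddress.ip_network("100.64.0.0/10"))  # True -> CGNAT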


Oh, so CGNAT exists for IPv4 addresses to talk to IPv6 servers? Is that what you are telling me?

Because all of the WWW is on IPv6, and CGNAT actually exists so that IPv4 cable users can reach the bedrock internet servers and services?

Bullshit. CGNAT is a hack for IPv6 to talk to the IPv4 universe.

Because if there were magically enough IPv4 addresses for mobiles, would CGNAT exist? No, it wouldn't.


No, CGNAT has absolutely nothing to do with IPv6. CGNAT is nothing more than ISPs not providing a public IP to the gateway on your LAN (i.e. your router). To avoid conflicts with existing ranges, a new range for that purpose was allocated. There are different technologies to enable IPv4<->IPv6, none of which care about the existence of CGNAT.


No, NAT64 was invented so v6-only hosts could access v4-only resources. CGNAT was invented so v4 hosts can have a v4 address without having to purchase limited public address space.


IPv4 addresses are still expensive. NAT is a value add for a lot of cloud platforms.

IPv6 has arguably done more to counteract market forces related to IPv4 address exhaustion.


It's my dream that one day I'll be able to run an AWS VPC that only has IPv6 for the private subnets and then I'll never have to worry about managing the address space or how many IP addresses each ALB consumes.


Thanks for adding pi to it though :)


> The (only?) year of MCP

I like to believe, but MCP is quickly turning into an enterprise thing so I think it will stick around for good.


MCP isn't going anywhere. Some developers can't seem to see past their terminal or dev environment when it comes to MCP. Skills, etc. do not replace MCP, and MCP is far more than just documentation searching.

MCP is a great way for an LLM to connect to an external system in a standardized way and immediately understand what tools it has available, when and how to use them, what their inputs and outputs are, etc.

For example, we built a custom MCP server for our CRM. Now our voice and chat agents that run on elevenlabs infrastructure can connect to our system with one endpoint, understand what actions they can take, and what information they need to collect from the user to perform those actions.

I guess this could maybe be done with webhooks or an API spec with a well crafted prompt? Or if eleven labs provided an executable environment with tool calling? But at some point you're just reinventing a lot of the functionality you get for free from MCP, and all major LLMs seem to know how to use MCP already.
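
To give a flavor, a tool definition ends up being little more than this (a simplified sketch with made-up names, not our actual server):

    # Simplified sketch of a CRM action exposed over MCP. The docstring
    # and type hints become the schema the LLM reads to learn what the
    # tool does and what it must collect from the user first.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("crm")

    @mcp.tool()
    def create_followup(customer_email: str, note: str, due_date: str) -> str:
        """Schedule a follow-up task for a customer.

        Collect the customer's email, a note, and a due date (YYYY-MM-DD)
        from the user before calling this.
        """
        # ... write to the CRM backend here ...
        return f"Follow-up for {customer_email} scheduled for {due_date}."

    if __name__ == "__main__":
        mcp.run()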


Yeah, I don't think I was particularly clear in that section.

I don't think MCP is going to go away, but I do think it's unlikely to ever achieve the level of excitement it had in early 2025 again.

If you're not building inside a code execution environment it's a very good option for plugging tools into LLMs, especially across different systems that support the same standard.

But code execution environments are so much more powerful and flexible!

I expect that once we come up with a robust, inexpensive way to run a little Bash environment - I'm still hoping WebAssembly gets us there - there will be much less reason to use MCP even outside of coding agent setups.


I disagree. MCP will remain the best way to do most things for the same reason REST APIs are the main way to access non-local services: they provide a way to secure and audit access to systems in a way that a coding environment cannot. And you can authorize actions depending on the well-defined inputs and outputs. You can’t do that using just a bash script unless said script actually does SSO and calls REST APIs, but then you just have a worse MCP client without any interoperability.


I find it very hard to pick winners and losers in this environment where everything changes so quickly. Right now a lot of people are using bash as a glue environment for agents, even if those agents are not for developers.


I think it will stick around, but I don't think it will have another year where it's the hot thing it was back in January through May.


I never quite got what was so "hot" about it. There seems to be an entire parallel ecosystem of corporates that are just begging to turn AI into PowerPoint slides so that they can mould it into a shape that's familiar.


One reason may be that it makes it a lot easier to open up a product to AI. Instead of adding a bad ChatGPT UI clone into your app, you invert control and let external AI tools interact with your application and its data, thus giving your customers immediate benefits, while simultaneously sating your investors'/founders'/managers' desire to somehow add AI.


MCP or skills? Can a skill negate the need for MCP? In addition, there was a YC startup looking at doc search for LLMs or similar. I think MCP may be less needed once you have skills, OpenAPI specs, and other things that LLMs can call directly.


For connecting agents to third-party systems I prefer CLI tools, less context bloat and faster. You can define the CLI usage in your agent instructions. If the MCP you're using doesn't exist as a CLI, build one with your agent.
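
The pattern is just a thin wrapper whose --help output doubles as the agent-facing documentation, e.g. (hypothetical endpoint):

    #!/usr/bin/env python3
    # Hypothetical thin CLI an agent can shell out to instead of an MCP
    # server; stdout is the whole interface.
    import argparse
    import json
    import urllib.request

    def main() -> None:
        p = argparse.ArgumentParser(description="Look up an order by id.")
        p.add_argument("order_id")
        p.add_argument("--base-url", default="https://api.example.com")  # placeholder
        args = p.parse_args()
        with urllib.request.urlopen(f"{args.base_url}/orders/{args.order_id}") as r:
            print(json.dumps(json.load(r), indent=2))

    if __name__ == "__main__":
        main()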


Totally agree - wrote this over the holidays, which sums it all up pretty well: https://martinalderson.com/posts/why-im-building-my-own-clis...


Polish, and the fact that uv gets you entire Python interpreters automatically without having to compile or manually install them.

That, in retrospect, was what made Rye temporarily attractive and popular.


Presumably the preference of their users. From what I know, other than for Cursor, the GUI interfaces are less popular than the TUI ones. Personally, I also did not expect that I would really like the TUI experience, but it's hard for me to switch away from it now because it has become so central to my workflow.


It's easier to ship a TUI app cross-platform, the constraints around UI and state are often simpler, and some good libraries/frameworks (e.g. [1][2]) exist to make a modern-looking UX.

[1]: https://github.com/charmbracelet/bubbletea

[2]: https://github.com/Textualize/textual


Interestingly, Index::index is usually not marked as `#[must_use]` in Rust either.


I don't believe you can mark trait methods with #[must_use] - it has to be on the implementation. Not near a compiler to check at the moment.

In the case of e.g. Vec, it returns a reference, which by itself is side-effect free, so the compiler will always optimize the unused call away. I do agree that it should still be marked as such, though. I'd be curious about the reasons why it's not.


This is just my take, but I think historically the Rust team was hesitant to over-mark things #[must_use] because they didn't want to introduce warning fatigue.

I think there's a reasonable position to take that it was/is too conservative, and also one that it's fine.


But it's also not marked at the implementation for HashMap's Index impl for instance.


This didn't seem like a footgun to me. hats["Jim"]; will panic if "Jim" isn't in fact one of the keys, but what did the hypothetical author expect to happen when they wrote this? HashMap doesn't implement IndexMut, so hats["Jim"] = 26; won't even compile.


Unsure if I misunderstand:

Index returns a reference:

https://doc.rust-lang.org/std/ops/trait.Index.html#:~:text=s...

If you don't use the reference it just ... disappears.

Am I missing something here?


Technically, without any optimizations this would result in a stray LEA op or something, but any optimizing compiler (certainly any that supports Rust) would optimize it out even at low debug settings.

