Unless you have an “every commit must build” rule, why would you review commits independently? The entire PR is the change set - what’s problematic about reviewing it as such?
There's a certain set of changes which are just easier to review as stacked independent commits.
Like, you can have one commit that introduces a new API and another that updates all usages.
It's just easier to review those independently.
Or, you may have workflows with different versions of schemas where you always keep the old ones. Then you can do two commits (copy X to X+1; update X+1) where each change is obvious, rather than a single diff that is just one huge new file.
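A minimal sketch of that two-commit workflow in a throwaway repo (assuming git is on PATH; file names and schema contents are made up):

```python
# Sketch of the "copy X to X+1, then edit X+1" workflow; everything here
# (paths, schema contents, commit messages) is illustrative.
import pathlib
import subprocess
import tempfile

repo = pathlib.Path(tempfile.mkdtemp())

def git(*args):
    subprocess.run(["git", "-C", str(repo), *args], check=True,
                   stdout=subprocess.DEVNULL)

git("init", "-q")
git("config", "user.email", "dev@example.com")
git("config", "user.name", "dev")

(repo / "schema_v1.json").write_text('{"version": 1, "fields": ["id", "name"]}\n')
git("add", ".")
git("commit", "-q", "-m", "add schema v1")

# Commit 1: a pure copy -- the diff is trivially reviewable.
(repo / "schema_v2.json").write_text((repo / "schema_v1.json").read_text())
git("add", ".")
git("commit", "-q", "-m", "copy schema v1 to v2")

# Commit 2: the actual change -- the diff shows only the new field.
(repo / "schema_v2.json").write_text('{"version": 2, "fields": ["id", "name", "email"]}\n')
git("add", ".")
git("commit", "-q", "-m", "schema v2: add email field")

log = subprocess.run(["git", "-C", str(repo), "log", "--oneline"],
                     capture_output=True, text=True).stdout
```

Reviewed commit-by-commit, the copy commit can be skimmed and the edit commit shows only the real delta; reviewed as one squashed diff, all you'd see is one giant new file.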
I'm sure there are more cases. It's not super common, but it is convenient.
Squash merge is an artifact of PRs encouraging you to add commits instead of amending them, because GitHub can't show proper interdiffs and comments disappear when the diff changes at that line. In that context, when you add fixup commits, sure, squashing makes sense. But the stacked-diffs approach encourages you to create commits that already look the way you want them to, instead of requiring you to roll them up at the end.
> Unless you have an “every commit must build” rule, why would you review commits independently?
Security. Imagine commit #1 introduces the feature along with a security vulnerability (a backdoor). Then #2 introduces a non-obvious, harmless bug and closes the vulnerability introduced in #1 [0]. At some point, the bug will surface and rolling back commit #2 will be an easy fix, re-opening the backdoor.
Alternatively, one of the earlier commits might, for example, contain credential-dumping code. Once that commit is mainlined, CI might either run on it automatically or become runnable on it, since it's no longer marked as an unsafe PR.
[0] Think something like: #1 introduces array access and #2 adds a bounds check in a function a layer above - a reviewer with the whole context will see the bounds check and (possibly) consider it fine, but to someone rolling back a commit the necessity of the check will not be obvious.
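A hedged Python sketch of footnote [0] (all names invented; in Python the unchecked access merely raises, but in a lower-level language it would be an out-of-bounds read):

```python
# Hypothetical two-commit split; function names are illustrative only.

# Commit #1: ships the feature with an unchecked access (the planted bug).
def read_entry(buf, idx):
    # No bounds check at this layer -- in C this would read out of bounds.
    return buf[idx]

# Commit #2: an unrelated-looking change that happens to add the only guard,
# one layer above. Reverting commit #2 silently removes this check.
def handle_request(buf, idx):
    if not 0 <= idx < len(buf):
        raise ValueError("index out of range")
    return read_entry(buf, idx)
```

Seen as one PR, the guard in `handle_request` makes `read_entry` look safe; seen commit-by-commit, rolling back #2 quietly re-exposes the unchecked access, and nothing in the revert hints at that.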
Skills are part of the repo, and CLIs are installed locally. In both cases it's up to you to keep them updated. MCP servers can be exposed and consumed over HTTPS, which means the MCP server owner can keep them updated for you.
Better sandboxing. Accessing an MCP server doesn't require you to give an agent permissions on your local machine.
MCP servers can expose tools, resources, and prompts. If you're using a skill, you can "install" it from a remote source by exposing it on the MCP server as a "prompt". That helps solve the "keep it updated" problem for skills - the skill is refreshed every time the agent interrogates the MCP server.
Or if your agentic workflow needs some data file to run, you can tell the agent to grab that from the MCP server as a resource. And since it's not a static file, the content can update dynamically -- you could read stock prices, the latest state of a JIRA ticket, and so on. It's like an AI-first, dynamic content filesystem.
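To make the "server-hosted, always current" idea concrete, here is a stdlib-only stand-in - deliberately NOT the real MCP wire protocol, just plain HTTP with made-up paths - showing why a prompt and a resource served remotely stay fresh without the client reinstalling anything:

```python
# Illustrative stand-in for an MCP server: the "prompt" and "resource"
# endpoints are computed per request, so the owner can update them centrally.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical skill prompt; the server owner can change this at any time.
SKILL_PROMPT = "You are a release-notes assistant. Summarize diffs as bullet points."

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/prompts/release-notes":
            body = SKILL_PROMPT.encode()
        elif self.path == "/resources/ticket":
            # Dynamic resource: regenerated on every fetch (think live JIRA state).
            body = json.dumps({"ticket": "ABC-123", "status": "in review"}).encode()
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A client re-fetching gets whatever the server currently serves.
prompt = urllib.request.urlopen(f"http://127.0.0.1:{port}/prompts/release-notes").read().decode()
ticket = json.loads(urllib.request.urlopen(f"http://127.0.0.1:{port}/resources/ticket").read())
server.shutdown()
```

A real MCP server would speak JSON-RPC and advertise prompts/resources through the protocol's listing methods, but the update model is the same: the client asks again, the server answers with the current version.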
Oh man, Turbo Pascal was my first "real" programming language -- it was all various flavors of BASIC before, and mostly toy projects. The developer experience with Turbo Pascal (by which I guess I mostly mean Turbo Vision) was honestly pretty great.
The use of XML as a data serialization format was always a bad choice. It was designed as a document _markup_ language (it’s in the name), which is exactly the way it’s being used for Claude, and is actually a good use case.
And do what? Leave the ducting, pipes, and electrical lines exposed for the one time in 20 years you need to do something with them?
In addition to being much more attractive than exposed infrastructure, drywall and the insulation that gets put behind it help make your house much more energy efficient.
As you might expect from the description -- largely passed on via contaminated water -- the guinea worm is mostly present in areas of extreme poverty. Even if such a treatment were feasible, it would be inaccessible to most of the relevant population.
It’s probably even more pronounced, since it’s unlikely that someone is going to _average_ 180bpm for their entire workout, especially as they get older.
And that level of workout will probably produce a resting heart rate even lower than the 52 I cited. Top endurance athletes in distance running, cycling, and nordic skiing might spend 10+ hours/week at threshold or some other training zone - so double those extra 'exercise beats' - yet they often have resting heart rates in the low 40s/minute, which would yield an even greater lifespan if lifespan were measured in heartbeats.
The auto-attach flag isn’t a huge deal, since it’s a one-liner that can be statically documented and the fix works in all cases. The bigger issue is the JDK / runtime team’s stance that libraries should not be able to dynamically attach agents, and that the auto-attach flag might be removed in the future. You can still enable mockito’s agent with the -javaagent flag, but you have to provide the path to the mockito jar, and getting that path right is highly build-system-dependent and not something the mockito team can document in a way that minimizes the entry threshold for newcomers.