"Slow" in what sense? Development? Because I self host a Conduit server and I don't ever notice messages being slow. It would be hard to notice anyway, as in a group chat people usually take some time to type in their responses.
The sync between large groups used to be slow because of the amount of data, but Element X and "sliding sync" were rolled out to help with it.
AFAIK, the public Matrix server used to be slow because of a heavy load (I think), but on my self-hosted instance that's not a problem at all.
The experience of using Matrix involves a lot of sluggishness at various points in the client: waiting to decrypt messages or properly sync keys, waiting to join a room or for room search to load. These are the things that have stood out to me using multiple Matrix clients with a freshly spun-up server within the past month.
I played around a bit more mindfully with Element (web UI) and Element X (Android), and there might be something to it; I suspect the E2E-encrypted data model will always require some extra work. Element does seem a bit sluggish. Element X on my Android, however, seems butter smooth.
And even the slower Element seems far better than Discord, which I'm forced to use and where I can't even scroll the history without the whole thing stuttering.
BTW. Pre-commit hooks are the wrong way to go about this stuff.
I'm advocating for JJ to build a proper daemon that runs "checks" per change in the background. So you don't run pre-commit checks when committing; they just happen in the background, and by the time you get to sharing your changes, everything has already been verified for each change/commit, effortlessly, without you wasting time or needing to do anything special.
I have something a bit like that implemented in SelfCI (a minimalistic local-first Unix-philosophy-abiding CI) https://app.radicle.xyz/nodes/radicle.dpc.pw/rad%3Az2tDzYbAX... and it replaced my use of pre-commit hooks entirely. And users already told me that it does feel like commit hooks done right.
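Roughly the shape of the idea in shell (this isn't SelfCI's actual mechanism, just a sketch; `run_checks_for` stands in for whatever runs and caches the checks for one change):

```sh
# Sketch only: keep noticing mutable changes and run checks for each one
# in the background, so results are ready by the time you want to push.
while true; do
  for rev in $(jj log --no-graph -r 'mutable() & ~empty()' -T 'change_id ++ "\n"'); do
    run_checks_for "$rev" &   # hypothetical helper: runs checks in a scratch workspace, caches results
  done
  wait
  sleep 10
done
```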
Just because the hooks have the label "pre-commit" doesn't mean you have to run them before committing :).
I, too, want checks per change in jj -- but (in part because I need to work with people who are still using git) I need to still be able to use the same checks even if I'm not running them at the same point in the commit cycle.
So I have an alias, `jj pre-commit`, that I run when I want to validate my commits. And another, `jj pre-commit-branch`, that runs on a well-defined set of commits relative to @. They do use `pre-commit` internally, so I'm staying compatible with git users' use of the `pre-commit` tool.
What I can't yet do is run the checks in the background or store the check status in jj's data store. I do store the tree-ish of passing checks though, so it's really quick to re-run.
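A minimal sketch of what the core of such an alias plus the caching can look like (paths and details are illustrative, not my exact setup; it assumes a colocated git repo, since `pre-commit` needs one, and a recent jj with `jj diff --name-only`):

```sh
# Run the same `pre-commit` tool git users have against the files touched
# by the working-copy commit (@), and remember what has already passed.
# Keyed by commit id here for simplicity; keying by tree, as I do, means a
# pure description change doesn't invalidate the cache.
key=$(jj log -r @ --no-graph -T 'commit_id')
if [ -e ".cache/checks/$key" ]; then
  echo "checks already passed for this commit"
else
  pre-commit run --files $(jj diff -r @ --name-only) \
    && mkdir -p .cache/checks && touch ".cache/checks/$key"
fi
```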
I prefer to configure my IDE to apply precisely the same linting and formatting rules as used for commits and in CI. Save a file, see the results, nothing changes between save, commit, stage, push, PR, merge.
> I personally can't stand my git commit command to be slow or to fail.
I feel the same way but you can have hooks run on pre-push instead of pre-commit. This way you can freely make your commits in peace and then do your cleanup once afterwards, at push time.
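With the standard git hook that can be as small as this (a minimal sketch; the body is whatever checks you would otherwise run at commit time):

```sh
#!/bin/sh
# .git/hooks/pre-push: run the checks at push time instead of commit time,
# and abort the push if they fail.
pre-commit run --all-files || {
  echo "checks failed, aborting push" >&2
  exit 1
}
```

The `pre-commit` tool can also install its hook at that stage directly with `pre-commit install --hook-type pre-push`.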
To myself: sometimes I think the background process should be committing for me automatically each time a new working set exists, and I should only rebase and squash before pushing.
That’s reversing the flow of control, but might be workable!
jj already pretty much does that with the oplog. A consistent way of making new snapshots in the background would be nice though. (Currently you have to run a jj command — any jj command — to capture the working directory.)
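In the meantime, a dumb polling loop approximates it (just a sketch; the interval is arbitrary):

```sh
# Any jj command snapshots the working copy into the oplog, so a cheap
# no-op in a loop gives you background snapshots.
while true; do
  jj status >/dev/null 2>&1
  sleep 30
done
```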
I see it as a layered system, each one slower than the last, but saving time in the long run.
* in-editor, real time linting / formatting / type checking. This handles whatever file you have open at the time.
* pre-commit, do quick checks for all affected code - linting, type checking, formatting, unit tests.
* CI server, async / slow tests. Also does all the above (because pre-commit / pre-push scripts are clientside and cannot be guaranteed to run), plus any slower checks like integration tests.
Basically "shift left", because it takes 100x as long to find and fix a typo (for example) if you find it in production compared to in your editor while writing.
I like this approach. Something related I've been tinkering with are "protected bookmarks" - you declare what bookmarks (main, etc) are protected in your config.toml and the normal `jj bookmark` commands that change the bookmark pointer will fail, unless you pass a flag. So in your local "CI" script you can do `jj bookmark set main -r@ --allow-protected` iff the tests/lints pass. Pairs well with workspaces and something that runs a local CI (like a watcher/other automated process).
I haven't yet submitted it upstream for design discussion, but I pushed up my branch[1]. You can also declare a revset that the target revision must match, for extra belts and suspenders (e.g., '~conflicts()').
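The local "CI" end of it can then be as small as this (a sketch; `--allow-protected` only exists on my branch, and the check commands are just placeholders for whatever your project runs):

```sh
# Only move the protected bookmark if the checks pass.
if cargo test && cargo clippy -- -D warnings; then
  jj bookmark set main -r @ --allow-protected
else
  echo "checks failed; main not moved" >&2
fi
```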
That's a great idea, and I was just thinking about how it would pair with self hosted CI of some type.
Basically what I would want is: write a commit (because I want to commit early and often), then run the lint (and tests) in a sandboxed environment. If they pass, great. If they fail and HEAD has moved ahead of the failing commit, create a "FIXME" branch off the failure, roughly the flow sketched below. Back on main, or whatever the branch head was pointed at, if tests start passing you probably never need to revisit the failure.
I want to know about local test failures before I push to remote with full CI.
automatic branching and workflow stuff is optional. the core idea is great.
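Very roughly something like this (all names hypothetical; `run_tests_in_sandbox` stands in for whatever sandboxed runner you use):

```sh
# Test the commit that was just made; if it fails and work has already
# moved on, park a FIXME branch at the failing commit so it isn't lost.
commit=$(git rev-parse HEAD)
if ! run_tests_in_sandbox "$commit"; then
  if [ "$(git rev-parse HEAD)" != "$commit" ]; then
    git branch "FIXME-$(git rev-parse --short "$commit")" "$commit"
  fi
fi
```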
> automatic branching and workflow stuff is optional. the core idea is great.
I'm not sure if I fully understood. But SelfCI's Merge-Queue (mq) daemon has a built-in hook system, so it's possible to do custom stuff at certain points. So you should probably be able to implement it already, or it might require a couple of minor tweaks (which should be easy to do on the SelfCI side after some discussion).
From the docs I think Limmat is much more minimal. It doesn't have a merge queue or anything, "jobs" are just commands that run in a worktree.
I would be interested to try SelfCI, because I have actually gone back and forth on whether I want that merge-queue feature in Limmat. Sometimes I think that once I want that feature, I no longer want a local tool at all; I just want a "proper CI system" that isn't a huge headache to configure.
I had been eagerly moving over to using JJ when I discovered that 'hook' behavior was not present. Pre-push hooks for formatting and linting were very helpful for me because I needed to enforce these standards on others who were more junior. It would be great for JJ to incorporate it in some way if possible. I understand the structural differences and why that makes it hard, but something about that pre-* hook just hits right.
Looks very interesting, I fully agree that running CI locally is viable.
But what I didn't pick up from a quick scan of the README is the best pattern for integrating with git. Do you expect users to manually run (a script calling) selfci, or is it hooked up to git or similar? When do the merge hooks come into play? Do you ask selfci to merge?
That looks really cool! I've been looking for a more thought-out approach to hooks on JJ, I'll dig into this. Do you have any other higher level architecture/overview documentation other than what is in that repo? It has a sense of "you should already know what this does" from the documentation as is.
> Do you have any other higher level architecture/overview documentation other than what is in that repo?
SelfCI is _very_ minimal by design. There isn't really all that much to document other than what is described in the README.
> Also, how do you like Radicle?
I enjoy that it's p2p, and it works for me in that respect. Personally I disagree with its attempt to duplicate the other features of a GitHub-like forge, instead of the original collaboration model of the Linux kernel that git was built for. I think it should try to replicate something more like SourceHut: mailing-list threads, communication that includes patches, etc. But I haven't really _collaborated_ much using Radicle yet; I just push and pull stuff from it, and it works just fine for that.
I don't get it. The article is titled "Common misunderstanding". OK. "There are too many meetings". How is this a misunderstanding? Then "...bunch of excuses confirming there are too many meetings...". Er... so it's not a misunderstanding?
The US government entered a debt spiral a while ago (https://fred.stlouisfed.org/series/A091RC1Q027SBEA), and needs lower interest rates to service its tremendous debt while trying to inflate it away by printing money. That's all it comes down to. For decades fiscal conservatives were warning about it and were laughed at. Now that we're at the end game, the inevitable shitshow will become apparent. You can hate Trump (rightly so), but as much as he contributed to the problem directly, the problem is larger and systemic, and anyone else in office would have the exact same problem now.
Exactly, interest rates must come down due to the government debt burden. This debt burden creates a strong incentive to force rates to zero, but we have to pretend the Federal Reserve is independent.
Separately, I think Jerome Powell is one of the worst Fed chairs as he is most (but not exclusively) responsible for what happened to the housing market by creating a lock-in effect and focusing on their CPI basket.
Might be obvious, but there are definitely a lot of biases in the data here. It's unavoidable. E.g. many bugs will not be detected, but they will be removed when the code is rewritten, so code that is refactored more often will have a lower age of fixed bugs. Components/subsystems that are heavily used will surface bugs faster. Some subsystems by their very nature can tolerate bugs more, while some by necessity need to be more correct (like bpf).
> We have an in-house, Rust-based proxy server. Claude is unable to contribute to it meaningfully outside
I have a great time using Claude Code in Rust projects, so I know it's not about the language exactly.
My working model is that since LLMs are basically inference/correlation based, the more you deviate from the mainstream corpus of training data, the more confused the LLM gets, because the LLM doesn't "understand" anything. But if it was trained on a lot of things kind of like the problem, it can match the patterns just fine, and it can generalize over a lot of layers, including programming languages.
Also I've noticed that it can get confused by stupid stuff. E.g. I had two different things named almost the same in two parts of the codebase, and it would constantly stumble, conflating them. Changing the name in the codebase immediately improved it.
So yeah, we've got another potentially powerful tool that requires understanding how it works under the hood to be useful. Kind of like git.
Recently the v8 Rust library changed from mutable handle scopes to pinned scopes. It's a fairly simple change that I even put in my CLAUDE.md file. But it still generates methods with HandleScopes and then says... oh, I have a different scope, and goes on a random walk refactoring completely unrelated parts of the code. All the while Opus 4.5 burns through tokens. Things work great as long as you are testing on the training set. But that said, it is absolutely brilliant with React and TypeScript.
Well, it's not like it never happened to me to "burn tokens" on some lifetime issue. :D But yeah, if you're working in Rust on something with sharp edges, the LLM will get hurt. I just don't tend to have those in my projects.
An even more basic failure mode: I told it to copy a bit (1k LOC) of blocking code into a new module and convert it to async. It just couldn't do a proper 1:1 logical _copy_. But when I manually `cp <src> <dst>` the file and then told it to convert that to async and fix issues, it did it 100% correctly. Because fundamentally it's just a non-deterministic pattern generator.
hot take (that shouldn't be?): if your code is super easy to follow as a human, it will be super easy to follow for an LLM. (hint: guess where the training data is coming from!)
It's not about memory/CPU/IO, but latency vs throughput. Most software is slow because it ignores latency. If you program serially, waiting for _whatever_, it is going to be slow. If you scatter your data around memory, or read from disk in small chunks, or make tons of tiny queries to the DB serially, your software will spend 99.9% of its time waiting idle for something to finish. That's it. If you can organize your data linearly in memory and/or work on batches of it at a time and/or parallelize stuff and/or batch your IO, it is going to be fast.
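A toy shell-level illustration of the same point (example.com is a placeholder): issuing 100 requests one at a time costs roughly 100 round trips, while keeping 20 in flight costs roughly 5.

```sh
# Serial: total time ~ 100 x round-trip latency.
seq 100 | xargs -I{} curl -s "https://example.com/item/{}" > /dev/null

# Same work with ~20 requests in flight: total time ~ 5 x round-trip latency.
seq 100 | xargs -P 20 -I{} curl -s "https://example.com/item/{}" > /dev/null
```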