What are the advantages of using this over an LXD system container, or LXD VMs if we want VM isolation? Is it the developer experience, or is there some agent-specific capability that's the key thing here?
The main thing matchlock adds over general-purpose VM/container tooling is agent-specific network and filesystem (WIP) controls, so if an agent goes rogue it can't exfiltrate your API keys and the damage is largely mitigated. You'd have to build all of that yourself on top of LXD (and probably end up with something similar to matchlock).
There's also the DX side: OCI image support, a highly programmable interface, and FUSE for workspace sharing. It runs on both Linux and macOS with a unified interface, so you get the same (or a very similar) experience locally on a Mac as you do on a Linux workstation.
Mostly it's built for the "running `claude --dangerously-skip-permissions` safely" use case rather than being a general hypervisor.
1. Containers aren't a security boundary. Yes, they can be used as one, but there is too much overhead (privileged vs. unprivileged, figuring out granular capabilities, mount permissions, SELinux/AppArmor/seccomp, gVisor) and the whole thing is just too brittle.
2. LXD VMs are QEMU-based and very heavy. Great when you need full desktop virtualization, but not for this use case. They also don't work on macOS.
Using Apple's Virtualization framework (which natively supports lightweight VMs) on macOS and a more barebones virtualization stack like Firecracker on Linux is really the sweet spot: you get boot times in milliseconds and the full security of a VM.
In Proof-of-Work, the cost of the work is what keeps the network honest. If the work has value, then an attacker is free to invest as many resources as they want into subverting the network. Even a failed attack can still be profitable, just less so.
In the other scenario, where the work's value is less than its cost, you're still hoping that at no point in the future will an attacker figure out a way to do the work at a net profit.
The only way the network can be trusted is if the work definitively has, now and always, zero value.
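The incentive argument above can be illustrated with toy numbers (all figures are hypothetical, chosen only to show the shape of the argument):

```python
# Toy illustration of the PoW incentive argument (all numbers hypothetical).
# In plain PoW the work has no outside value, so a failed attack is a pure loss.
# If the work can be resold for outside value, that resale offsets the attack
# cost, and attempting attacks becomes cheap or even profitable.

def attack_profit(attack_cost, attack_payoff, success_prob, work_resale_value):
    """Expected profit of an attacker who spends `attack_cost` on the work."""
    return success_prob * attack_payoff + work_resale_value - attack_cost

# Useless work: the only upside is the attack succeeding, so in expectation
# the attacker loses money.
print(attack_profit(attack_cost=100, attack_payoff=150,
                    success_prob=0.2, work_resale_value=0))    # -70.0

# "Useful" work that can be resold at near cost: even a mostly-failed attack
# comes out ahead in expectation.
print(attack_profit(attack_cost=100, attack_payoff=150,
                    success_prob=0.2, work_resale_value=90))   # 20.0
```

The point of the sketch: the honest-behavior guarantee depends on `work_resale_value` being zero, which is exactly the "the work must have no other value" claim above.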
Not littering has value. However, if I don't litter, it doesn't benefit me, and I cannot profit off of it; no matter how eco-friendly I am, I get no value from it.
Because Proof-of-Work only generates value for an arbitrary, made-up coin if the work has no other real value. Otherwise you're making money from the work itself, and the value of the coin becomes tied to the work you did.
until recently gold was a pretty but mostly useless metal. too heavy for practical uses, too melty for industrial uses, too soft for weapons, etc. but it didn't rust and was a good medium of exchange because it had no other real value. once it has value outside of being currency it's less useful in that capacity, since now its value is tied to how much you can get for it by utilizing it in computers, chemical reactions, etc.... same basic idea with PoW
It's worth noting that lots of projects claim to be "Proofs of Useful Work" without the academic rigor to actually prove so. The attacker, of course, is one of those who have failed to do so.
1. Their paper has not been accepted by any conference or journal.
2. Neither author on the paper is an academic (or practicing engineer or researcher) in computer science, economics, game theory, or cryptography (or any math in general). One is a C-level exec with what seems to be minimal CS experience, and the other is a psychology professor. Neither appears to have the qualifications for us to assume some level of rigor (before even looking at the underlying work).
3. The paper is a bunch of text and buzzwords about AI and AGI intermixed with some academic history and some discussion of psychology. Of the paper's 47 pages, only about 1-2 are semi-technical in nature, with an additional ~3 pages of code included to show their algorithm. There are two graphs relevant to the protocol on those 1-2 pages, and neither addresses any security aspect; instead they show the algorithm's performance at doing the "useful" part. So, to reiterate: their "academic paper" on the security of their PoUW algorithm includes no rigorous analysis of the protocol.
TLDR: They aren't doing PoUW. They are doing cooperative compute with a centralised or federated coordinator dishing out rewards.
Proofs of Useful Work do actually exist and are an interesting field, but they take a lot of rigor and analysis to be accepted and not immediately ripped to shreds. What the attacker claims doesn't come close to meeting that bar.
Yes, any API key is allowed. You can also assign different LLMs to different modes (architect, code, ask, debug, etc.), which is great for cost optimization.
I don't think so. Embedding is just converting a token string into a numeric representation (a vector); the numeric representations of semantically similar strings are close to each other geometrically.
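A toy sketch of that idea (the three-dimensional vectors below are invented for illustration; real embedding models learn vectors with hundreds or thousands of dimensions):

```python
import math

# Hand-made toy "embeddings". A real model learns these vectors; these are
# made up purely to show that similar meanings map to nearby points.
embeddings = {
    "dog":   [0.90, 0.80, 0.10],
    "puppy": [0.85, 0.75, 0.15],
    "car":   [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "dog" and "puppy" point in nearly the same direction; "car" does not.
print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))  # close to 1
print(cosine_similarity(embeddings["dog"], embeddings["car"]))    # much lower
```

This geometric closeness is what lets you search by meaning: nearest-neighbor lookups over embedding vectors find semantically related text.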
RAG turns the AI into a guy who has read a lot of books (no training involved). He doesn't have all of them in the context of the conversation you're having with him, but he roughly remembers where he read about the thing you're talking about, and he has a library behind him he can reach into and quote verbatim, thus introducing that passage into the context of your conversation.