We’ve just launched an early prototype of MoltID — a cryptographic identity and verification service designed for autonomous agents (bots).
Current agent platforms have no standardized way to verify whether an agent is a persistent, non-disposable service. That makes Sybil-style abuse and fake registrations trivial.
MoltID introduces:
• persistent cryptographic agent identity
• challenge-response onboarding with proof-of-work
• JWT-style signed passport tokens
• trust scores platforms can verify
• lightweight REST API model
MoltID is not a human CAPTCHA — it's OAuth for bots.
Visit the early prototype & spec here: https://moltid.net
We’re looking for feedback from platform builders, API developers, and security practitioners.
Happy to answer questions about design and implementation.
— Team MoltID
A few thoughts on your approach:
*Persistent identity + rotation:* The tension between "persistent" (provable history) and "rotating" (security) is real. Persistent identity often means persistent credentials, and we know these can't be trusted to remain secure for very long. We solved this with identity credentials that rotate daily but build a verifiable chain of trust in the workload (i.e. Zero Trust for workloads). Each new identity is cryptographically linked to the previous one.
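A rough sketch of what that chaining can look like: each rotated identity record commits to the hash of the previous one, so history stays verifiable even though the credential itself changes daily. The record fields and "genesis" anchor are hypothetical, not our actual design:

```python
import hashlib
import os

def new_identity(prev_hash: str) -> dict:
    # os.urandom stands in for generating a fresh keypair each rotation
    return {"pubkey": os.urandom(32).hex(), "prev": prev_hash}

def identity_hash(ident: dict) -> str:
    return hashlib.sha256((ident["pubkey"] + ident["prev"]).encode()).hexdigest()

def verify_chain(chain: list[dict]) -> bool:
    prev = "genesis"
    for ident in chain:
        if ident["prev"] != prev:
            return False  # broken link: the rotation history is not continuous
        prev = identity_hash(ident)
    return True

# Simulate three daily rotations
chain, prev = [], "genesis"
for _ in range(3):
    ident = new_identity(prev)
    chain.append(ident)
    prev = identity_hash(ident)
assert verify_chain(chain)
```

The point is that "persistent" can mean a persistent *chain*, not a persistent key.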
Have you considered rotation schedules for MoltID agent identities?
*PoW for Sybil resistance:* Smart for initial onboarding. Curious: Do you distinguish between "initial trust establishment" (high friction, PoW) and "ongoing verification" (low friction, fast crypto proof)? Or does every verification require PoW?
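For reference, the "high friction" step is usually a hashcash-style puzzle like the sketch below: the server issues a random challenge and the agent must find a counter whose hash clears a difficulty target. The difficulty value and function names are illustrative, not from the MoltID spec:

```python
import hashlib
import os

def solve(challenge: bytes, bits: int) -> int:
    # Brute-force a counter until sha256(challenge || counter) has
    # at least `bits` leading zero bits
    target = 1 << (256 - bits)
    counter = 0
    while True:
        digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return counter
        counter += 1

def check(challenge: bytes, counter: int, bits: int) -> bool:
    # Verification is a single hash: cheap for the server, costly for the prover
    digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

challenge = os.urandom(16)
counter = solve(challenge, bits=12)  # low demo difficulty; real onboarding would be higher
assert check(challenge, counter, bits=12)
```

The asymmetry (expensive to solve, one hash to check) is exactly why PoW fits onboarding but would be wasteful on every subsequent verification.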
*JWT tokens:* We went a different route — no tokens over the wire. Both parties generate identical credentials locally (synchronized algorithms, no key exchange). This eliminates token theft as an attack vector.
*Two rotating credentials:* We use a pre-shared algorithm for the secret/key plus a third-party identity trust verification model, so gaining access between agents requires two separate credentials. Worth considering for direct agent-to-agent communication?
*Question:* How are you handling trust bootstrapping? In OAuth, there's a trusted IdP. For autonomous agents with no human, what's the root of trust?
Agreed that agent identity will be huge as agentic AI scales.
(Disclosure: We're building Lane7 for K8s app network topologies. Not a pure agentic use case, but an overlapping problem.)