Thank you for the thoughts. Regarding "made up words": what if the concepts reference things that didn't exist before their definition in these specifications?
ie: At one point "hyperlink" and "Uniform Resource Locator" were made up words. No different than, say, "web log" (blog) and "Client-Server Architecture", which were being made up as well.
I contend that both "Artipoint" and "Cortex Layer" have similar parallels, in the sense that they refer to concepts newly formulated, or at least captured into a formalism with a label for the first time!
"Cortex Layer" is not merely made up; rather, it is an anthropomorphization you want to become one of these words. Stop anthropomorphizing; it turns rational people off.
Putting "Cognition" in the acronym is another example. These things are not cognizing; that's not how they work. They don't think; they have been gradient-descended into emitting special <think> runs in their token generation.
ASCP is a draft specification suite for a protocol to support persistent, structured context shared between humans and AI agents. It defines an immutable articulation grammar, a secure distribution layer, and a local-first append-only log sync model.
The work is still in draft form, but substantial: multiple detailed specifications are included, and we’re actively iterating across the grammar, security model, and synchronization layer. It’s not “finished,” but it’s well beyond half-baked.
We welcome contributions and feedback from collaboration-tool builders, protocol designers, distributed-systems engineers, and security/cryptographic reviewers who are interested in advancing an open, vendor-neutral standard for durable shared context.
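For readers unfamiliar with the pattern, a local-first append-only log can be sketched in a few lines. This is only an illustration of the general idea, not ASCP's actual data model: the `LogEntry`/`AppendOnlyLog` names, the SHA-256 hash chain, and the fast-forward-only sync rule below are my assumptions for the sketch, not taken from the spec.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class LogEntry:
    """An immutable log entry, hash-chained to its predecessor."""
    seq: int
    prev_hash: str
    payload: str

    @property
    def entry_hash(self) -> str:
        data = json.dumps([self.seq, self.prev_hash, self.payload]).encode()
        return hashlib.sha256(data).hexdigest()


class AppendOnlyLog:
    """A local-first append-only log: entries are only ever added, never
    modified, and each entry commits to the entire history before it."""

    def __init__(self) -> None:
        self.entries: List[LogEntry] = []

    def append(self, payload: str) -> LogEntry:
        prev = self.entries[-1].entry_hash if self.entries else "genesis"
        entry = LogEntry(len(self.entries), prev, payload)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Check the hash chain is intact (no tampering or reordering)."""
        prev = "genesis"
        for i, e in enumerate(self.entries):
            if e.seq != i or e.prev_hash != prev:
                return False
            prev = e.entry_hash
        return True

    def sync_from(self, other: "AppendOnlyLog") -> int:
        """Pull entries the peer has that we lack. Only a strict
        fast-forward is allowed; diverged histories raise an error."""
        n = len(self.entries)
        for mine, theirs in zip(self.entries, other.entries):
            if mine.entry_hash != theirs.entry_hash:
                raise ValueError("histories diverged; cannot fast-forward")
        self.entries.extend(other.entries[n:])
        return len(self.entries) - n
```

The hash chain is what makes the log tamper-evident, and the fast-forward rule is the simplest possible sync policy; a real protocol would also need conflict handling and an authenticated transport, which this sketch deliberately omits.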
Yes, I enjoyed the article as well; it's good for the non-technical reader.
I frame AI as having two fundamental problems:
- Practical problem: They operate in contextual and emotional "isolation" - no persistent understanding of your goals, values, or long-term intent
- Ethical problem: AI alignment is centralized around corporate values rather than individual users' authentic goals and ethics.
There is a direct parallel to social media's failure: platforms optimized for what they could do (engagement, monetization) rather than what they should do (serve users' long-term interests).
With these much more powerful AI systems emerging, we're at a crossroads where we risk repeating this mistake, possibly at catastrophic scale.