Building Trust for AI Agents — ISM-X: A Privacy-Preserving Identity Layer (with demo)



This content originally appeared on DEV Community and was authored by Damjan Žakelj

In distributed AI systems, continuity and trust are hard problems.
An agent that restarts, migrates, or forks can lose its identity.
ISM-X is our answer — a small, privacy-preserving layer that combines cryptographic identity (DIDs) with attestation (an HMAC tag over a commitment).

1. What we share

Reference code (Apache-2.0, ~250 lines)

Ed25519-signed passports

HMAC tag over pre-hashed commitments (no raw metrics)

Time/TTL, revocation, constant-time verification
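The passport side of the list above can be sketched with the `cryptography` library. This is an illustrative sketch only: the `issue_passport` and `verify_passport` helpers and the payload fields are hypothetical, not the actual ISM-X reference code.

```python
# Illustrative Ed25519-signed passport; helper and field names are
# hypothetical, not the ISM-X reference implementation.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def issue_passport(signer: Ed25519PrivateKey, did: str, ttl_s: int = 3600) -> dict:
    """Sign a minimal passport payload with the issuer's Ed25519 key."""
    now = int(time.time())
    payload = {"did": did, "iat": now, "exp": now + ttl_s}
    # Canonical JSON so issuer and verifier sign/check identical bytes.
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return {"payload": payload, "sig": signer.sign(body).hex()}


def verify_passport(passport: dict, public_key) -> bool:
    """Check the Ed25519 signature, then the TTL window."""
    body = json.dumps(passport["payload"], sort_keys=True,
                      separators=(",", ":")).encode()
    try:
        public_key.verify(bytes.fromhex(passport["sig"]), body)
    except InvalidSignature:
        return False
    return time.time() < passport["payload"]["exp"]


key = Ed25519PrivateKey.generate()
pp = issue_passport(key, "did:example:agent-42")
print(verify_passport(pp, key.public_key()))  # True for a fresh, untampered passport
```

Canonical JSON serialization (sorted keys, fixed separators) matters here: issuer and verifier must hash-and-sign byte-identical payloads or verification fails spuriously.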

2. What we don’t share

No private resonance metrics or production keys.
The demo ships with DEMO_KEY_DO_NOT_USE, which is safe for sandboxing only.
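The attestation side can be sketched with the Python standard library alone. Metrics are pre-hashed before tagging, so the verifier only ever sees a commitment, never raw values. Only the DEMO_KEY_DO_NOT_USE placeholder comes from the demo; the `attest`/`check` helpers are illustrative.

```python
import hashlib
import hmac

DEMO_KEY = b"DEMO_KEY_DO_NOT_USE"  # sandbox key from the demo; never use in production


def attest(metrics: bytes, key: bytes = DEMO_KEY) -> tuple[bytes, bytes]:
    """Return (commitment, tag); raw metrics never leave the agent."""
    commitment = hashlib.sha256(metrics).digest()  # pre-hash hides the raw values
    tag = hmac.new(key, commitment, hashlib.sha256).digest()
    return commitment, tag


def check(commitment: bytes, tag: bytes, key: bytes = DEMO_KEY) -> bool:
    """Recompute the tag and compare in constant time to avoid timing leaks."""
    expected = hmac.new(key, commitment, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


c, t = attest(b"resonance=0.93;drift=0.01")
print(check(c, t))  # True
```

`hmac.compare_digest` is what makes the verification constant-time: a naive `==` on bytes can short-circuit and leak how many leading bytes of the tag matched.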

3. Run the demo

```shell
git clone https://github.com/Freeky7819/ismx-demo
cd ismx-demo
python ismx_open_demo.py
```

You’ll see the passport issuance, signature verification, and audit log in action.
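Revocation, one of the checks listed earlier, can be layered on top of signature verification as a simple registry lookup. A minimal sketch with assumed names (`revoke`, `is_valid`); the demo's actual storage and revocation mechanism may differ.

```python
import time

# Hypothetical in-memory revocation registry; the real demo may persist this.
revoked: set[str] = set()


def revoke(passport_id: str) -> None:
    """Mark a passport as revoked; subsequent checks reject it."""
    revoked.add(passport_id)


def is_valid(passport_id: str, exp: float) -> bool:
    """A passport must be both unexpired and not revoked."""
    return passport_id not in revoked and time.time() < exp


pid = "pp-001"
exp = time.time() + 60
print(is_valid(pid, exp))  # True: fresh and not revoked
revoke(pid)
print(is_valid(pid, exp))  # False: revoked before expiry
```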

4. Why this matters

ISM-X bridges two domains:

Identity: persistent cryptographic DIDs.

Integrity: attestations that don’t leak proprietary state.

It’s a foundational step for local-first, privacy-preserving AI systems.

5. What’s next

3-of-5 policy quorum

FROST/BLS threshold signatures

Optional ZK-commit proofs

🔗 GitHub – ISM-X Demo Public Pack v1

License: Apache-2.0
Author: Freedom (Damjan)

