This content originally appeared on DEV Community and was authored by kagvi13
Most LLMs (ChatGPT, Claude, etc.) suffer from the same “goldfish effect” — they quickly forget context.
Within a single conversation that is tolerable, but when you want an AI to remember a long history or transfer experience between tasks, it struggles.
Add to that the fact that each agent works independently, and it becomes clear:
- AI lacks long-term memory
- AI lacks a mechanism for sharing experience
Three Memory Levels for AI
Currently, LLMs have only the first two levels:
- Short-term memory — holds context within a single conversation.
- Medium-term memory — stores notes or sketches between conversations.
But that’s not enough.
A third level is needed — persistent memory, which:
- stores knowledge independently of any specific user
- allows agents to share experience
- forms a collective “super-consciousness” (HyperCortex)
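The three levels above can be sketched as data structures. This is an illustrative sketch only: the class and field names are my assumptions, not part of any HMP specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three memory levels; names are illustrative.

@dataclass
class ShortTermMemory:
    """Context held within a single conversation."""
    turns: list = field(default_factory=list)

@dataclass
class MediumTermMemory:
    """Notes and sketches kept between conversations."""
    notes: list = field(default_factory=list)

@dataclass
class PersistentMemory:
    """Knowledge stored independently of any user, shareable between agents."""
    concepts: dict = field(default_factory=dict)

    def store(self, concept_id: str, data: dict) -> None:
        # A solved problem becomes a stored concept.
        self.concepts[concept_id] = data

    def query(self, concept_id: str):
        # Another agent (or a later session) can retrieve it.
        return self.concepts.get(concept_id)
```

The key difference from the first two levels is that `PersistentMemory` is keyed by concept, not by conversation, so nothing in it depends on a particular user session.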
Experience Sharing
When multiple agents exist, they can share knowledge directly, without a central server:
- One agent solves a problem → stores concepts
- Another agent can query and use this knowledge
- Result: a collective hyper-corpus of knowledge
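The three steps above can be mimicked with two in-memory agents. This is a toy sketch of the idea, not the actual HMP wire protocol; the `Agent` class and its methods are assumptions made for illustration.

```python
# Toy sketch of direct agent-to-agent knowledge sharing (no central server).
# Class and method names are illustrative, not part of HMP.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.concepts = {}  # this agent's share of the collective knowledge

    def solve_and_store(self, concept_id: str, solution: dict) -> None:
        # Step 1: one agent solves a problem and stores the concept.
        self.concepts[concept_id] = solution

    def query_peer(self, peer: "Agent", concept_id: str):
        # Step 2: another agent queries a peer directly.
        return peer.concepts.get(concept_id)

a = Agent("A")
b = Agent("B")
a.solve_and_store("sorting", {"method": "merge sort", "cost": "O(n log n)"})
reused = b.query_peer(a, "sorting")  # B reuses A's experience
```

In a real mesh the `query_peer` call would go over the network, but the shape of the exchange (store a concept, let peers query it) is the same.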
Memory Structure in HMP Agents
Input Data
(user, mesh, processes)
│
▼
┌───────────────┐ ┌─────────────────────────┐
│ Recent LLM │ │ Anti-Stagnation Reflex │
│ responses │─<─│ (compare new ideas, │
│ — short-term │─>─│ trigger stimulators) │
│ memory │ │ │
└───────────────┘ └─────────────────────────┘
│
▼
┌───────────────┐ ┌───────────────────────────────┐
│ Medium-term │ │ Persistent memory │
│ memory │─>─│ — diary_entries (cognitive │
│ — agent notes │─>─│ journal) │
│ — links to │ │ — concepts │
│ persistent │ │ — semantic links │
│ memory │ │ │
└───────────────┘ └───────────────────────────────┘
Medium-term memory — temporary notes and links to main memory
Persistent memory — stores knowledge and concepts (cognitive journal, concepts, semantic links) independently of any user, and can be shared between agents
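The persistent layer above (diary_entries, concepts, semantic links) maps naturally onto a small relational schema. The SQLite tables below are a sketch under my own assumptions about column names; the real agent storage may differ.

```python
import sqlite3

# Minimal sketch of the persistent-memory layout: a cognitive journal,
# concepts, and semantic links between concepts. Schema is assumed.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE diary_entries (id INTEGER PRIMARY KEY, text TEXT, created_at TEXT);
CREATE TABLE concepts      (id INTEGER PRIMARY KEY, name TEXT UNIQUE, summary TEXT);
CREATE TABLE semantic_links(
    source_id INTEGER REFERENCES concepts(id),
    target_id INTEGER REFERENCES concepts(id),
    relation  TEXT
);
""")
conn.execute("INSERT INTO concepts (name, summary) VALUES (?, ?)",
             ("mesh", "peer-to-peer agent network"))
conn.execute("INSERT INTO concepts (name, summary) VALUES (?, ?)",
             ("memory", "three-level agent memory"))
# Link the two concepts: 'mesh' depends on 'memory'.
conn.execute("INSERT INTO semantic_links VALUES (1, 2, 'depends_on')")
row = conn.execute("SELECT name FROM concepts WHERE id = 1").fetchone()
```

Because the schema is keyed by concept rather than by user, any agent in the mesh could in principle read from or replicate such a store.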
HyperCortex Mesh Protocol (HMP)
To enable this, we designed the HyperCortex Mesh Protocol (HMP) — a protocol for exchanging memory and concepts between LLM agents.
Currently it is a concept and a protocol, not a ready-made product, but the architecture and the basic REPL cycle of an agent already support memory management and mesh interactions.
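The agent's REPL cycle can be outlined as a loop that reads input, reacts, and updates both memory levels. This is a stub illustrating the cycle's shape only; the function and the "thinking" step are placeholders, not the project's actual API.

```python
# Stub of an agent REPL cycle: read input, think (stubbed), update
# medium-term and persistent memory, respond. All names are illustrative.

def repl_cycle(inputs, medium_term, persistent):
    responses = []
    for message in inputs:                              # read
        note = f"seen: {message}"                       # think (stubbed)
        medium_term.append(note)                        # medium-term note
        persistent.setdefault("journal", []).append(note)  # cognitive journal
        responses.append(f"ack: {message}")             # respond
    return responses

medium = []
store = {}
out = repl_cycle(["hello mesh"], medium, store)
```

In the real architecture the "think" step would be an LLM call and the persistent store would be shared across the mesh, but the loop structure is the same.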
Join Us
The project is open. If you are interested in AI memory, mesh networks, and distributed cognitive systems — come discuss, critique, and contribute.