
Kumiho: Graph-Native Memory for AI Agents


💡 SOTA agent memory beats Gemini ~2x on cognitive benchmarks, with formal proofs

⚡ 30-Second TL;DR

What Changed

Formal proofs that Kumiho satisfies the AGM revision postulates (K*2–K*6) and Hansson's core-retainment condition

Why It Matters

Provides reliable, versioned memory for AI agents, enabling better handling of beliefs and assets. It outperforms all baselines at low cost (~$14 per evaluation run), and its model-agnostic design improves scalability.

What To Do Next

Download arXiv:2603.17244v1 and prototype Kumiho's dual-store in your agent pipeline.
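Before wiring up real stores, the dual-store shape is easy to prototype in memory. A minimal sketch, assuming a `deque` stands in for the Redis working buffer and a dict-based adjacency map for the Neo4j graph; the class and method names here are hypothetical, not Kumiho's actual SDK:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class DualStoreMemory:
    """Toy dual-store: bounded working buffer (Redis stand-in) plus
    a long-term adjacency graph (Neo4j stand-in)."""
    working: deque = field(default_factory=lambda: deque(maxlen=20))
    nodes: dict = field(default_factory=dict)   # fact_id -> text
    edges: dict = field(default_factory=dict)   # fact_id -> set of (edge_type, target_id)

    def remember(self, fact_id: str, text: str) -> None:
        # New facts land in the short-term buffer first.
        self.working.append((fact_id, text))

    def consolidate(self) -> None:
        # Flush the buffer into the long-term graph store.
        while self.working:
            fact_id, text = self.working.popleft()
            self.nodes[fact_id] = text
            self.edges.setdefault(fact_id, set())

    def link(self, src: str, edge_type: str, dst: str) -> None:
        # Typed edges record provenance between facts.
        self.edges[src].add((edge_type, dst))

mem = DualStoreMemory()
mem.remember("f1", "User prefers dark mode")
mem.remember("f2", "User switched to light mode")
mem.consolidate()
mem.link("f2", "SUPERSEDES", "f1")
```

Swapping the stand-ins for real Redis and Neo4j clients keeps the same interface while gaining persistence and graph queries.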

Who should care: Researchers & Academics

🧠 Deep Insight

Web-grounded analysis with 6 cited sources.

🔑 Enhanced Key Takeaways

  • Kumiho offers a cloud service at kumiho.io alongside an open-source Python SDK, an MCP memory plugin, and a benchmark suite on GitHub.
  • Dream State is an asynchronous consolidation process that enriches memories overnight: reviewing for staleness, adding semantic tags, creating relationship edges, and deprecating superseded facts.
  • Kumiho ships integrations such as an OpenClaw plugin with zero-latency prefetch recall and two-track consolidation (threshold and idle), plus a Claude skill for persistent memory across AI sessions.
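The Dream State sweep described above can be sketched as a single pass over stored records. This is a minimal illustration under stated assumptions: the record schema (`topic`, `updated`, `deprecated`) is invented for the example, and a trivial recency rule replaces the LLM-driven judgments the paper describes:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical memory records; field names are illustrative, not Kumiho's schema.
memories = [
    {"id": "m1", "topic": "deadline", "text": "Deadline is March 1",
     "updated": datetime(2026, 1, 5, tzinfo=timezone.utc), "deprecated": False},
    {"id": "m2", "topic": "deadline", "text": "Deadline moved to April 1",
     "updated": datetime(2026, 3, 10, tzinfo=timezone.utc), "deprecated": False},
]

def dream_state_pass(records, now, stale_after=timedelta(days=30)):
    """One overnight sweep: flag stale entries, deprecate superseded facts."""
    latest = {}  # newest record seen per topic
    for rec in records:
        # Staleness review: anything untouched past the window gets flagged.
        rec["stale"] = (now - rec["updated"]) > stale_after
        prev = latest.get(rec["topic"])
        if prev is None or rec["updated"] > prev["updated"]:
            latest[rec["topic"]] = rec
    for rec in records:
        # Older facts on the same topic are deprecated, not deleted,
        # so provenance survives in the graph.
        rec["deprecated"] = rec is not latest[rec["topic"]]
    return records

dream_state_pass(memories, now=datetime(2026, 3, 12, tzinfo=timezone.utc))
```

Deprecation-instead-of-deletion is the point: the superseded fact stays queryable with its history intact.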
📊 Competitor Analysis

| Feature | Kumiho | Graphiti (2025) | Mem0 (2025) | A-MEM (2025) | RAG + Vector DB |
| --- | --- | --- | --- | --- | --- |
| Architectural Synthesis & Formal Grounding | Yes (AGM semantics) | Individual components only | Individual components only | Individual components only | No |
| Provenance Tracing | Typed edges, immutable revisions | Not specified | Not specified | Not specified | Not available |
| Contradiction Handling | Dream State resolution | Not specified | Not specified | Not specified | Model-dependent |
| Model Decoupling | Yes, survives swaps | Not specified | Not specified | Not specified | Re-embedding required |
| Benchmarks | LoCoMo 0.565 F1, LoCoMo-Plus 93.3% | Not compared directly | Not compared directly | Not compared directly | Lower retrieval accuracy |

๐Ÿ› ๏ธ Technical Deep Dive

  • Dual-store model: Redis serves as the short-term working-memory buffer; Neo4j provides long-term graph storage.
  • Hybrid retrieval combines fulltext search, graph traversal, and vector search; prospective indexing generates LLM-based future-scenario implications at write time.
  • Client-side LLM reranking improves accuracy (e.g., swapping GPT-4o-mini for GPT-4o boosts results from ~88% to 93.3%); Principle 6 stores minimal metadata in the cloud graph while keeping raw content local for privacy.
  • The OpenClaw plugin features zero-latency prefetch (parallel recall during response generation), two-track consolidation (threshold: 20 messages; idle: 300 s), and creative_recall for versioned agent outputs with krefs.
  • Local mode is recommended, with a cloud HTTPS API-key option; Dream State uses LLM auth for scheduled maintenance without separate keys.
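The hybrid-retrieval idea above can be sketched in a few lines: score documents by keyword overlap plus vector cosine similarity, then boost graph neighbours of the top hit. The toy corpus, the 0.5/0.5 weighting, and the 0.25 neighbour boost are illustrative assumptions, not Kumiho's actual scoring:

```python
import math

# Toy corpus: id -> (text, embedding); graph edges supply a neighbourhood boost.
docs = {
    "d1": ("redis working memory buffer", [1.0, 0.0]),
    "d2": ("neo4j long term graph storage", [0.0, 1.0]),
    "d3": ("hybrid retrieval notes", [0.7, 0.7]),
}
edges = {"d1": {"d3"}, "d2": {"d3"}, "d3": {"d1", "d2"}}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def hybrid_search(query_text, query_vec, top_k=2):
    scores = {}
    for doc_id, (text, vec) in docs.items():
        # Fulltext leg: crude keyword-overlap count.
        fulltext = sum(tok in text.split() for tok in query_text.split())
        # Vector leg: cosine similarity against the stored embedding.
        scores[doc_id] = 0.5 * fulltext + 0.5 * cosine(query_vec, vec)
    # Graph-traversal leg: neighbours of the strongest hit get a small boost.
    best = max(scores, key=scores.get)
    for neigh in edges.get(best, ()):
        scores[neigh] += 0.25
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

hybrid_search("graph storage", [0.0, 1.0])
```

The graph leg is what a pure vector DB lacks: a document with no lexical or embedding match can still surface because it is linked to one that does.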

🔮 Future Implications

AI analysis grounded in cited sources.

Kumiho's model-decoupled design will enable seamless integration across LLM providers without re-embedding costs.
The architecture stores structured metadata and provenance independently of specific models, as shown by accuracy gains from swapping GPT-4o-mini to GPT-4o.
Graph-native memory will become standard for production AI agents requiring auditability.
Immutable revisions and typed edges provide built-in provenance tracing absent in vector DBs, addressing the 'black box' problem in agent memory.

โณ Timeline

2025
Concurrent systems Graphiti, Mem0, and A-MEM introduce individual memory components.
2026-03
Kumiho paper published on arXiv with formal AGM proofs and SOTA LoCoMo benchmarks.
2026-03
Kumiho launches cloud service at kumiho.io and open-source GitHub repository.

AI-curated news aggregator. All content rights belong to original publishers.
Original source: ArXiv AI ↗