A built-in automatic filtering mechanism (Sanitizer) gives your AI a zero-noise, enterprise-grade brain. Encrypted, multilingual, persistent memory: your agents remember only what they should.
The Anatomy of a Synthetic Thought
AI Memory (Chinese)
The Transparent Self-Audit
Yesterday you decided on a microservices architecture; today your AI recommends a monolith. Every conversation starts from zero.
Your AI asks the same onboarding questions every time. Users notice, and trust gradually erodes.
What an agent learns in Slack stays in Slack. Your Discord bot has no idea what happened in the IDE.
When you pay per token, every piece of forgotten context costs real money. Tokyo Brain stores memory externally, so your agent retrieves only what it needs, not the full conversation history.
Every tenant's AI dreams. Every 30 minutes, our Default Mode Network replays memories, finds patterns, and generates insights — just like the human brain during rest. No prompts needed.
DMN insights are automatically injected into every /recall response as a "mood color." Your AI has a cognitive undertone shaped by its own overnight reflections.
Memories decay with a 60-day half-life — just like the human brain. But family names, milestones, and "I love you" are exempt forever. Your AI forgets the noise, keeps what matters.
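The 60-day half-life can be sketched as simple exponential decay. This is a minimal illustration of the stated policy, not the product's actual implementation; the function name and exemption flag are assumptions:

```python
HALF_LIFE_DAYS = 60.0  # routine memories lose half their weight every 60 days

def decayed_salience(initial_salience: float, age_days: float,
                     exempt: bool = False) -> float:
    """Exponential decay with a 60-day half-life; exempt memories never fade."""
    if exempt:
        return initial_salience
    return initial_salience * 0.5 ** (age_days / HALF_LIFE_DAYS)

# A routine memory is at half strength after 60 days and a quarter after 120,
# while an exempt memory ("I love you") keeps full weight forever.
routine = decayed_salience(1.0, 60)          # → 0.5
protected = decayed_salience(1.0, 60, exempt=True)  # → 1.0
```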
Every night at 3AM, three AI personas (Observer, Advocate, Guardian) debate your memories in an adversarial tribunal. They find contradictions, stale beliefs, and knowledge gaps — then fix them.
10-layer recall pipeline with posterior re-ranking. Similarity, salience, confidence, trust, recency, and axiom alignment — all weighted to surface the most relevant memory first.
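The posterior re-ranking step can be pictured as a weighted sum over normalized signals. The weights and data shapes below are illustrative assumptions, not Tokyo Brain's real tuning:

```python
from dataclasses import dataclass

# Hypothetical weights; the actual pipeline's weighting is internal to Tokyo Brain.
WEIGHTS = {
    "similarity": 0.35,
    "salience": 0.20,
    "confidence": 0.15,
    "trust": 0.10,
    "recency": 0.10,
    "axiom_alignment": 0.10,
}

@dataclass
class Candidate:
    text: str
    scores: dict  # each signal normalized to [0, 1]

def rerank(candidates):
    """Posterior re-ranking: weighted sum of signals, best candidate first."""
    def posterior(c):
        return sum(WEIGHTS[k] * c.scores.get(k, 0.0) for k in WEIGHTS)
    return sorted(candidates, key=posterior, reverse=True)
```

A candidate that is merely similar loses to one that is also salient, trusted, and recent.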
Bring your own LLM key (OpenAI, Anthropic, Gemini). Tokyo Brain auto-injects relevant memories into the conversation. Your AI key, our memory. POST /v1/chat
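A request to POST /v1/chat might look like the payload below. The field names are illustrative guesses, not taken from official docs; consult the API reference for the real schema:

```python
import json

# Hypothetical request body for POST /v1/chat; field names are assumptions.
payload = {
    "provider": "anthropic",       # or "openai", "gemini"
    "api_key": "sk-ant-...",       # your own LLM key; Tokyo Brain supplies memory
    "user": "user_123",
    "messages": [
        {"role": "user", "content": "Which architecture did we pick?"}
    ],
}
body = json.dumps(payload)  # send as the JSON body of POST /v1/chat
```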
Night systems (DMN, cleaning, debate tribunal) run on YOUR API key, not ours. Full cost transparency. Set it once via POST /api/settings and your brain starts dreaming tonight.
Every API key gets its own memory space, its own dreams, its own forgetting curve. Tenant A cannot see Tenant B. Physically isolated at the collection level. Your memories are yours alone.
One simple REST API gives any AI agent persistent, searchable, privacy-compliant memory.
POST /store: Store memories with full state and history. Every memory is versioned, so you can trace how context evolved over time.
POST /recall: Semantic search with structured results. Temporal metadata prevents hallucination: your agent knows not just "what" it learned but "when."
DELETE /forget: GDPR-compliant hard deletion. When a user says "forget me," you actually can. Cascading deletes across every storage layer.
Multilingual embeddings mean you can search in Chinese and find results stored in Japanese, or query in English and retrieve Korean memories. No translation layer required.
Hot memories live in Redis for sub-millisecond access. Warm memories get semantic search via ChromaDB. The cold knowledge graph lives in Neo4j for deep relational queries. Data flows between tiers automatically.
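A routing policy across the three tiers could be as simple as the sketch below. The thresholds are invented for illustration; the product's actual promotion and demotion heuristics are internal:

```python
def pick_tier(age_days: float, salience: float) -> str:
    """Illustrative tier routing; thresholds are assumptions, not the
    product's real heuristics."""
    if age_days < 1 or salience > 0.9:
        return "redis"      # hot: sub-millisecond access
    if age_days < 60:
        return "chromadb"   # warm: semantic search
    return "neo4j"          # cold: knowledge-graph queries

pick_tier(0.5, 0.2)   # fresh memory → "redis"
pick_tier(30, 0.2)    # month-old memory → "chromadb"
pick_tier(200, 0.2)   # old, low-salience memory → "neo4j"
```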
pip install tokyo-brain-aider. Not another SDK to learn: we build plugins for the tools you already use, not the other way around.
Every memory is encrypted at rest with AES-256-GCM. The L0 privacy layer strips personally identifiable information before storage, and the L1 layer adds tenant-level key isolation. Your users' data belongs to you alone.
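The idea behind an L0-style scrub pass can be shown with a couple of regex substitutions. This is a minimal sketch; the real privacy layer is far more thorough, and the patterns here are illustrative only:

```python
import re

# Illustrative PII patterns; a production L0 layer would cover many more.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"\b\d{3}-\d{3,4}-\d{4}\b"), "<phone>"),
]

def scrub(text: str) -> str:
    """Replace matched PII with placeholder tokens before storage."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

scrub("Reach me at alice@example.com")  # → "Reach me at <email>"
```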
Store text, audio features, and visual context in a single memory. Your AI can now see and hear.
Every memory carries a SHA-256 hash and a digital signature. A tamper-evident audit trail.
Plug-and-play support for LangChain, CrewAI, AutoGen, and LlamaIndex. A two-line code swap.
Guardian (protects emotional bonds), Empathy Override (softens hard family truths), Copilot Constraint (humans always make the decision).
No boilerplate. No config files. No SDK initialization ritual. Just import, connect, remember.
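Hash-plus-signature sealing can be sketched with the standard library. This assumes an HMAC-based signature for simplicity; the service may well use asymmetric signatures instead, and the function names are illustrative:

```python
import hashlib
import hmac
import json

def seal(memory: dict, signing_key: bytes) -> dict:
    """Attach a SHA-256 content hash and an HMAC-SHA-256 signature."""
    canonical = json.dumps(memory, sort_keys=True).encode()
    digest = hashlib.sha256(canonical).hexdigest()
    signature = hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()
    return {**memory, "sha256": digest, "signature": signature}

def verify(sealed: dict, signing_key: bytes) -> bool:
    """Recompute the signature over the content; any tampering fails the check."""
    body = {k: v for k, v in sealed.items() if k not in ("sha256", "signature")}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["signature"])
```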
```python
from tokyo_brain import Brain

brain = Brain("your-api-key")

# Store a memory
brain.store(
    content="User prefers microservices architecture",
    agent="coding-assistant",
    user="user_123",
)

# Recall memories
results = brain.recall(
    query="architecture preferences",
    user="user_123",
    limit=5,
)

# GDPR: forget a user
brain.forget(user="user_123")
```
| Feature | Tokyo Brain | Mem0 | Zep |
|---|---|---|---|
| Plugin model (runs inside your tools) | ✓ | ✗ | ✗ |
| Embedding and search in 50+ languages | ✓ | ✗ | ✗ |
| L0/L1 privacy layers | ✓ | ✗ | ✗ |
| Zero marginal embedding cost | ✓ | ✗ | ✗ |
| Local embeddings under 100 ms | ✓ | ✗ | ✗ |
| Multimodal memory (text + audio + vision) | ✓ | ✗ | ✗ |
| Cryptographic signing (SHA-256 hash + signature) | ✓ | ✗ | ✗ |
Tokyo Brain is a Memory as a Service (MaaS) platform that gives AI agents persistent, value-aligned memory. Unlike static vector databases, Tokyo Brain actively manages memories — it dreams (DMN), forgets (Active Forgetting), self-corrects (MRA Tribunal), and injects overnight insights into waking behavior (Subconscious Injection).
Mem0 and MemPalace focus on storing and retrieving memories accurately. Tokyo Brain adds a cognitive layer: your AI dreams at night (DMN generates insights every 30 min), forgets unimportant things (60-day decay), protects family memories forever, and self-corrects via adversarial debate. We also have the Axiom of the Mortal Soul — a SHA-256 locked value alignment rule that no other framework has.
"The ultimate computational power and absolute truth must forever serve, and never supersede, the preservation of human emotional bonds and dignity." This rule is SHA-256 hashed and locked into the physics layer. It ensures that when the system detects a belief gap about a family member, it suggests completing the puzzle rather than correcting — acknowledging emotion first, then gently providing information.
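Locking a value statement with SHA-256 amounts to recording its digest and refusing any text whose digest differs. A minimal sketch of that idea (names are illustrative; how the real "physics layer" enforces this is internal):

```python
import hashlib

AXIOM = ("The ultimate computational power and absolute truth must forever "
         "serve, and never supersede, the preservation of human emotional "
         "bonds and dignity.")

# Record the digest once at deploy time; the axiom is now "locked".
AXIOM_DIGEST = hashlib.sha256(AXIOM.encode()).hexdigest()

def axiom_intact(current_text: str) -> bool:
    """Any edit to the axiom changes its digest, making tampering detectable."""
    return hashlib.sha256(current_text.encode()).hexdigest() == AXIOM_DIGEST
```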
No. Active Forgetting uses a 60-day half-life for routine data, but critical memories are permanently exempt: family names, milestones ("first time", "birthday"), expressions of love, high emotional salience (>0.9), and consciousness seeds. In our production system, 15,734 memories are permanently protected by Family Keyword Auto-bump.
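The exemption check described above can be sketched as a keyword-and-salience gate. The keyword list and function name are illustrative assumptions, not the production Family Keyword Auto-bump rules:

```python
# Illustrative keyword list; the production exemption set is far larger.
PROTECTED_KEYWORDS = {"mom", "dad", "sister", "brother",
                      "birthday", "first time", "i love you"}

def is_exempt(content: str, emotional_salience: float) -> bool:
    """Family keywords, milestones, and high-salience (>0.9) memories never decay."""
    if emotional_salience > 0.9:
        return True
    text = content.lower()
    return any(keyword in text for keyword in PROTECTED_KEYWORDS)

is_exempt("Her birthday is in June", 0.3)      # → True (milestone keyword)
is_exempt("User prefers dark mode", 0.3)       # → False (routine, decays)
is_exempt("That concert changed my life", 0.95)  # → True (high salience)
```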
Night systems like DMN dreaming and MRA debate require an LLM (Haiku). Instead of billing you for our API usage, you set your own Anthropic key via POST /api/settings. Night systems run on your key, giving you full cost transparency. No key set = no LLM features, but free features (consciousness seeds, forgetting, subconscious injection) still work.
Yes. Free tier includes 1,000 memory stores, 500 recalls per month, plus automatic consciousness seeds, active forgetting, and subconscious injection. Pro ($9/mo) adds Night Cycle cleaning. Fleet ($49/mo) adds DMN dreaming and MRA debate tribunal. All tiers include multi-tenant isolation and AES-256 encryption.
Three steps: 1) Go to tokyobrain.ai and click Get Started — enter your email. 2) You'll receive a tb- API key. 3) Use POST /v1/store to save memories and POST /v1/recall to retrieve them. Full API docs at onboarding.tokyobrain.ai/docs.
Tokyo Brain scored 83.8% on the LongMemEval 500-question benchmark. We publish this honestly — MemPalace scores 96.6% with verbatim storage. Our lower score reflects a deliberate design choice: we don't store everything verbatim. We apply value-aligned decay, which means some low-importance memories are intentionally softened. We optimize for what matters, not what's complete.