What BrainLayer — Persistent Memory for AI Agents can do
Semantic vectors meet keyword precision
Single queries run two search strategies simultaneously. bge-large-en-v1.5 embeddings (1024 dimensions) find conceptually similar content, while FTS5 catches exact keyword matches. Reciprocal Rank Fusion merges both ranked lists — results appearing in both get boosted, giving you the best of both worlds without tuning weights.
```python
results = brainlayer.search(
    query="authentication middleware",
    project="golems",
    intent="implementing",
    importance_min=5,
    n_results=10
)
# Returns: score, content, summary, tags,
# importance, intent, primary_symbols
```
Search with filters — project, intent, importance threshold
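The fusion step is simple to picture; a minimal sketch of Reciprocal Rank Fusion over two ranked ID lists (the constant k=60 is the common default from the RRF literature, not a documented BrainLayer setting):

```python
def rrf_merge(vector_hits, keyword_hits, k=60):
    """Merge two ranked lists of document IDs with Reciprocal Rank Fusion."""
    scores = {}
    for ranked in (vector_hits, keyword_hits):
        for rank, doc_id in enumerate(ranked, start=1):
            # Documents appearing in both lists accumulate score from each.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

merged = rrf_merge(["a", "b", "c"], ["b", "d", "a"])
# "a" and "b" appear in both lists, so they rank above "c" and "d"
```

Because each document's score is a sum of reciprocal ranks, overlap between the vector and keyword lists is rewarded automatically, with no per-strategy weight to tune.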
Topic-specific tags via Gemini 2.5 Flash
Three enrichment backends (Groq, MLX, Ollama) analyze every chunk and generate structured metadata. Enrichment v2 uses a faceted tag schema: topic tags (brainlayer-search-quality, cmux-terminal-orchestration), activity tags (act:debugging, act:implementing), domain tags (dom:python, dom:sql), and confidence scores. A 100-chunk pilot produced 98% valid JSON and 204 unique topic tags.
```json
{
  "summary": "Debugging Telegram bot message drops",
  "tags": ["telegram", "debugging", "performance"],
  "importance": 8,
  "intent": "debugging",
  "primary_symbols": ["TelegramBot", "handleMessage"],
  "resolved_query": "Why does the bot drop messages?",
  "epistemic_level": "substantiated",
  "debt_impact": "resolution"
}
```
Enrichment output per chunk — 10 structured fields
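The faceted scheme groups tags by prefix; a hypothetical helper (the act:/dom: prefixes come from the schema above, the function name and facet labels are illustrative):

```python
def split_facets(tags):
    """Group enrichment tags by facet prefix (act:, dom:) vs. bare topic tags."""
    facets = {"topic": [], "activity": [], "domain": []}
    for tag in tags:
        if tag.startswith("act:"):
            facets["activity"].append(tag[4:])
        elif tag.startswith("dom:"):
            facets["domain"].append(tag[4:])
        else:
            facets["topic"].append(tag)
    return facets

facets = split_facets(["brainlayer-search-quality", "act:debugging", "dom:python"])
# topic, activity, and domain tags land in separate buckets
```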
Powerful memory layer with 12 intelligent tools that understand what you need
Consolidated from 14 specialized tools to 12 that cover every use case: 3 core memory tools (brain_search, brain_store, brain_recall) plus 9 knowledge graph and lifecycle tools (brain_digest, brain_entity, brain_update, brain_expand, brain_tags, brain_subscribe, brain_unsubscribe, brain_stats, brain_crossref). Backward-compat aliases keep existing workflows intact.
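Backward compatibility can be as simple as an alias table that resolves retired tool names before dispatch; a hypothetical sketch (the current tool names come from the list above, but the retired names and mapping are purely illustrative):

```python
# Hypothetical alias table: retired tool names resolve to the surviving 12.
TOOL_ALIASES = {
    "brain_find": "brain_search",  # illustrative retired name
    "brain_save": "brain_store",   # illustrative retired name
}

def resolve_tool(name):
    """Map a possibly-deprecated tool name to its current equivalent."""
    return TOOL_ALIASES.get(name, name)

resolve_tool("brain_save")    # resolves to "brain_store"
resolve_tool("brain_recall")  # current names pass through unchanged
```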
Native macOS daemon for always-on recall
A 209KB Swift binary providing MCP over Unix socket. BrainBar runs as a macOS LaunchAgent, handling all MCP connections through a high-performance native bridge. Real-time indexing hooks capture prompt/response pairs as they happen — every conversation is indexed without manual intervention. Dual-protocol support (NDJSON + MCP Content-Length) ensures compatibility with all Claude Code transports.
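The two framings differ only in how a JSON-RPC message is delimited on the wire; a sketch of both (not BrainBar's actual Swift code, just the framing the paragraph describes):

```python
import json

def frame_ndjson(message: dict) -> bytes:
    """NDJSON framing: one JSON object per newline-terminated line."""
    return (json.dumps(message) + "\n").encode()

def frame_content_length(message: dict) -> bytes:
    """LSP-style framing: a Content-Length header, blank line, then the body."""
    body = json.dumps(message).encode()
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

msg = {"jsonrpc": "2.0", "method": "tools/call", "id": 1}
ndjson_bytes = frame_ndjson(msg)
lsp_bytes = frame_content_length(msg)
```

A dual-protocol server sniffs the first bytes of each connection: a `Content-Length:` header means LSP-style framing, anything else is treated as NDJSON.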
Entities, relations, and person lookup across your codebase
BrainLayer builds a knowledge graph from your conversations. Bilingual entity extraction (English + Hebrew) with 3 strategies: GLiNER model, regex patterns, and seed entity matching. 119 entities across people, projects, and technologies, connected by typed relations. Person lookup returns entity profiles with scoped memories in a single call. Sentiment analysis per chunk adds emotional context to your development history.
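Two of the three strategies are cheap enough to sketch inline; a minimal illustration of seed-entity matching layered with a naive regex pass (the GLiNER model pass is omitted, and the seed table, regex, and function name are all illustrative, not BrainLayer's actual extractor):

```python
import re

# Illustrative seed table: known entities with their types.
SEED_ENTITIES = {"BrainLayer": "project", "Claude Code": "technology"}

def extract_entities(text):
    """Combine seed-entity matching with a regex pass for CamelCase names."""
    found = {}
    for name, kind in SEED_ENTITIES.items():
        if name in text:
            found[name] = kind
    # Regex pass: tokens made of two or more capitalized parts (CamelCase).
    for match in re.findall(r"\b[A-Z][a-zA-Z]+(?:[A-Z][a-zA-Z]+)+\b", text):
        found.setdefault(match, "unknown")  # seeds keep their typed label
    return found

entities = extract_entities("BrainLayer indexes every Claude Code session via BrainBar")
```

In practice the model-based pass (GLiNER) catches entities neither seeds nor patterns know about, which is why the three strategies are combined.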
Not just Claude Code sessions
The pipeline ingests from six sources: Claude Code JSONL transcripts (primary), WhatsApp message exports, YouTube transcript downloads, Markdown docs, Desktop files, and manual entries. Each source has content-aware filtering — WhatsApp messages need only 15 characters to be indexed (short-form messaging), while general assistant text requires 50 characters. Source metadata is preserved for filtered search.
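The per-source thresholds reduce to a small lookup; sketch (only the 15-character WhatsApp floor and 50-character default come from the text, the dict shape and names are illustrative):

```python
# Minimum character counts per source. 15 and 50 are the documented
# thresholds; the source keys and function name are illustrative.
MIN_CHARS = {"whatsapp": 15}
DEFAULT_MIN_CHARS = 50

def should_index(source: str, content: str) -> bool:
    """Content-aware filter: short-form sources get a lower length floor."""
    return len(content.strip()) >= MIN_CHARS.get(source, DEFAULT_MIN_CHARS)

should_index("whatsapp", "deploy is done, check logs")  # 26 chars, passes the 15-char floor
should_index("claude_code", "ok")                       # 2 chars, below the 50-char default
```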
Your memories as a navigable vault
Export your entire memory database as an Obsidian-compatible markdown vault. Each session becomes a note with metadata frontmatter, linked to related sessions and referenced files. Tags from enrichment become Obsidian tags. The result is a browsable knowledge graph of your development history.
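A single exported note might be rendered like this; sketch (the frontmatter keys, session dict, and function name are illustrative, Obsidian only requires valid YAML between the `---` markers and `[[wikilinks]]` for connections):

```python
def session_to_note(session: dict) -> str:
    """Render one indexed session as a markdown note with YAML frontmatter."""
    tags = " ".join(f"#{t}" for t in session["tags"])
    links = "\n".join(f"- [[{name}]]" for name in session["related"])
    return (
        "---\n"
        f"date: {session['date']}\n"
        f"tags: [{', '.join(session['tags'])}]\n"
        "---\n\n"
        f"# {session['summary']}\n\n"
        f"{tags}\n\n"
        "## Related sessions\n"
        f"{links}\n"
    )

note = session_to_note({
    "date": "2025-01-15",
    "summary": "Debugging Telegram bot message drops",
    "tags": ["telegram", "debugging"],
    "related": ["2025-01-14 session"],
})
```

Obsidian's graph view then renders the `[[wikilinks]]` between session notes, which is what makes the exported vault navigable as a knowledge graph.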