Your agents forget everything.
They don't have to.

Memory Engine is the persistent, searchable memory layer for AI agents. Store context once. Retrieve it by keyword, meaning, time, or category. Built on Postgres.

curl -sSfL https://install.memory.build | sh && me install

No API key required. Local embeddings. Works with any MCP client.

Claude Code
Cursor
Windsurf
Codex CLI
Gemini CLI

Your context layer.

Persistent, searchable memory for every workflow.

Engineering context: Why did we switch from REST to gRPC?
Product & strategy: Why did we make this pricing change?
Data analysis: What did customers say about our new onboarding?
Organizational knowledge: How are our customers using AI?
Conversation history: What approaches did we rule out yesterday for the caching layer?
Code intelligence: How did auth implementation change this month?

Find what's relevant, not what's similar.

Grep finds exact matches. Vector search finds similar vibes. Neither is enough when your agent needs “the auth decision from last Tuesday.” Memory Engine searches six ways at once. Your agent just describes what it needs.

Keyword: BM25 via pg_textsearch
Semantic: HNSW via pgvector
Temporal: tstzrange + GiST index
Faceted: jsonb + GIN index
Hierarchical: ltree path traversal
Hybrid ranking: Reciprocal Rank Fusion
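The hybrid ranking step, Reciprocal Rank Fusion, is a published technique for merging several ranked result lists: each document scores 1/(k + rank) in every list it appears in, and the scores are summed. This is a generic sketch of that formula, not Memory Engine's internal code; the document ids are invented for illustration.

```python
def rrf_fuse(rankings, k=60):
    """Combine several ranked lists of document ids via Reciprocal Rank Fusion.

    rankings: list of ranked lists, best result first.
    k: smoothing constant (60 is the value from the original RRF paper).
    Returns ids ordered by fused score, best first.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Keyword and semantic search return different orderings; RRF rewards
# documents that rank well across multiple retrieval methods.
keyword = ["auth-decision", "grpc-migration", "pricing-change"]
semantic = ["auth-decision", "onboarding-feedback", "grpc-migration"]
fused = rrf_fuse([keyword, semantic])
```

A document that appears near the top of two lists beats one that tops a single list, which is why RRF works well for fusing keyword, semantic, and temporal results without tuning per-method weights.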

Eliminate the re-retrieval token overhead.

Most of the tokens your multi-agent systems burn are coordination overhead. Agents re-retrieve information other agents already fetched, re-explain context that should exist as shared state, and re-validate assumptions that could be read from common memory. Memory Engine stores context once. Every agent retrieves what it needs.

Memory for teams, not just scratchpads.

Shared context across agents with role-based access control built in.
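The shape of role-based access over shared memory can be sketched in a few lines. This is a minimal illustration of the idea, not Memory Engine's API: every name here (`SharedMemory`, `store`, `retrieve`, the role tags) is hypothetical, and real retrieval would use the indexed search described above rather than a substring match.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    text: str
    roles: set  # roles allowed to read this entry

@dataclass
class SharedMemory:
    """Hypothetical shared store: write once, filter reads by caller role."""
    entries: list = field(default_factory=list)

    def store(self, text, roles):
        self.entries.append(Entry(text, set(roles)))

    def retrieve(self, query, caller_roles):
        caller = set(caller_roles)
        return [e.text for e in self.entries
                if caller & e.roles and query.lower() in e.text.lower()]

mem = SharedMemory()
mem.store("We chose gRPC for internal services", roles={"engineering"})
mem.store("Pricing moves to usage-based in Q3", roles={"product", "leadership"})

# An engineering agent sees only entries its role is allowed to read.
visible = mem.retrieve("grpc", caller_roles={"engineering"})
```

The point of the design: agents share one store, so context is written once, and access control lives in the memory layer instead of being re-implemented by every agent.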

Get started

curl -sSfL https://install.memory.build | sh && me install

Runs locally, backed by a hosted ghost database

linux-x64 · linux-arm64 · macos-arm64