A self-hosted personal memory assistant.
Capture by voice or text. Recall the small things before they slip.
- Capture - send a Telegram message, write in the web editor, or add a thought via MCP from any AI tool.
- Classify - an LLM sorts it into one of five categories (People, Projects, Tasks, Ideas, Reference) and extracts structured fields.
- Store - PostgreSQL with pgvector embeddings. Search by meaning, not just keywords.
- Access - web dashboard, daily/weekly email digests, or query your brain from any MCP-compatible tool.
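The "search by meaning" step comes down to comparing embedding vectors instead of words. A toy sketch of the similarity measure involved (in practice pgvector computes this inside PostgreSQL; the vectors below are made up, and real embeddings have hundreds of dimensions):

```typescript
// Cosine similarity between two embedding vectors: ~1 = same meaning,
// ~0 = unrelated. pgvector's `<=>` operator returns the cosine
// *distance*, i.e. 1 - similarity.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" for illustration only.
const dentist = [0.9, 0.1, 0.0];
const toothache = [0.8, 0.2, 0.1];
const invoice = [0.0, 0.1, 0.9];

// "dentist" is semantically closer to "toothache" than to "invoice",
// even though the words share no characters.
console.log(cosineSimilarity(dentist, toothache) > cosineSimilarity(dentist, invoice)); // true
```

This is why a query like "tooth pain" can surface a note about a dentist appointment that never mentions teeth.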
Fits: low-friction capture of individual thoughts from anywhere — voice from a phone, text on the go, or a quick note at the desk. Personal logistics (tasks, follow-ups, appointments, people context, idea backlog). Agent-assisted recall via MCP. Self-hosted with local LLM support.
Doesn't fit: long-form writing, progressive-summarization knowledge work, cross-linked knowledge graphs, team wikis, curated reference libraries. For those, Obsidian, Notion, or Logseq are purpose-built — Cortex is deliberately a different shape.
- Capture - Telegram bot with text and voice (faster-whisper), web dashboard with quick capture and full editor, MCP server
- Intelligence - LLM classification into 5 categories with confidence scoring, context-aware (uses recent + similar entries), inline Telegram buttons for low-confidence entries, /fix to reclassify, automatic task completion detection
- Search - semantic search via pgvector + qwen3-embedding with text fallback, dropdown filters (category, tag, status, activity), multilingual (EN/DE)
- Google Calendar - automatic event creation from classified entries, multi-calendar support with LLM-based routing
- Digests - daily briefing and weekly review, delivered by email and shown on the dashboard
- Display - PNG endpoint for e-ink devices (e.g. TRMNL) showing today's calendar, pending tasks, and weather
- Self-hosted - local embeddings, local voice transcription, LLM-agnostic (Anthropic, OpenAI, or any compatible endpoint), 7 MCP tools
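The low-confidence flow can be sketched roughly as follows. This is an illustration, not Cortex's actual implementation: the threshold value and the callback-data format are assumptions, though the inline-keyboard shape matches the Telegram Bot API.

```typescript
type Category = "People" | "Projects" | "Tasks" | "Ideas" | "Reference";

const CATEGORIES: Category[] = ["People", "Projects", "Tasks", "Ideas", "Reference"];
const CONFIRM_THRESHOLD = 0.7; // hypothetical cutoff, for illustration only

// Telegram inline keyboard: one button per category, so a misfiled
// entry can be reclassified with a single tap.
function confirmationKeyboard(entryId: number) {
  return {
    inline_keyboard: [
      CATEGORIES.map((c) => ({ text: c, callback_data: `reclassify:${entryId}:${c}` })),
    ],
  };
}

// High-confidence entries are stored silently; low-confidence entries
// are stored too, but the bot attaches confirmation buttons.
function handleClassification(entryId: number, category: Category, confidence: number) {
  const prompt = confidence < CONFIRM_THRESHOLD ? confirmationKeyboard(entryId) : null;
  return { category, stored: true, prompt };
}

console.log(handleClassification(1, "Ideas", 0.4).prompt !== null); // true
```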
Capture, correction, search, and MCP workflows are covered, with examples, in USAGE.md.
```sh
git clone https://github.com/moguls753/cortex.git && cd cortex
```

Create a `.env` file with a database password:

```sh
echo "POSTGRES_PASSWORD=changeme" > .env
```

Start everything:

```sh
docker compose up -d
```

This boots PostgreSQL, Ollama (embeddings), Whisper (voice transcription), and the app. The embedding model is pulled automatically on first start.
Open http://localhost:3000 and the setup wizard will walk you through:
- Account - create your login credentials
- LLM - pick a provider (Anthropic, OpenAI, Groq, Gemini, LM Studio, Ollama) and enter your API key
- Telegram - optionally connect a Telegram bot for capture
- Done - start using Cortex
Everything is reconfigurable later from the Settings page.
Add to your Claude Code config (`~/.claude.json`):

```json
{
  "mcpServers": {
    "cortex": {
      "command": "docker",
      "args": ["exec", "-i", "cortex-app-1", "node", "dist/mcp.js"]
    }
  }
}
```

Tools: search_brain · add_thought · list_recent · get_entry · update_entry · delete_entry · brain_stats
| Service | RAM |
|---|---|
| Whisper (medium model) | ~3 GB |
| Ollama (qwen3-embedding) | ~1 GB |
| PostgreSQL | ~256 MB |
| App | ~128 MB |
Minimum: ~4-5 GB RAM with a cloud LLM provider (Anthropic, OpenAI).
If you run classification and digests through a local LLM via Ollama instead of a cloud provider, add the RAM for that model on top. Recommended minimum: Qwen 2.5 7B (~5 GB) or Llama 3.1 8B (~5 GB) — smaller models tend to struggle with reliable structured output. 10 GB+ RAM recommended for fully local setups.
All configuration is done through the setup wizard and the Settings page. No .env editing required beyond POSTGRES_PASSWORD.
If you prefer env vars, they serve as defaults and are overridden by settings saved in the database:
| Variable | Default | Description |
|---|---|---|
| `POSTGRES_PASSWORD` | (required) | Database password |
| `PORT` | `3000` | Web server port |
| `TZ` | `Europe/Berlin` | Timezone for digest scheduling |
| `SMTP_HOST` / `SMTP_PORT` / `SMTP_USER` / `SMTP_PASS` | | SMTP for email digests |
| `DIGEST_EMAIL_FROM` / `DIGEST_EMAIL_TO` | | Email addresses for digests |
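For reference, a `.env` that sets every supported variable might look like this. All values below are placeholders; only `POSTGRES_PASSWORD` is required, and everything else can instead be configured from the Settings page:

```ini
POSTGRES_PASSWORD=changeme
PORT=3000
TZ=Europe/Berlin
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=cortex@example.com
SMTP_PASS=secret
DIGEST_EMAIL_FROM=cortex@example.com
DIGEST_EMAIL_TO=you@example.com
```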
Stack: Node.js, TypeScript, Hono, PostgreSQL + pgvector + Drizzle ORM, Ollama, grammY, Vitest, Tailwind CSS, Docker Compose.
```sh
npm install && npm run dev   # local dev
npm test                     # all tests
npm run test:unit            # fast, no Docker
npm run test:integration     # needs Docker (testcontainers)
```

Architecture, schema, and prompt contracts are documented in ARCHITECTURE.md. Spec artifacts in docs/specs/.