# Quick Start
MemGhost is designed to run on modest hardware with Docker Compose. No need to clone a repository — just download the compose file and go.
## Overview
The core deployment consists of four containers:
| Service | Image | Purpose |
|---|---|---|
| db | pgvector/pgvector:pg15 | Event store, read models, and vector search |
| api | memghost:latest | Go backend API server |
| web | memghost-web:latest | Next.js web interface |
| caddy | caddy:2-alpine | Reverse proxy with automatic HTTPS |
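The four core services map to a Compose file along these lines. This is an illustrative sketch, not the official file: the port mappings, volume name, and environment variables are assumptions you should check against the real compose file.

```yaml
# Sketch of the core stack. Images and service names come from the table
# above; everything else (ports, volumes, env vars) is assumed.
services:
  db:
    image: pgvector/pgvector:pg15
    environment:
      POSTGRES_DB: memghost
      POSTGRES_PASSWORD: change-me   # placeholder; set a real secret
    volumes:
      - db-data:/var/lib/postgresql/data
  api:
    image: memghost:latest
    depends_on:
      - db
  web:
    image: memghost-web:latest
    depends_on:
      - api
  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web

volumes:
  db-data:
```

Only caddy publishes host ports here; the other services talk to each other over the Compose network, which keeps the database and backend off the public interface.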
Optional services extend the platform with AI and voice capabilities:
| Service | Profile | Purpose |
|---|---|---|
| ollama | ai | LLM chat and semantic embeddings |
| kokoro | voice | Text-to-speech (67+ voices) |
| whisper | voice | Speech-to-text (voice input) |
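Compose profiles gate these optional services so they only start when explicitly requested. A sketch of how the profile assignments might look (the `ollama/ollama` image tag and the trimmed-down service bodies are assumptions; only the service and profile names come from the table above):

```yaml
services:
  ollama:
    image: ollama/ollama:latest   # assumed image; check the real compose file
    profiles: ["ai"]
  kokoro:
    profiles: ["voice"]
  whisper:
    profiles: ["voice"]
```

A plain `docker compose up -d` then starts only the four core services, while `docker compose --profile ai --profile voice up -d` brings up the full stack.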
## Steps
- Prerequisites — install Docker and Docker Compose.
- Deploy — download the compose file and start the stack.
- First Run — database migrations, AI model setup, and seed data.
- Verify Installation — health checks and accessing the UI.
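In a shell, the deploy and verify steps above might look like the following. The download URL and the health endpoint are placeholders, not real locations; substitute whatever the project's releases page and API actually provide.

```shell
# Download the compose file (placeholder URL — use the project's real one)
curl -fsSL -o docker-compose.yml https://example.com/memghost/docker-compose.yml

# Start the core stack in the background
docker compose up -d

# Confirm all containers are running
docker compose ps

# Hypothetical health endpoint — substitute the path the API actually exposes
curl -fsS http://localhost/healthz
```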
## Minimum Requirements
| Resource | Core Only | With AI | With AI + Voice |
|---|---|---|---|
| CPU | 1 core | 2 cores | 4 cores |
| RAM | 512 MB | 4 GB | 6 GB |
| Disk | 1 GB | 10 GB | 15 GB |
| OS | Any Linux with Docker | Ubuntu 22.04+ / Debian 12+ | Ubuntu 22.04+ / Debian 12+ |
AI features use Ollama for local inference. A GPU is not required but significantly speeds up chat responses and embedding generation.
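Before chat and embeddings work, Ollama needs at least one model pulled into its container. A sketch, assuming the `ollama` service name from the table above; the model names are examples, so swap in whatever MemGhost is configured to use:

```shell
# Pull a chat model and an embedding model into the running ollama container.
# Model names here are illustrative, not MemGhost's confirmed defaults.
docker compose exec ollama ollama pull llama3
docker compose exec ollama ollama pull nomic-embed-text
```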