First Run
Automatic Migrations
Database migrations run automatically via the migrate service when the stack starts. You don’t need to run them manually. The API waits for migrations to complete before starting.
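This start-up ordering is typically wired up with a Compose `depends_on` condition. A hypothetical sketch of what that looks like; the service names, image, and command here are assumptions for illustration, not taken from the project's actual compose file:

```yaml
# Sketch: the API waits for the one-off migrate service to exit
# successfully before starting (assumed service and image names).
services:
  migrate:
    image: memghost/api          # assumption: migrations ship in the API image
    command: ["./migrate", "up"]
  api:
    depends_on:
      migrate:
        condition: service_completed_successfully
```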
If a migration fails, check the migrate service logs:
```shell
docker compose logs migrate
```

Set Up AI Models (Optional)
If you started the stack with the ai profile, Ollama needs to download the inference models on first run. This is a one-time step:
```shell
# Embedding model (~275 MB): powers semantic search and hub routing
docker compose exec ollama ollama pull nomic-embed-text

# Chat model (~5 GB): powers AI chat and the hub pipeline
docker compose exec ollama ollama pull qwen3:8b
```

You can verify the models loaded:

```shell
docker compose exec ollama ollama list
```

Once the models are downloaded, the AI chat, semantic search, and hub pipeline features activate automatically.
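If you want to script the verification step, a small POSIX-shell helper can scan the `ollama list` output for the two models pulled above. `check_models` is a hypothetical helper for illustration, not part of MemGhost:

```shell
# check_models OUTPUT: report which required models are missing from the
# output of `ollama list`. Model names match the two pulls above.
check_models() {
  output="$1"
  missing=""
  for m in nomic-embed-text qwen3:8b; do
    # grep -q: succeed silently if the model name appears anywhere
    printf '%s\n' "$output" | grep -q "$m" || missing="$missing $m"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}

# Usage against the running stack:
#   check_models "$(docker compose exec ollama ollama list)"
```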
Setup Wizard
On your first visit to the web UI, MemGhost walks you through a setup wizard:
- Creating your admin account
- Basic configuration
Once setup is complete, you can start capturing items into the vault and the AI pipeline will begin organizing them into hub pages.
Seed Development Data (Optional)
If you cloned the repository and have the Taskfile available, you can seed sample data:
```shell
task seed:dev
```
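If you script this step, a guard avoids invoking task outside a repository checkout. A minimal sketch; the Taskfile filenames checked here are an assumption, and `seed_dev` is a hypothetical wrapper:

```shell
# seed_dev DIR: run `task seed:dev` from DIR, but only when a Taskfile
# is present (assumed names: Taskfile.yml or Taskfile.yaml).
seed_dev() {
  dir="${1:-.}"
  if [ -f "$dir/Taskfile.yml" ] || [ -f "$dir/Taskfile.yaml" ]; then
    (cd "$dir" && task seed:dev)
  else
    echo "no Taskfile in $dir; clone the repository first" >&2
    return 1
  fi
}
```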