# Verify Installation
## Health Checks

The backend exposes two health endpoints:
### Liveness

Returns `200 OK` when the process is running:
```bash
curl https://your-domain/api/v1/health/live
```

### Readiness
Returns `200 OK` when the database connection is established and migrations are complete:
```bash
curl https://your-domain/api/v1/health/ready
```

If readiness fails, check that the database container is healthy:
```bash
docker compose ps
```

## Access the Web UI
Open your browser to your configured domain (or `http://localhost` if using the default `SITE_ADDRESS`).
You should see the MemGhost setup wizard on first visit. If the frontend is not yet ready, give it a moment — the web container starts after the API is available.
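Rather than refreshing by hand, you can poll the readiness endpoint until it responds. A minimal sketch, assuming a POSIX shell; the `wait_for` helper and the 12-tries/5-second budget are illustrative, not part of MemGhost:

```bash
# Retry a command until it succeeds or the attempt budget runs out.
wait_for() {
  tries=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$tries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Poll readiness for up to a minute (12 tries, 5 seconds apart):
# wait_for 12 5 curl -sf https://your-domain/api/v1/health/ready
```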
## Test the API

List notes (should return an empty array on a fresh install):
```bash
curl -s https://your-domain/api/v1/notes | head -c 200
```

## Verify AI Features
If you enabled the `ai` profile, check that Ollama is running and models are loaded:
```bash
# Check Ollama is responding
docker compose exec ollama ollama list

# Test the health endpoint
curl -s https://your-domain/api/v1/health/ready
```

If models haven’t been pulled yet, see First Run for the model download commands.
## Verify Voice Features

If you enabled the `voice` profile:
```bash
# Check TTS service
docker compose logs kokoro | tail -5

# Check STT service
docker compose logs whisper | tail -5
```

The voice toggle buttons appear in the chat UI header when the TTS and STT services are available.
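To cut straight to problems in those logs, you can pipe them through a small filter. A sketch; the `recent_errors` helper and its pattern list are assumptions, not part of the stack:

```bash
# Keep only the last few error-looking lines from a log stream.
recent_errors() {
  grep -iE 'error|fatal|panic' | tail -3
}

# Usage: docker compose logs kokoro | recent_errors
```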
## Container Status

View all running containers and their health:
```bash
docker compose ps
```

Expected output for a full install (all profiles):
| Service | Status | Notes |
|---|---|---|
| db | Up (healthy) | PostgreSQL with pgvector |
| api | Up | Go backend |
| web | Up | Next.js frontend |
| caddy | Up | Reverse proxy |
| ollama | Up | LLM inference (ai profile) |
| kokoro | Up | TTS (voice profile) |
| whisper | Up | STT (voice profile) |
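If you want a scripted check instead of eyeballing the table, you can diff the running services against the expected set. A sketch; the `missing_services` helper is an assumption, and the service names come from the table above:

```bash
# Print every expected service that is absent from the running list.
missing_services() {
  running=$1; shift
  for svc in "$@"; do
    printf '%s\n' "$running" | grep -qx "$svc" || echo "$svc"
  done
}

# Usage against a live stack:
# missing_services "$(docker compose ps --services --filter status=running)" db api web caddy
```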
## View Logs
```bash
# All services
docker compose logs -f

# Single service
docker compose logs -f api
```

## Troubleshooting
### API container won’t start

Check that the `migrate` service completed successfully:
```bash
docker compose logs migrate
```

Common causes:
- Database not yet ready: the API waits for the `migrate` service, which waits for `db` to be healthy.
- Migration conflict: run `docker compose down -v` to reset the database (this removes all data) and try again.
### Frontend shows blank page or errors

The frontend proxies API requests through Caddy. If you are using the `standalone` profile, make sure Caddy is running:
```bash
docker compose logs caddy
```

### Ollama out of memory
The default chat model (`qwen3:8b`) needs ~5 GB of RAM. On memory-constrained systems, try a smaller model:
```bash
docker compose exec ollama ollama pull qwen3:4b
```

Then update `AI_LLM_MODEL=qwen3:4b` in your `.env` and restart the API.
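The `.env` edit can be scripted with GNU `sed`. A sketch, shown on a scratch copy at `/tmp/demo.env` (an illustrative path; point it at your real `.env`, then run `docker compose restart api`):

```bash
# Rewrite the model setting in place (GNU sed; on macOS use `sed -i ''`).
printf 'AI_LLM_MODEL=qwen3:8b\n' > /tmp/demo.env
sed -i 's/^AI_LLM_MODEL=.*/AI_LLM_MODEL=qwen3:4b/' /tmp/demo.env
```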
### Port conflicts

If ports 80 or 443 are already in use, change them in `.env`:
```bash
PORT=8080
HTTPS_PORT=8443
```
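Before settling on replacement ports, you can probe whether they are actually free. A pure-bash sketch; the `port_free` helper is an assumption, and `/dev/tcp` is a bash feature, so this won't work in plain `sh`:

```bash
# Succeeds when nothing is listening on the given local TCP port.
port_free() {
  ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_free 8080; then echo "8080 is free"; else echo "8080 is in use"; fi
```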