# Recommended Setup
This guide describes the recommended production setup for Mirror Mate using Tailscale to connect a Raspberry Pi display with a Mac Studio backend.
## Architecture Overview

### Why This Setup?
| Component | Location | Reason |
|---|---|---|
| Ollama | Mac Studio (native) | Metal GPU acceleration for fast LLM inference |
| PLaMo-Embedding-1B | Mac Studio (Docker) | Japanese-optimized embedding, top JMTEB scores |
| VOICEVOX | Mac Studio (Docker) | CPU-intensive, containerized for easy management |
| UI | Raspberry Pi | Low power, silent, dedicated display |
| Development | MacBook | Same config as production, no local services needed |
## Prerequisites
- Mac Studio (Apple Silicon recommended)
- Raspberry Pi 4/5 with display
- MacBook for development
- Tailscale account (free tier is sufficient)
## Step 1: Tailscale Setup

Install Tailscale on all devices:

### Mac Studio / MacBook

```bash
brew install tailscale
```

Open Tailscale from Applications and sign in.
### Raspberry Pi

```bash
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```

### Verify Connection

After all devices are connected, verify they can reach each other:

```bash
# From MacBook or RPi
ping studio  # Mac Studio's Tailscale hostname
```

You can find and set hostnames in the Tailscale admin console.
## Step 2: Mac Studio Setup

### Install Ollama (Native)

```bash
# Install
brew install ollama

# Start as a service
brew services start ollama

# Pull the LLM model
ollama pull gpt-oss:20b
```

### Start Docker Services (VOICEVOX + PLaMo)
First, install Docker via OrbStack (recommended for Apple Silicon):

```bash
brew install orbstack
open -a OrbStack
```

Then start the services:

```bash
cd /path/to/mirrormate
docker compose -f compose.studio.yaml up -d
```

This starts:

- VOICEVOX (:50021) - Text-to-speech
- PLaMo-Embedding-1B (:8000) - Japanese-optimized text embedding
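If you want a sense of what `compose.studio.yaml` defines before opening it, a minimal sketch might look like the following. This is illustrative only: the VOICEVOX image tag and the PLaMo build context are assumptions, not the repo's actual file.

```yaml
# Sketch of compose.studio.yaml — image tag and build context are assumed
services:
  voicevox:
    image: voicevox/voicevox_engine:cpu-latest   # assumed tag; check Docker Hub
    ports:
      - "50021:50021"
    restart: unless-stopped
  plamo-embedding:
    build: ./plamo-embedding                     # hypothetical build context
    ports:
      - "8000:8000"
    restart: unless-stopped
```

The key point is the port mapping: both services must publish the ports the configuration in Step 3 expects (50021 and 8000).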
### Verify Services

```bash
# Ollama (LLM)
curl http://localhost:11434/api/tags

# VOICEVOX (TTS)
curl http://localhost:50021/speakers | head

# PLaMo (Embedding)
curl http://localhost:8000/health
```

## Step 3: Configuration
Create or edit `config/providers.yaml`:

```yaml
providers:
  llm:
    enabled: true
    provider: ollama
    ollama:
      model: gpt-oss:20b
      baseUrl: "http://studio:11434" # Tailscale hostname
      maxTokens: 300
      temperature: 0.7
  tts:
    enabled: true
    provider: voicevox
    voicevox:
      speaker: 2
      baseUrl: "http://studio:50021" # Tailscale hostname
  embedding:
    enabled: true
    provider: ollama # PLaMo server provides an Ollama-compatible API
    ollama:
      model: plamo-embedding-1b
      baseUrl: "http://studio:8000" # PLaMo embedding server

memory:
  enabled: true
  rag:
    topK: 8
    threshold: 0.3
  extraction:
    autoExtract: true
    minConfidence: 0.5
```

## Step 4: Web Search Setup (Optional)
To enable web search functionality:

- Get an API key from Ollama
- Add it to `.env` on the Raspberry Pi:

```
OLLAMA_API_KEY=your-ollama-api-key-here
```

## Step 5: Raspberry Pi Setup
### Clone Repository

```bash
git clone https://github.com/orangekame3/mirrormate.git
cd mirrormate
```

### Create Environment File

```bash
cp .env.example .env
# Edit .env with your API keys (optional services)
```

### Start Application

```bash
docker compose up -d
```

### Auto-start on Boot (Optional)

```bash
# Add to /etc/rc.local or create a systemd service
cd /home/pi/mirrormate && docker compose up -d
```

### Access

Open http://localhost:3000 in the RPi's browser, or http://rpi:3000 from another device.
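For the systemd route mentioned under Auto-start, a unit file along these lines works; the unit name, working directory, and docker binary path are assumptions to adapt to your install:

```ini
# /etc/systemd/system/mirrormate.service — sketch; adjust paths for your setup
[Unit]
Description=Mirror Mate UI
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/pi/mirrormate
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now mirrormate`. A oneshot unit with `RemainAfterExit` fits here because `docker compose up -d` returns once the containers are started.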
## Step 6: Development (MacBook)

With this setup, development requires no local services:

```bash
cd mirrormate
bun install
bun run dev
```

The app connects to the Mac Studio for all heavy processing via Tailscale.
## Audio Flow

The audio plays from the device running the browser (the RPi), not from the Mac Studio.
Ensure your RPi has speakers connected and audio output configured:

```bash
# Test audio on the RPi
aplay /usr/share/sounds/alsa/Front_Center.wav
```

## File Structure
## Commands Reference

### Mac Studio

```bash
# Start services (VOICEVOX + PLaMo)
docker compose -f compose.studio.yaml up -d

# Stop services
docker compose -f compose.studio.yaml down

# View logs
docker compose -f compose.studio.yaml logs -f

# View PLaMo logs
docker compose -f compose.studio.yaml logs -f plamo-embedding

# Rebuild PLaMo after updates
docker compose -f compose.studio.yaml build plamo-embedding

# Restart Ollama
brew services restart ollama
```

### Raspberry Pi
```bash
# Start UI
docker compose up -d

# Stop UI
docker compose down

# View logs
docker compose logs -f

# Rebuild after updates
git pull
docker compose build --no-cache
docker compose up -d
```

### Development (MacBook)
```bash
# Start dev server
bun run dev

# Build
bun run build

# Test
bun run test
```

## Troubleshooting
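Before digging into the specific issues below, it can help to probe every backend endpoint at once from the RPi. This is a sketch that assumes the hostnames and ports used throughout this guide:

```shell
# Probe each Mirror Mate backend over Tailscale and report OK/FAIL.
check_service() {
  name="$1"; url="$2"
  if curl -fsS --max-time 3 "$url" >/dev/null 2>&1; then
    echo "$name: OK"
  else
    echo "$name: FAIL"
  fi
}

check_service "Ollama (LLM)"      "http://studio:11434/api/tags"
check_service "VOICEVOX (TTS)"    "http://studio:50021/speakers"
check_service "PLaMo (Embedding)" "http://studio:8000/health"
```

A `FAIL` on all three usually points at Tailscale; a single `FAIL` points at that one service on the Mac Studio.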
### Cannot connect to studio

```
Error: Connection refused to studio:11434
```

Solution:

- Verify Tailscale is running: `tailscale status`
- Check the hostname: `ping studio`
- Ensure Ollama is running: `brew services list | grep ollama`
### VOICEVOX timeout

```
Error: TTS request timeout
```

Solution:

- Check the container: `docker compose -f compose.studio.yaml ps`
- The first request is slow (model loading); wait ~30 seconds
- Check the logs: `docker compose -f compose.studio.yaml logs voicevox`
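Since the slowness comes from model loading, one workaround is to fire a throwaway synthesis after the container starts so the first real request is fast. The `audio_query` and `synthesis` endpoints here are the VOICEVOX engine's two-step HTTP API; the hostname and speaker id follow this guide:

```shell
# warm_voicevox: run one throwaway synthesis so later requests are fast.
warm_voicevox() {
  base="$1"; speaker="${2:-2}"
  # Step 1: build a synthesis query for a short dummy text
  query=$(curl -fsS --max-time 60 -X POST \
    "$base/audio_query?text=warmup&speaker=$speaker") || {
    echo "voicevox unreachable at $base"; return 1; }
  # Step 2: synthesize it, discarding the audio
  curl -fsS --max-time 60 -X POST -H "Content-Type: application/json" \
    -d "$query" -o /dev/null "$base/synthesis?speaker=$speaker" \
    && echo "voicevox warm"
}

warm_voicevox "http://studio:50021" || true  # retry once the engine is up
```

Running this from the same machine as the compose file (with `localhost` instead of `studio`) also works as a post-start smoke test.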
### No audio on RPi

Solution:

- Check audio output: `aplay -l`
- Set the correct output: `raspi-config` → Audio
- Test: `speaker-test -t wav`
### Memory not shared between devices

This is expected: each device has its own SQLite database. To copy memories:

```bash
# From MacBook to RPi
scp data/mirrormate.ja.db pi@rpi:~/mirrormate/data/
```

## Next Steps
- Docker Setup - Detailed Docker configuration
- Locale Presets - Configure timezone, weather, and STT by locale
- Providers Configuration - LLM/TTS provider options
- Memory System - RAG and memory management
