Running AI models locally gives you complete control, privacy, and zero dependency on internet connectivity. This guide walks you through setting up Open WebUI with Ollama—a ChatGPT-like interface running entirely on your machine.
## What You’ll Need
- Docker Desktop
- Ollama (local LLM runtime)
- About 15 minutes
## Step 1: Install Docker Desktop
Docker provides the containerized environment for Open WebUI.
- Download Docker Desktop: https://docs.docker.com/desktop/
- Install and launch the application
- Verify installation by opening a terminal and running:
  ```bash
  docker --version
  ```
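As a further smoke test, you can run Docker's standard hello-world image (pulled from Docker Hub on first use):

```bash
# Pull and run the hello-world image; a success message confirms the
# daemon can pull images and start containers. --rm removes it afterwards.
docker run --rm hello-world
```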
## Step 2: Install Ollama
Ollama runs the AI models locally on your machine.
- Download Ollama: https://ollama.com/download
- Install for your operating system
- Verify installation:
  ```bash
  ollama --version
  ```
- Pull a model (e.g., Llama 3.2):
  ```bash
  ollama pull llama3.2
  ```
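Before wiring up the UI, it's worth confirming the model actually responds. Both commands below are standard Ollama CLI:

```bash
# List the models downloaded to this machine
ollama list

# Send a one-off prompt to verify the model loads and generates text
ollama run llama3.2 "Say hello in one sentence."
```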
## Step 3: Launch Open WebUI
Run this single command to start Open WebUI:
```bash
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```
What this does:
- Maps port 3000 on your machine to port 8080 inside the container
- Lets the container reach Ollama on your host via `host.docker.internal`
- Persists chat history and settings in the `open-webui` Docker volume
- Automatically restarts the container if it stops
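To confirm the container came up cleanly, standard Docker commands will show its status and startup logs:

```bash
# Show the running container (name, port mapping, uptime)
docker ps --filter "name=open-webui"

# Follow the startup logs; press Ctrl+C to stop following
docker logs -f open-webui
```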
## Step 4: Access Your Interface
Open your browser and navigate to http://localhost:3000.
Create an account (credentials are stored locally, and the first account created becomes the admin) and start chatting with your AI models.
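If you prefer to check from the terminal first, a simple request should succeed once the container has finished starting (assuming `curl` is installed on your machine):

```bash
# Expect an HTTP 200 status once Open WebUI is serving
curl -I http://localhost:3000
```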
## Why This Setup?
For Program Managers:
- No vendor lock-in: Full control over your AI infrastructure
- Data privacy: Everything stays on your machine
- Predictable costs: No usage-based pricing
- Offline capable: Works without internet after initial setup
- Customizable: Add models and adjust configurations as needed (see the example below)
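On that last point, adding another model is a single pull, and it appears in Open WebUI's model picker automatically (the model names below are examples from the Ollama library):

```bash
# Pull additional models; each becomes selectable in the Open WebUI dropdown
ollama pull mistral
ollama pull codellama
```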
## Troubleshooting
Can’t connect to Ollama? Ensure Ollama is running and that the container was started with the `--add-host=host.docker.internal:host-gateway` flag so it can reach your host. You can verify connectivity as shown below.
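Ollama listens on port 11434 by default, and its root endpoint replies with a short status message, so you can test reachability from both sides:

```bash
# From the host: should print "Ollama is running"
curl http://localhost:11434

# From inside the container, via the host gateway (assumes curl exists
# in the image; if it doesn't, the host-side check above still applies)
docker exec open-webui curl -s http://host.docker.internal:11434
```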
Port 3000 already in use? Change `-p 3000:8080` to `-p 3001:8080` (or any available port) in the run command.
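To find out which process currently holds the port before choosing another, `lsof` (included with macOS and most Linux distributions) can report it:

```bash
# Identify the process bound to port 3000
lsof -i :3000
```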
Container won’t start? Check that Docker Desktop is running and that you have sufficient disk space.
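The container's own logs usually name the failure, and Docker can summarize how much disk its images and volumes are consuming:

```bash
# Inspect the most recent log lines from the failing container
docker logs --tail 50 open-webui

# Summarize disk usage by images, containers, and volumes
docker system df
```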
This setup provides a production-ready local AI assistant in under 15 minutes—no cloud dependencies required.