The previous posts covered deploying Ollama and Open WebUI separately; this one puts the whole thing together: Ollama + Open WebUI + an Nginx reverse proxy, all in a single docker-compose file.
End result: https://ai.yourdomain.com serves the Open WebUI chat interface, Ollama is wired up behind it, HTTPS renews automatically, and the whole stack starts and stops with one command.
Requirements
A Linux box with Docker and the Compose plugin installed, a domain pointing at the server's IP, and ports 80 and 443 open.
If Docker isn't installed yet:
curl -fsSL https://get.docker.com | sh
systemctl enable --now docker
apt install -y docker-compose-plugin

Project layout
ai-stack/
├── docker-compose.yml
├── nginx/
│   ├── nginx.conf
│   ├── certs/          # SSL certificates
│   └── html/           # Let's Encrypt challenge files
├── data/
│   ├── ollama/         # model data
│   └── open-webui/     # WebUI data
└── .env

mkdir -p /opt/ai-stack/nginx/certs /opt/ai-stack/nginx/html
mkdir -p /opt/ai-stack/data/ollama /opt/ai-stack/data/open-webui
cd /opt/ai-stack

The compose file
cat > docker-compose.yml << 'EOF'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    volumes:
      - ./data/ollama:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0
    networks:
      - ai-net

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    restart: unless-stopped
    volumes:
      - ./data/open-webui:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_AUTH=True
    depends_on:
      - ollama
    networks:
      - ai-net

  nginx:
    image: nginx:alpine
    container_name: ai-nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/.htpasswd:/etc/nginx/.htpasswd:ro
      - ./nginx/certs:/etc/nginx/certs:ro
      - ./nginx/html:/usr/share/nginx/html:ro
    depends_on:
      - open-webui
      - ollama
    networks:
      - ai-net

networks:
  ai-net:
    driver: bridge
EOF

All three services share one Docker network and reach each other by container name. Only Nginx publishes ports to the host; Ollama and Open WebUI stay internal. The .htpasswd mount is required because nginx.conf references /etc/nginx/.htpasswd for Basic Auth.
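If the box has an NVIDIA GPU, Ollama can use it through a compose override file. A sketch, assuming the NVIDIA Container Toolkit is already installed on the host:

```yaml
# docker-compose.override.yml (picked up automatically by docker compose)
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Verify with docker exec ollama nvidia-smi after starting; CPU-only hosts can skip this file entirely.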
Nginx configuration
cat > nginx/nginx.conf << 'EOF'
worker_processes auto;

events { worker_connections 1024; }

http {
    client_max_body_size 100m;

    server {
        listen 80;
        server_name _;
        location /.well-known/acme-challenge/ { root /usr/share/nginx/html; }
        location / { return 301 https://$host$request_uri; }
    }

    server {
        listen 443 ssl;
        server_name ai.yourdomain.com;  # change this to your domain
        ssl_certificate     /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;

        auth_basic "AI Stack";
        auth_basic_user_file /etc/nginx/.htpasswd;

        location / {
            proxy_pass http://open-webui:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        location /ollama/ {
            proxy_pass http://ollama:11434/;
            proxy_buffering off;
            proxy_cache off;
            proxy_read_timeout 300s;
        }
    }
}
EOF

Replace ai.yourdomain.com with your real domain. Basic Auth is set at the server level, so it covers both the chat UI and the /ollama/ API path.
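Basic Auth already gates the raw Ollama API, but you can additionally lock the /ollama/ path down to a trusted network with an allow/deny pair. A sketch; the 10.0.0.0/8 range is a placeholder for your own LAN or VPN subnet:

```nginx
        location /ollama/ {
            allow 10.0.0.0/8;   # placeholder: your LAN/VPN range
            deny  all;
            proxy_pass http://ollama:11434/;
            proxy_buffering off;
            proxy_cache off;
            proxy_read_timeout 300s;
        }
```

Requests from outside the allowed range get a 403 before Basic Auth is even attempted.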
Create the Basic Auth credentials:

apt install -y apache2-utils
htpasswd -c nginx/.htpasswd admin

SSL certificate
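Before requesting a real certificate, you can smoke-test the HTTPS setup with a throwaway self-signed certificate. A sketch, assuming openssl is installed and you are in the project directory:

```shell
# Throwaway self-signed cert so nginx can start before certbot has run.
# Replace with the Let's Encrypt files before going live.
mkdir -p nginx/certs
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout nginx/certs/privkey.pem \
  -out nginx/certs/fullchain.pem \
  -subj "/CN=ai.yourdomain.com"
```

The certbot cp commands below simply overwrite these two files once the real certificate is issued.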
apt install -y certbot

Port 80 has to be free for certbot's standalone challenge server, so get the certificate before starting the stack (or docker compose down first):

certbot certonly --standalone -d ai.yourdomain.com --email your@email.com --agree-tos
cp /etc/letsencrypt/live/ai.yourdomain.com/fullchain.pem nginx/certs/
cp /etc/letsencrypt/live/ai.yourdomain.com/privkey.pem nginx/certs/

Renewal cron job:
Once the stack is running, nginx holds port 80, so renewals can't use standalone mode. The port-80 server block already serves /.well-known/acme-challenge/ from nginx/html, so renew through that webroot instead:

(crontab -l 2>/dev/null; echo "0 3 1 * * certbot renew --quiet --webroot -w /opt/ai-stack/nginx/html && cp /etc/letsencrypt/live/ai.yourdomain.com/fullchain.pem /opt/ai-stack/nginx/certs/ && cp /etc/letsencrypt/live/ai.yourdomain.com/privkey.pem /opt/ai-stack/nginx/certs/ && docker compose -f /opt/ai-stack/docker-compose.yml exec nginx nginx -s reload") | crontab -

Run certbot renew --dry-run --webroot -w /opt/ai-stack/nginx/html once by hand to confirm renewal works before trusting the cron job.

Start it up
cd /opt/ai-stack
docker compose up -d
docker compose ps
# all three containers should show as running

Pull a model:

docker exec -it ollama ollama pull qwen2.5:7b

Open https://ai.yourdomain.com in a browser, enter the Basic Auth credentials, and start chatting.
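The restart: unless-stopped policy already brings the containers back after a reboot, but if you prefer explicit control (systemctl start/stop ai-stack), a small systemd unit works. A sketch, assuming the Compose plugin is invoked via /usr/bin/docker and the project lives at /opt/ai-stack:

```ini
# /etc/systemd/system/ai-stack.service
[Unit]
Description=AI stack (Ollama + Open WebUI + Nginx)
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/ai-stack
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now ai-stack.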
Common problems
WebSocket won't connect: make sure nginx.conf has proxy_http_version 1.1 plus the Upgrade/Connection headers.
Port conflict: change the nginx port mapping in docker-compose.yml, e.g. "8443:443".
Model data on another disk: change the volume path, e.g. - /data/ollama:/root/.ollama.