mirror of
https://github.com/open-webui/docs.git
synced 2026-03-26 13:18:42 +07:00
HTTPS moved
@@ -1,7 +0,0 @@
{
  "label": "HTTPS",
  "position": 10,
  "link": {
    "type": "generated-index"
  }
}
@@ -1,141 +0,0 @@
---
sidebar_position: 202
title: "HTTPS using Caddy"
---

:::warning

This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.

:::

## HTTPS Using Caddy

Ensuring secure communication between your users and Open WebUI is paramount. HTTPS (HyperText Transfer Protocol Secure) encrypts the data in transit, protecting it from eavesdropping and tampering. By configuring Caddy as a reverse proxy, you can seamlessly add HTTPS to your Open WebUI deployment, enhancing both security and trustworthiness.

This guide is a simple walkthrough for setting up an Ubuntu server with Caddy as a reverse proxy for Open WebUI, enabling HTTPS with automatic certificate management.

There are a few steps we'll follow to get everything set up:

- [HTTPS Using Caddy](#https-using-caddy)
  - [Docker](#docker)
    - [Installing Docker](#installing-docker)
  - [OpenWebUI](#openwebui)
    - [Installing OpenWebUI](#installing-openwebui)
  - [Caddy](#caddy)
    - [Installing Caddy](#installing-caddy)
    - [Configure Caddy](#configure-caddy)
  - [Testing HTTPS](#testing-https)
  - [Updating Open WebUI](#updating-open-webui)
    - [Stopping Open WebUI](#stopping-open-webui)
    - [Pulling the latest image](#pulling-the-latest-image)
    - [Starting Open WebUI](#starting-open-webui)

## Docker

Follow Docker's guide to [set up Docker's apt repository on Ubuntu](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository).

Note that the `docker-compose-plugin` is what provides the `docker compose` command; the standalone `docker-compose` package is included below only for compatibility with older tooling.

### Installing Docker

Here's the command I've used to install Docker on Ubuntu:

```bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
```

## OpenWebUI

I'd go ahead and create a directory for the Open WebUI project:

```bash
mkdir -p ~/open-webui
cd ~/open-webui
```

### Installing OpenWebUI

Create a `docker-compose.yml` file in the `~/open-webui` directory. I've left in a commented section for setting some environment variables for Qdrant, but you can follow the same pattern for any other [environment variables](https://docs.openwebui.com/reference/env-configuration) you might need to set.

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "8080:8080"
    volumes:
      - ./data:/app/backend/data
    # environment:
    #   - "QDRANT_API_KEY=API_KEY_HERE"
    #   - "QDRANT_URI=https://example.com"
    restart: unless-stopped
```

## Caddy

Caddy is a powerful web server that automatically obtains and renews TLS certificates for you, making it an excellent choice for serving Open WebUI over HTTPS.

### Installing Caddy

Follow the [guide to install Caddy on Ubuntu](https://caddyserver.com/docs/install#debian-ubuntu-raspbian).

### Configure Caddy

You're going to need to change the `Caddyfile` to use your domain.

To do that, edit the file `/etc/caddy/Caddyfile`:

```bash
sudo nano /etc/caddy/Caddyfile
```

The configuration should contain the following:

```caddyfile
your-domain.com {
    reverse_proxy localhost:8080
}
```

Make sure to replace `your-domain.com` with your actual domain name, then reload Caddy with `sudo systemctl reload caddy` so the change takes effect.

## Testing HTTPS

Assuming you've already set up your DNS records to point to your server's IP address, you can test whether Open WebUI is accessible via HTTPS by starting the stack in the `~/open-webui` directory:

```bash
cd ~/open-webui
docker compose up -d
```

You should now be able to access Open WebUI at `https://your-domain.com`.

## Updating Open WebUI

I wanted to include a quick note on how to update Open WebUI without losing your data. Since we're using a volume to store the data, you can simply pull the latest image and restart the container.

### Stopping Open WebUI

First, stop and remove the existing container:

```bash
docker rm -f open-webui
```

### Pulling the latest image

Then pull the latest image:

```bash
docker pull ghcr.io/open-webui/open-webui:main
```

### Starting Open WebUI

Now you can start the Open WebUI container again:

```bash
docker compose up -d
```

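The three update steps above can also be collected into one small helper. This is a sketch under the assumption that your compose file lives in `~/open-webui`; the script name `update-openwebui.sh` is my own. The example writes the script to `/tmp` and syntax-checks it rather than running it:

```shell
# Write a hypothetical update helper to /tmp (assumes ~/open-webui holds docker-compose.yml)
cat > /tmp/update-openwebui.sh <<'EOF'
#!/bin/bash
set -euo pipefail
cd ~/open-webui
docker rm -f open-webui                          # stop and remove the old container
docker pull ghcr.io/open-webui/open-webui:main   # fetch the latest image
docker compose up -d                             # recreate; chat data persists in ./data
EOF
chmod +x /tmp/update-openwebui.sh
bash -n /tmp/update-openwebui.sh && echo "syntax OK"
```

Once you're happy with it, move it somewhere on your `PATH` and run it whenever you want to update.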
@@ -1,202 +0,0 @@
---
sidebar_position: 201
title: "HTTPS using HAProxy"
---

:::warning

This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.

:::

# HAProxy Configuration for Open WebUI

HAProxy (High Availability Proxy) is a specialized load-balancing and reverse proxy solution that is highly configurable and designed to handle large numbers of connections with a relatively low resource footprint. For more information, see https://www.haproxy.org/.

## Install HAProxy and Let's Encrypt

First, install HAProxy and Let's Encrypt's certbot:

### Redhat derivatives

```shell
sudo dnf install haproxy certbot openssl -y
```

### Debian derivatives

```shell
sudo apt install haproxy certbot openssl -y
```

## HAProxy Configuration Basics

HAProxy's configuration is stored in `/etc/haproxy/haproxy.cfg` by default. This file contains all the configuration directives that determine how HAProxy will operate.

The base configuration for HAProxy to work with Open WebUI is pretty simple.

```shell
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # adjust the dh-param if too low
    tune.ssl.default-dh-param 2048

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor #except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    300s
    timeout queue           2m
    timeout connect         120s
    timeout client          10m
    timeout server          10m
    timeout http-keep-alive 120s
    timeout check           10s
    maxconn                 3000

# http
frontend web
    # Non-SSL
    bind 0.0.0.0:80
    # SSL/TLS
    bind 0.0.0.0:443 ssl crt /path/to/ssl/folder/

    # Let's Encrypt SSL
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if letsencrypt-acl

    # Subdomain method
    acl chat-acl hdr(host) -i subdomain.domain.tld
    # Path method
    acl chat-acl path_beg /owui/
    use_backend owui_chat if chat-acl

# Pass SSL requests to Let's Encrypt
backend letsencrypt-backend
    server letsencrypt 127.0.0.1:8688

# OWUI Chat
backend owui_chat
    # add X-FORWARDED-FOR
    option forwardfor
    # add X-CLIENT-IP
    http-request add-header X-CLIENT-IP %[src]
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    server chat <ip>:3000
```

## WebSocket and HTTP/2 Compatibility

Starting with recent versions (including HAProxy 3.x), HAProxy may enable HTTP/2 by default. While HTTP/2 supports WebSockets (RFC 8441), some clients or backend configurations may experience "freezes" or unresponsiveness when icons or data start loading via WebSockets over an H2 tunnel.

If you experience these issues:

1. **Force HTTP/1.1 for WebSockets**: Add `option h2-workaround-bogus-websocket-clients` to your `frontend` or `defaults` section. This prevents HAProxy from advertising RFC 8441 support to the client, forcing a fallback to the more stable HTTP/1.1 Upgrade mechanism.
2. **Backend version**: Ensure your backend connection is using HTTP/1.1 (the default for `mode http`).

Example addition to your `defaults` or `frontend`:

```shell
defaults
    # ... other settings
    option h2-workaround-bogus-websocket-clients
```

You will see that we have ACL records (routers) for both Open WebUI and Let's Encrypt. To use WebSockets with OWUI, you need to have SSL configured, and the easiest way to do that is to use Let's Encrypt.

You can use either the subdomain method or the path method for routing traffic to Open WebUI. The subdomain method requires a dedicated subdomain (e.g., chat.yourdomain.com), while the path method allows you to access Open WebUI through a specific path on your domain (e.g., yourdomain.com/owui/). Choose the method that best suits your needs and update the configuration accordingly.

:::info

You will need to expose ports 80 and 443 to your HAProxy server. These ports are required for Let's Encrypt to validate your domain and for HTTPS traffic. You will also need to ensure your DNS records are properly configured to point to your HAProxy server. If you are running HAProxy at home, you will need to use port forwarding in your router to forward ports 80 and 443 to your HAProxy server.

:::

## Issuing SSL Certificates with Let's Encrypt

Before starting HAProxy, you will want to generate a self-signed certificate to use as a placeholder until Let's Encrypt issues a proper one. Here's how to generate a self-signed certificate:

```shell
openssl req -x509 -newkey rsa:2048 -keyout /tmp/haproxy.key -out /tmp/haproxy.crt -days 3650 -nodes -subj "/CN=localhost"
```

Then combine the key and certificate into a PEM file that HAProxy can use, creating the certs directory first if it doesn't exist:

```shell
sudo mkdir -p /etc/haproxy/certs
cat /tmp/haproxy.crt /tmp/haproxy.key > /etc/haproxy/certs/haproxy.pem
```

:::info

Make sure you update the HAProxy configuration based on your needs and environment.

:::

Once you have your HAProxy configuration set up, you can use certbot to obtain and manage your SSL certificates. Certbot will handle the validation process with Let's Encrypt and automatically renew your certificates when they are close to expiring (assuming you use the certbot auto-renewal service).

You can validate the HAProxy configuration by running `haproxy -c -f /etc/haproxy/haproxy.cfg`. If there are no errors, you can start HAProxy with `systemctl start haproxy` and verify it's running with `systemctl status haproxy`.

To ensure HAProxy starts with the system, run `systemctl enable haproxy`.

When you have HAProxy configured, you can use Let's Encrypt to issue your valid SSL certificate.

First, you will need to register with Let's Encrypt. You should only need to do this one time:

`certbot register --agree-tos --email your@email.com --non-interactive`

Then you can request your certificate:

```shell
certbot certonly -n --standalone --preferred-challenges http --http-01-port 8688 -d yourdomain.com
```

Once the certificate is issued, you will need to merge the certificate and private key files into a single PEM file that HAProxy can use:

```shell
cat /etc/letsencrypt/live/{domain}/fullchain.pem /etc/letsencrypt/live/{domain}/privkey.pem > /etc/haproxy/certs/{domain}.pem
chmod 600 /etc/haproxy/certs/{domain}.pem
chown haproxy:haproxy /etc/haproxy/certs/{domain}.pem
```

You can then restart HAProxy to apply the new certificate:

`systemctl restart haproxy`

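Each renewal repeats the merge-and-restart steps above, so it is worth automating them. Below is a sketch of a certbot deploy hook; the script name and the `yourdomain.com` placeholder are mine, and in practice the hook would live in `/etc/letsencrypt/renewal-hooks/deploy/` so certbot runs it after every successful renewal. The example writes it to `/tmp` and syntax-checks it:

```shell
# Hypothetical deploy hook: rebuild the combined PEM and reload HAProxy after renewal
cat > /tmp/haproxy-deploy-hook.sh <<'EOF'
#!/bin/bash
set -euo pipefail
DOMAIN="yourdomain.com"
cat "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" \
    "/etc/letsencrypt/live/$DOMAIN/privkey.pem" > "/etc/haproxy/certs/$DOMAIN.pem"
chmod 600 "/etc/haproxy/certs/$DOMAIN.pem"
chown haproxy:haproxy "/etc/haproxy/certs/$DOMAIN.pem"
systemctl reload haproxy   # reload picks up the new cert without dropping connections
EOF
chmod +x /tmp/haproxy-deploy-hook.sh
bash -n /tmp/haproxy-deploy-hook.sh && echo "hook syntax OK"
```

Using `reload` rather than `restart` here lets HAProxy swap certificates without interrupting established sessions.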
## HAProxy Manager (Easy Deployment Option)

If you would like something to manage your HAProxy configuration and Let's Encrypt SSLs automatically, I have written a simple Python script and created a Docker container you can use to create and manage your HAProxy config and handle the Let's Encrypt certificate lifecycle.

https://github.com/shadowdao/haproxy-manager

:::warning

Please do not expose port 8000 publicly if you use the script or container!

:::
@@ -1,390 +0,0 @@
---
sidebar_position: 200
title: "HTTPS using Nginx"
---

:::warning

This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.

:::

# HTTPS using Nginx

Ensuring secure communication between your users and Open WebUI is paramount. HTTPS (HyperText Transfer Protocol Secure) encrypts the data in transit, protecting it from eavesdropping and tampering. By configuring Nginx as a reverse proxy, you can seamlessly add HTTPS to your Open WebUI deployment, enhancing both security and trustworthiness.

This guide provides three methods to set up HTTPS:

- **Self-Signed Certificates**: Ideal for development and internal use, using Docker.
- **Let's Encrypt**: Perfect for production environments requiring trusted SSL certificates, using Docker.
- **Windows+Self-Signed**: Simplified instructions for development and internal use on Windows, no Docker required.

:::danger Critical: Configure CORS for WebSocket Connections

A very common and difficult-to-debug issue with WebSocket connections is a misconfigured Cross-Origin Resource Sharing (CORS) policy. When running Open WebUI behind a reverse proxy like Nginx Proxy Manager, you **must** set the `CORS_ALLOW_ORIGIN` environment variable in your Open WebUI configuration.

Failure to do so will cause WebSocket connections to fail, even if you have enabled "Websockets support" in Nginx Proxy Manager.

### HTTP/2 and WebSockets

If you enable **HTTP/2** on your Nginx server, ensure that your proxy configuration still uses **HTTP/1.1** for the connection to the Open WebUI backend. This is crucial because most WebUI features (like streaming and real-time updates) rely on WebSockets, which are more stable when handled via the HTTP/1.1 `Upgrade` mechanism than over the newer RFC 8441 (WebSockets over H2) in many proxy environments.

In your Nginx location block, always include:

```nginx
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```

:::

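As a concrete illustration, in a Docker Compose setup the `CORS_ALLOW_ORIGIN` variable from the warning above might be set as follows (the domain is a placeholder; use the origin your users actually browse to through the proxy):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Must match the scheme and host users reach through the reverse proxy
      - CORS_ALLOW_ORIGIN=https://your-domain.com
```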
:::danger Critical: Disable Proxy Buffering for SSE Streaming

**This is the most common cause of garbled markdown and broken streaming responses.**

When Nginx's `proxy_buffering` is enabled (the default!), it re-chunks SSE streams arbitrarily. This breaks markdown tokens across chunk boundaries (for example, `**bold**` becomes `**` + `bold` + `**`), causing corrupted output with visible `##`, `**`, or missing words.

**You MUST include these directives in your Nginx location block:**

```nginx
# CRITICAL: Disable buffering for SSE streaming
proxy_buffering off;
proxy_cache off;
```

**Symptoms if you forget this:**

- Raw markdown tokens visible (`##`, `**`, `###`)
- Bold/heading markers appearing incorrectly
- Words or sections randomly missing from responses
- Streaming works perfectly with buffering disabled, breaks with it enabled

**Bonus:** Disabling buffering also makes streaming responses **significantly faster**, as content flows directly to the client without Nginx's buffering delay.

:::

Choose the method that best fits your deployment needs.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

import NginxProxyManager from '../tab-nginx/NginxProxyManager.md';
import SelfSigned from '../tab-nginx/SelfSigned.md';
import LetsEncrypt from '../tab-nginx/LetsEncrypt.md';
import Windows from '../tab-nginx/Windows.md';

<!-- markdownlint-disable-next-line MD033 -->
<Tabs>
  <TabItem value="NginxProxyManager" label="Nginx Proxy Manager">
    <NginxProxyManager />
  </TabItem>
  <TabItem value="letsencrypt" label="Let's Encrypt">
    <LetsEncrypt />
  </TabItem>
  <TabItem value="selfsigned" label="Self-Signed">
    <SelfSigned />
  </TabItem>
  <TabItem value="windows" label="Windows">
    <Windows />
  </TabItem>
</Tabs>

## Complete Optimized NGINX Configuration

This section provides a production-ready NGINX configuration optimized for Open WebUI streaming, WebSocket connections, and high-concurrency deployments.

### Upstream Configuration

Define an upstream with keepalive connections to reduce connection setup overhead:

```nginx
upstream openwebui {
    server 127.0.0.1:3000;
    keepalive 128;              # Persistent connections
    keepalive_timeout 1800s;    # 30 minutes
    keepalive_requests 10000;
}
```

### Timeout Configuration

Long-running LLM completions require extended timeouts:

```nginx
location /api/ {
    proxy_connect_timeout 1800;  # 30 minutes
    proxy_send_timeout 1800;
    proxy_read_timeout 1800;
}

# WebSocket connections need even longer timeouts
location ~ ^/(ws/|socket\.io/) {
    proxy_connect_timeout 86400;  # 24 hours
    proxy_send_timeout 86400;
    proxy_read_timeout 86400;
}
```

### Header and Body Size Limits

Prevent errors with large requests or OAuth tokens:

```nginx
# In http {} or server {} block
client_max_body_size 100M;          # Large file uploads
proxy_buffer_size 128k;             # Large headers (OAuth tokens)
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
large_client_header_buffers 4 32k;
```

### Common Streaming Mistakes

| Setting | Impact on Streaming |
|---------|---------------------|
| `gzip on` with `application/json` | 🔴 Buffers responses for compression |
| `proxy_buffering on` | 🔴 Buffers the entire response |
| `proxy_request_buffering on` | 🟡 Should be turned off |
| `tcp_nodelay off` | 🔴 **Most critical:** Nagle's algorithm delays small packets (~200 ms); set `tcp_nodelay on` so packets are sent immediately |
| `chunked_transfer_encoding on` | 🟡 Can break SSE |
| `proxy_cache` enabled on `/api/` | 🟡 Adds overhead |
| `X-Accel-Buffering "yes"` | 🟡 Set this header to `"no"` for extra safety |
| HTTP/2 | 🟡 If streaming lags or ends before the last chunk reaches the frontend, switching to HTTP/1.1 may help |

### Full Example Configuration

```nginx
upstream openwebui {
    server 127.0.0.1:3000;
    keepalive 128;
    keepalive_timeout 1800s;
    keepalive_requests 10000;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;

    # SSL configuration...

    # Compression - EXCLUDE streaming content types
    gzip on;
    gzip_types text/plain text/css application/javascript image/svg+xml;
    # DO NOT include: application/json, text/event-stream

    # API endpoints - streaming optimized
    location /api/ {
        proxy_pass http://openwebui;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # CRITICAL: Disable all buffering for streaming
        gzip off;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_cache off;
        tcp_nodelay on;
        add_header X-Accel-Buffering "no" always;
        add_header Cache-Control "no-store" always;

        # Extended timeouts for LLM completions
        proxy_connect_timeout 1800;
        proxy_send_timeout 1800;
        proxy_read_timeout 1800;
    }

    # WebSocket endpoints
    location ~ ^/(ws/|socket\.io/) {
        proxy_pass http://openwebui;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        gzip off;
        proxy_buffering off;
        proxy_cache off;

        # 24-hour timeout for persistent connections
        proxy_connect_timeout 86400;
        proxy_send_timeout 86400;
        proxy_read_timeout 86400;
    }

    # Static assets - CAN buffer and cache
    location /static/ {
        proxy_pass http://openwebui;
        proxy_buffering on;
        proxy_cache_valid 200 7d;
        add_header Cache-Control "public, max-age=604800, immutable";
    }

    # Default location
    location / {
        proxy_pass http://openwebui;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

---

## Caching Configuration

Proper caching significantly improves Open WebUI performance by reducing backend load and speeding up page loads. This section provides guidance for advanced users who want to implement server-side and client-side caching.

### Cache Zones

Define cache zones in your nginx `http` block to store cached responses:

```nginx
# General cache for pages and assets
proxy_cache_path /var/cache/nginx/openwebui levels=1:2
    keys_zone=OPENWEBUI_CACHE:10m max_size=1g inactive=60m use_temp_path=off;

# Dedicated cache for images (profile pictures, model avatars)
proxy_cache_path /var/cache/nginx/openwebui_images levels=1:2
    keys_zone=OPENWEBUI_IMAGES:10m max_size=2g inactive=7d use_temp_path=off;
```

:::note Create Cache Directories

You must create these directories and set proper ownership before nginx can use them:

```bash
sudo mkdir -p /var/cache/nginx/openwebui /var/cache/nginx/openwebui_images
sudo chown -R www-data:www-data /var/cache/nginx
```

Replace `www-data` with your nginx user (check with `ps aux | grep nginx`). Common alternatives: `nginx`, `nobody`.

:::

### What to Cache

| Content Type | Cache Duration | Notes |
|--------------|----------------|-------|
| Static assets (CSS, JS, fonts) | 7-30 days | Use `immutable` for versioned assets |
| Profile/model images | 1 day | Balance freshness vs performance |
| Static files (`/static/`) | 7 days | Favicons, default avatars |
| HTML pages | 5 minutes | Short cache with revalidation |
| Uploaded file content | 1 day | User uploads, generated images |

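One way to express durations like these in nginx itself is a `map` on the response content type feeding the `expires` directive (a sketch of mine, not part of the original guide; the `$owui_expires` variable name is arbitrary and the durations should be tuned to your deployment):

```nginx
# In the http {} block: pick a client-cache lifetime per content type
map $sent_http_content_type $owui_expires {
    default                 off;   # dynamic responses: no Expires header
    ~^image/                1d;    # profile/model images
    ~^font/                 30d;   # effectively immutable
    text/css                7d;
    application/javascript  7d;
}

server {
    # ...
    expires $owui_expires;
}
```

Keeping the lifetimes in one `map` makes it easy to adjust the whole policy in one place.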
### What to Never Cache

:::danger Critical: Never Cache Authentication

These paths must **never** be cached, to prevent security issues and broken logins:

- `/api/v1/auths/` - Authentication endpoints
- `/oauth/` - OAuth/SSO callbacks
- `/api/` (general) - Dynamic API responses
- `/ws/` - WebSocket connections

Always include these directives for auth endpoints:

```nginx
proxy_no_cache 1;
proxy_cache_bypass 1;
add_header Cache-Control "no-store, no-cache, must-revalidate";
```

:::

### Example: Image Caching

Profile images and model avatars benefit greatly from caching:

```nginx
# User and model profile images
location ~ ^/api/v1/(users/[^/]+/profile/image|models/model/profile/image)$ {
    proxy_pass http://your_backend;

    proxy_cache OPENWEBUI_IMAGES;
    proxy_cache_valid 200 302 1d;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;
    proxy_cache_key "$request_uri$is_args$args";

    # Force caching even without backend cache headers
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_hide_header Set-Cookie;

    # Client-side caching
    add_header Cache-Control "public, max-age=86400, stale-while-revalidate=604800" always;
    add_header X-Cache-Status $upstream_cache_status always;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

### Example: Static Asset Caching

```nginx
location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|otf|eot)$ {
    proxy_pass http://your_backend;

    proxy_cache OPENWEBUI_CACHE;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;

    add_header Cache-Control "public, max-age=2592000";  # 30 days
    add_header X-Cache-Status $upstream_cache_status;

    etag on;
    if_modified_since exact;
}
```

### Cache Debugging

Add the `X-Cache-Status` header to verify caching is working:

```nginx
add_header X-Cache-Status $upstream_cache_status always;
```

Check the header in browser DevTools:

- `HIT` - Served from cache
- `MISS` - Fetched from backend, now cached
- `EXPIRED` - Cache expired, refreshed
- `BYPASS` - Cache intentionally skipped

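You can also check from the command line; against a live server that would be `curl -sI https://your-domain.com/static/favicon.png | grep -i x-cache-status` (the URL is a placeholder). The filtering step is demonstrated below on a canned header so the pipeline is concrete without needing a running server:

```shell
# Simulate response headers and extract the cache status, as the curl pipeline would
printf 'HTTP/1.1 200 OK\nX-Cache-Status: HIT\nContent-Type: image/png\n' |
  grep -i '^x-cache-status'
# prints: X-Cache-Status: HIT
```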
### Trade-offs

:::warning Cache Invalidation

When images are cached aggressively, users may not see immediate updates after changing their profile picture. Consider:

- **Shorter cache times** (e.g., 1 hour) if users frequently update images
- **Longer cache times** (e.g., 1 day) for better performance in stable deployments
- The cache can be cleared manually with `rm -rf /var/cache/nginx/openwebui_images/*`

:::

---

||||
## Next Steps

After setting up HTTPS, access Open WebUI securely at:

- [https://localhost](https://localhost)

Ensure that your DNS records are correctly configured if you're using a domain name. For production environments, it's recommended to use Let's Encrypt for trusted SSL certificates.

---
@@ -1,320 +0,0 @@
### Let's Encrypt

Let's Encrypt provides free SSL certificates trusted by most browsers, ideal for securing your production environment. 🔐

This guide uses a two-phase approach:

1. **Phase 1:** Temporarily run Nginx to prove you own the domain and get a certificate from Let's Encrypt.
2. **Phase 2:** Reconfigure Nginx to use the new certificate for a secure HTTPS connection.

#### Prerequisites

* A **domain name** (e.g., `my-webui.com`) with a **DNS `A` record** pointing to your server's public IP address.
* **Docker** and **Docker Compose** installed on your server.
* Basic understanding of running commands in a terminal.

:::info
**Heads up!** Let's Encrypt **cannot** issue certificates for an IP address. You **must** use a domain name.
:::

-----

### Step 1: Initial Setup for Certificate Validation

First, we'll set up the necessary files and a temporary Nginx configuration that allows Let's Encrypt's servers to verify your domain.

1. **Make sure you followed the [Prerequisites](#prerequisites) above.**

2. **Create the Directory Structure**

   From your project's root directory, run this command to create folders for your Nginx configuration and Let's Encrypt certificates:

   ```bash
   mkdir -p nginx/conf.d ssl/certbot/conf ssl/certbot/www
   ```

3. **Create a Temporary Nginx Configuration**

   Create the file `nginx/conf.d/open-webui.conf`. This initial config only listens on port 80 and serves the validation files for Certbot.

   ⚠️ **Remember to replace `<YOUR_DOMAIN_NAME>`** with your actual domain.

   ```nginx
   # nginx/conf.d/open-webui.conf

   server {
       listen 80;
       listen [::]:80;
       server_name <YOUR_DOMAIN_NAME>;

       # Route for Let's Encrypt validation challenges
       location /.well-known/acme-challenge/ {
           root /var/www/certbot;
       }

       # All other requests will be ignored for now
       location / {
           return 404;
       }
   }
   ```

4. **Update Your `docker-compose.yml`**

   Add the `nginx` service to your `docker-compose.yml` and ensure your `open-webui` service is configured to use the shared Docker network.

   ```yaml
   services:
     nginx:
       image: nginx:alpine
       restart: always
       ports:
         # Expose HTTP and HTTPS ports to the host machine
         - "80:80"
         - "443:443"
       volumes:
         # Mount Nginx configs and SSL certificate data
         - ./nginx/conf.d:/etc/nginx/conf.d
         - ./ssl/certbot/conf:/etc/letsencrypt
         - ./ssl/certbot/www:/var/www/certbot
       depends_on:
         - open-webui
       networks:
         - open-webui-network

     open-webui:
       # Your existing open-webui configuration...
       # ...
       # Ensure it's on the same network
       networks:
         - open-webui-network
       # Expose the port internally to the Docker network.
       # You do NOT need to publish it to the host (no `ports` section is needed here).
       expose:
         - 8080

   networks:
     open-webui-network:
       driver: bridge
   ```

-----
|
||||
|
||||
### Step 2: Obtain the SSL Certificate

Now we'll run a script that uses Docker to fetch the certificate.

1. **Create the Certificate Request Script**

    Create an executable script named `enable_letsencrypt.sh` in your project root.

    ⚠️ **Remember to replace `<YOUR_DOMAIN_NAME>` and `<YOUR_EMAIL_ADDRESS>`** with your actual information.

    ```bash
    #!/bin/bash
    # enable_letsencrypt.sh

    DOMAIN="<YOUR_DOMAIN_NAME>"
    EMAIL="<YOUR_EMAIL_ADDRESS>"

    echo "### Obtaining SSL certificate for $DOMAIN ###"

    # Start Nginx to serve the challenge
    docker compose up -d nginx

    # Run Certbot in a container to get the certificate
    docker run --rm \
      -v "./ssl/certbot/conf:/etc/letsencrypt" \
      -v "./ssl/certbot/www:/var/www/certbot" \
      certbot/certbot certonly \
      --webroot \
      --webroot-path=/var/www/certbot \
      --email "$EMAIL" \
      --agree-tos \
      --no-eff-email \
      --force-renewal \
      -d "$DOMAIN"

    if [[ $? != 0 ]]; then
      echo "Error: Failed to obtain SSL certificate."
      docker compose stop nginx
      exit 1
    fi

    # Stop Nginx before we apply the final config
    docker compose stop nginx
    echo "### Certificate obtained successfully! ###"
    ```

2. **Make the Script Executable**

    ```bash
    chmod +x enable_letsencrypt.sh
    ```

3. **Run the Script**

    Execute the script. It will automatically start Nginx, request the certificate, and then stop Nginx.

    ```bash
    ./enable_letsencrypt.sh
    ```

-----
### Important: Caching Configuration

When using Nginx with Open WebUI, proper caching is crucial for performance while ensuring authentication remains secure. The configuration in the next step implements these rules:

- **Cached**: Static assets (CSS, JS, fonts, images) for better performance
- **Not Cached**: Authentication endpoints, API calls, SSO/OAuth callbacks, and session data
- **Result**: Faster page loads without breaking login functionality
### Step 3: Finalize Nginx Configuration for HTTPS

With the certificate saved in your `ssl` directory, you can now update the Nginx configuration to enable HTTPS.

1. **Update the Nginx Configuration for SSL**

    **Replace the entire contents** of `nginx/conf.d/open-webui.conf` with the final configuration below.

    ⚠️ **Replace all 4 instances of `<YOUR_DOMAIN_NAME>`** with your domain.

    ```nginx
    # nginx/conf.d/open-webui.conf

    # Redirect all HTTP traffic to HTTPS
    server {
        listen 80;
        listen [::]:80;
        server_name <YOUR_DOMAIN_NAME>;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        http2 on;
        server_name <YOUR_DOMAIN_NAME>;

        ssl_certificate /etc/letsencrypt/live/<YOUR_DOMAIN_NAME>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<YOUR_DOMAIN_NAME>/privkey.pem;

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers 'TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:ECDHE-RSA-AES128-GCM-SHA256';
        ssl_prefer_server_ciphers off;

        location ~* ^/(auth|api|oauth|admin|signin|signup|signout|login|logout|sso)/ {
            proxy_pass http://open-webui:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_read_timeout 10m;
            proxy_buffering off;
            proxy_cache off;
            client_max_body_size 20M;

            proxy_no_cache 1;
            proxy_cache_bypass 1;
            add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0" always;
            add_header Pragma "no-cache" always;
            expires -1;
        }

        # Profile and model images - cached for performance
        location ~ ^/api/v1/(users/[^/]+/profile/image|models/model/profile/image)$ {
            proxy_pass http://open-webui:8080;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Cache images for 1 day
            expires 1d;
            add_header Cache-Control "public, max-age=86400";
        }

        location ~* \.(css|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
            proxy_pass http://open-webui:8080;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Cache static assets for 7 days
            expires 7d;
            add_header Cache-Control "public, immutable";
        }

        location / {
            proxy_pass http://open-webui:8080;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Extended timeout for long LLM completions (30 minutes)
            proxy_read_timeout 1800;
            proxy_send_timeout 1800;
            proxy_connect_timeout 1800;

            proxy_buffering off;
            proxy_cache off;
            client_max_body_size 20M;

            add_header Cache-Control "public, max-age=300, must-revalidate";
        }
    }
    ```
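It can help to sanity-check which request paths the no-cache `location` regex above actually captures before relying on the caching split. A quick sketch in Python, approximating Nginx's case-insensitive `~*` matching (the sample paths are illustrative, not an exhaustive list of Open WebUI routes):

```python
import re

# Approximation of the Nginx `~*` (case-insensitive regex) auth location.
# Nginx selects that block for any URI matching this pattern.
AUTH_PATTERN = re.compile(
    r"^/(auth|api|oauth|admin|signin|signup|signout|login|logout|sso)/",
    re.IGNORECASE,
)

def is_uncached_path(uri: str) -> bool:
    """Return True if the URI would hit the no-cache auth/API block."""
    return AUTH_PATTERN.search(uri) is not None

# Auth and API traffic is matched (and therefore never cached)...
assert is_uncached_path("/api/v1/chats/")
assert is_uncached_path("/OAuth/callback")
# ...while the UI root and static assets fall through to other blocks.
assert not is_uncached_path("/")
assert not is_uncached_path("/static/app.css")
```

If logins break after you customize caching, checking your modified pattern against these paths is a fast first diagnostic.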
2. **Launch All Services**

    Start both Nginx and Open WebUI with the final, secure configuration.

    ```bash
    docker compose up -d
    ```

-----
### Step 4: Access Your Secure WebUI

You can now access your Open WebUI instance securely via HTTPS.

➡️ **`https://<YOUR_DOMAIN_NAME>`**

-----
### (Optional) Step 5: Setting Up Automatic Renewal

Let's Encrypt certificates expire every 90 days. You should set up a `cron` job to renew them automatically.

1. Open the crontab editor:

    ```bash
    sudo crontab -e
    ```

2. Add the following line to run a renewal check every day at 3:30 AM. It will only renew if the certificate is close to expiring.

    ⚠️ **Replace each `<absolute_path>`** with the absolute path to your project directory.

    ```cron
    30 3 * * * /usr/bin/docker run --rm -v "<absolute_path>/ssl/certbot/conf:/etc/letsencrypt" -v "<absolute_path>/ssl/certbot/www:/var/www/certbot" certbot/certbot renew --quiet --webroot --webroot-path=/var/www/certbot --deploy-hook "/usr/bin/docker compose -f <absolute_path>/docker-compose.yml restart nginx"
    ```
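To see when your current certificate expires without waiting for the cron job, you can inspect it with `openssl`. A sketch, assuming the directory layout from this guide; here a throwaway self-signed certificate is generated first so the commands are runnable as-is, but you would point `CERT` at your real `ssl/certbot/conf/live/<YOUR_DOMAIN_NAME>/fullchain.pem`:

```shell
# Throwaway cert standing in for the real fullchain.pem (assumption for demo)
tmp=$(mktemp -d)
openssl req -x509 -nodes -days 90 -newkey rsa:2048 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=example.test" 2>/dev/null
CERT="$tmp/cert.pem"

# Print the certificate's subject and validity window
openssl x509 -in "$CERT" -noout -subject -dates

# Exit status 0 means the cert is still valid 30 days from now;
# certbot renew typically kicks in once fewer than 30 days remain
openssl x509 -in "$CERT" -noout -checkend $((30*24*3600)) \
  && echo "renewal not yet needed"
```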
### Nginx Proxy Manager

Nginx Proxy Manager (NPM) allows you to easily manage reverse proxies and secure your local applications, like Open WebUI, with valid SSL certificates from Let's Encrypt.
This setup enables HTTPS access, which is necessary for using voice input features on many mobile browsers due to their security requirements, without exposing the application's specific port directly to the internet.

#### Prerequisites

- A home server running Docker, with the open-webui container already running.
- A domain name (free options like DuckDNS or paid ones like Namecheap/GoDaddy).
- Basic knowledge of Docker and DNS configuration.
#### Nginx Proxy Manager Steps

1. **Create Directories for Nginx Files:**

    ```bash
    mkdir ~/nginx_config
    cd ~/nginx_config
    ```

2. **Set Up Nginx Proxy Manager with Docker:**

    Create a `docker-compose.yml`:

    ```bash
    nano docker-compose.yml
    ```

    ```yaml
    services:
      app:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'
          - '81:81'
          - '443:443'
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
    ```

    Run the container:

    ```bash
    docker-compose up -d
    ```
3. **Configure DNS and Domain:**

    - Log in to your domain provider (e.g., DuckDNS) and create a domain.
    - Point the domain to your proxy's local IP (e.g., 192.168.0.6).
    - If using DuckDNS, get an API token from their dashboard.

    For DuckDNS, both steps are done at https://www.duckdns.org/domains.
4. **Set Up SSL Certificates:**

    - Access Nginx Proxy Manager at `http://<server_ip>:81`, for example `192.168.0.6:81`.
    - Log in with the default credentials (`admin@example.com` / `changeme`) and change them when prompted.
    - Go to SSL Certificates → Add SSL Certificate → Let's Encrypt.
    - Enter your email and the domain names you got from DuckDNS. One domain name contains an asterisk and the other does not. Example: `*.hello.duckdns.org` and `hello.duckdns.org`.
    - Select "Use a DNS challenge", choose DuckDNS, and paste your API token. Example:

      ```
      dns_duckdns_token=f4e2a1b9-c78d-e593-b0d7-67f2e1c9a5b8
      ```

    - Agree to Let's Encrypt's terms and save. Increase the propagation time **if needed** (e.g., to 120 seconds).

5. **Create Proxy Hosts:**

    - For each service (e.g., openwebui, nextcloud), go to Hosts → Proxy Hosts → Add Proxy Host.
    - Fill in the domain name (e.g., openwebui.hello.duckdns.org).
    - Set the scheme to HTTP (default), enable **Websockets support**, and point to your Docker host's IP. If the open-webui container runs on the same machine as Nginx Proxy Manager, this is the same IP as before (example: `192.168.0.6`).
    - Select the SSL certificate generated earlier, force SSL, and enable HTTP/2.
:::danger Critical: Configure CORS for WebSocket Connections

A very common and difficult-to-debug issue with WebSocket connections is a misconfigured Cross-Origin Resource Sharing (CORS) policy. When running Open WebUI behind a reverse proxy like Nginx Proxy Manager, you **must** set the `CORS_ALLOW_ORIGIN` environment variable in your Open WebUI configuration.

Failure to do so will cause WebSocket connections to fail, even if you have enabled "Websockets support" in Nginx Proxy Manager.

**Example:**
If you access your UI at `https://openwebui.hello.duckdns.org`, you must set:

```bash
CORS_ALLOW_ORIGIN="https://openwebui.hello.duckdns.org"
```

You can also provide a semicolon-separated list of allowed domains. **Do not skip this step.**

:::

:::danger Critical: Disable Proxy Buffering for Streaming

**This is the most common cause of garbled markdown and broken streaming responses.**

In Nginx Proxy Manager, go to your proxy host → **Advanced** tab → and add these directives to the **Custom Nginx Configuration** field:

```nginx
proxy_buffering off;
proxy_cache off;
```

Without this, Nginx re-chunks SSE streams, breaking markdown formatting (visible `##`, `**`, missing words). Disabling buffering also makes streaming responses significantly faster.

:::

:::tip Extended Timeouts for Long Completions

Long LLM completions (30+ minutes for complex tasks) may exceed the default 60-second timeout. Add these directives in the **Advanced** tab → **Custom Nginx Configuration**:

```nginx
proxy_read_timeout 1800;
proxy_send_timeout 1800;
proxy_connect_timeout 1800;
```

This sets a 30-minute timeout. Adjust as needed for your use case.

:::

:::tip Caching Best Practice

While Nginx Proxy Manager handles most configuration automatically, be aware that:

- **Static assets** (CSS, JS, images) are cached by default for better performance
- **Authentication endpoints** should never be cached
- If you add custom caching rules in NPM's "Advanced" tab, ensure you exclude paths like `/api/`, `/auth/`, `/signup/`, `/signin/`, `/sso/`, `/admin/`, `/signout/`, `/oauth/`, `/login/`, and `/logout/`

The default NPM configuration handles this correctly - only modify caching if you know what you're doing.

:::
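If you run Open WebUI with Docker Compose, the `CORS_ALLOW_ORIGIN` variable can be set in the service's `environment` block. A minimal sketch; the image tag and domain are illustrative and should be replaced with your own:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Must exactly match the public origin you browse to (scheme + host)
      - CORS_ALLOW_ORIGIN=https://openwebui.hello.duckdns.org
```

Restart the container after changing environment variables so the new value takes effect.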
6. **Add your URL to Open WebUI (otherwise you will get an HTTPS error):**

    - Go to Open WebUI → Admin Panel → Settings → General.
    - In the **Webhook URL** field, enter the URL through which you will connect to Open WebUI via the Nginx reverse proxy, e.g., `hello.duckdns.org` or `openwebui.hello.duckdns.org`, depending on how you set up your proxy host.

#### Access the WebUI

Access Open WebUI via HTTPS at either `hello.duckdns.org` or `openwebui.hello.duckdns.org` (whichever way you set it up).

:::note

Firewall note: local firewall software (like Portmaster) might block internal Docker network traffic or required ports. If you experience issues, check your firewall rules to ensure the communication needed for this setup is allowed.

:::
### Self-Signed Certificate

Using self-signed certificates is suitable for development or internal use where trust is not a critical concern.

#### Self-Signed Certificate Steps

1. **Create Directories for Nginx Files:**

    ```bash
    mkdir -p conf.d ssl
    ```
2. **Create Nginx Configuration File:**

    **`conf.d/open-webui.conf`:**

    ```nginx
    server {
        listen 443 ssl;
        server_name your_domain_or_IP;

        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;
        ssl_protocols TLSv1.2 TLSv1.3;

        location ~* ^/(auth|api|oauth|admin|signin|signup|signout|login|logout|sso)/ {
            proxy_pass http://host.docker.internal:3000;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_buffering off;
            proxy_cache off;
            client_max_body_size 20M;
            proxy_read_timeout 10m;

            # Disable caching for auth endpoints
            proxy_no_cache 1;
            proxy_cache_bypass 1;
            add_header Cache-Control "no-store, no-cache, must-revalidate" always;
            expires -1;
        }

        # Profile and model images - cached for performance
        location ~ ^/api/v1/(users/[^/]+/profile/image|models/model/profile/image)$ {
            proxy_pass http://host.docker.internal:3000;
            proxy_http_version 1.1;
            proxy_set_header Host $host;

            # Cache images for 1 day
            expires 1d;
            add_header Cache-Control "public, max-age=86400";
        }

        location ~* \.(css|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
            proxy_pass http://host.docker.internal:3000;
            proxy_http_version 1.1;
            proxy_set_header Host $host;

            expires 7d;
            add_header Cache-Control "public, immutable";
        }

        location / {
            proxy_pass http://host.docker.internal:3000;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_buffering off;
            proxy_cache off;

            client_max_body_size 20M;

            # Extended timeout for long LLM completions (30 minutes)
            proxy_read_timeout 1800;
            proxy_send_timeout 1800;
            proxy_connect_timeout 1800;

            add_header Cache-Control "public, max-age=300, must-revalidate";
        }
    }
    ```
3. **Generate Self-Signed SSL Certificates:**

    ```bash
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -keyout ssl/nginx.key \
      -out ssl/nginx.crt \
      -subj "/CN=your_domain_or_IP"
    ```
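Before handing the pair to Nginx, it's worth confirming that the certificate and private key actually match, since a mismatch produces an opaque startup error. A sketch using `openssl`; it generates its own throwaway pair the same way as above so it runs anywhere, but for your real files substitute `ssl/nginx.crt` and `ssl/nginx.key`:

```shell
# Throwaway pair mirroring step 3 (assumption for demo purposes)
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$dir/nginx.key" -out "$dir/nginx.crt" \
  -subj "/CN=your_domain_or_IP" 2>/dev/null

# Subject and expiry date of the new certificate
openssl x509 -in "$dir/nginx.crt" -noout -subject -enddate

# The pair matches when both public keys hash identically
crt_pub=$(openssl x509 -in "$dir/nginx.crt" -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in "$dir/nginx.key" -pubout | openssl sha256)
[ "$crt_pub" = "$key_pub" ] && echo "certificate and key match"
```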
4. **Update Docker Compose Configuration:**

    Add the Nginx service to your `docker-compose.yml`:

    ```yaml
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "443:443"
        volumes:
          - ./conf.d:/etc/nginx/conf.d
          - ./ssl:/etc/nginx/ssl
        depends_on:
          - open-webui
    ```

5. **Start Nginx Service:**

    ```bash
    docker compose up -d nginx
    ```

#### Access the WebUI

Access Open WebUI via HTTPS at:

[https://your_domain_or_IP](https://your_domain_or_IP)

---
### Using a Self-Signed Certificate and Nginx on Windows without Docker

For basic internal/development installations, you can use nginx and a self-signed certificate to proxy Open WebUI to HTTPS, allowing the use of features such as microphone input over LAN. (By default, most browsers will not allow microphone input on insecure non-localhost URLs.)

This guide assumes you installed Open WebUI using pip and are running `open-webui serve`.

#### Step 1: Installing openssl for certificate generation

You will first need to install OpenSSL.

You can download and install precompiled binaries from the [Shining Light Productions (SLP)](https://slproweb.com/) website.

Alternatively, if you have [Chocolatey](https://chocolatey.org/) installed, you can use it to install OpenSSL quickly:

1. Open a command prompt or PowerShell.
2. Run the following command to install OpenSSL:

    ```bash
    choco install openssl -y
    ```
---

**Verify Installation**

After installation, open a command prompt and type:

```bash
openssl version
```

If it displays the OpenSSL version (e.g., `OpenSSL 3.x.x ...`), it is installed correctly.

#### Step 2: Installing nginx

Download the official Nginx for Windows from [nginx.org](https://nginx.org) or use a package manager like Chocolatey.
Extract the downloaded ZIP file to a directory (e.g., `C:\nginx`).
#### Step 3: Generate certificate

Run the following command:

```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nginx.key -out nginx.crt
```

Move the generated `nginx.key` and `nginx.crt` files to a folder of your choice, or to the `C:\nginx` directory.

#### Step 4: Configure nginx

Open `C:\nginx\conf\nginx.conf` in a text editor.

If you want Open WebUI to be accessible over your local LAN, be sure to note your LAN IP address using `ipconfig`, e.g., `192.168.1.15`.

Set it up as follows:
```conf
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  120;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;
        server_name 192.168.1.15;

        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name 192.168.1.15;

        ssl_certificate C:\\nginx\\nginx.crt;
        ssl_certificate_key C:\\nginx\\nginx.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
        ssl_prefer_server_ciphers on;

        location ~* ^/(auth|api|oauth|admin|signin|signup|signout|login|logout|sso)/ {
            proxy_pass http://localhost:8080;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_buffering off;
            proxy_cache off;
            client_max_body_size 20M;
            proxy_read_timeout 10m;

            add_header Cache-Control "no-store, no-cache, must-revalidate" always;
            expires -1;
        }

        # Profile and model images - cached for performance
        location ~ ^/api/v1/(users/[^/]+/profile/image|models/model/profile/image)$ {
            proxy_pass http://localhost:8080;
            proxy_http_version 1.1;
            proxy_set_header Host $host;

            # Cache images for 1 day
            expires 1d;
            add_header Cache-Control "public, max-age=86400";
        }

        location ~* \.(css|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
            proxy_pass http://localhost:8080;
            proxy_http_version 1.1;
            proxy_set_header Host $host;

            expires 7d;
            add_header Cache-Control "public, immutable";
        }

        location / {
            proxy_pass http://localhost:8080;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_buffering off;
            proxy_cache off;
            client_max_body_size 20M;

            # Extended timeout for long LLM completions (30 minutes)
            proxy_read_timeout 1800;
            proxy_send_timeout 1800;
            proxy_connect_timeout 1800;

            add_header Cache-Control "public, max-age=300, must-revalidate";
        }
    }
}
```
Save the file, and check that the configuration has no errors or syntax issues by running `nginx -t`. You may need to `cd C:\nginx` first, depending on how you installed it.

Run nginx by running `nginx`. If an nginx service is already started, you can reload the new config by running `nginx -s reload`.

---

You should now be able to access Open WebUI at https://192.168.1.15 (or your own LAN IP as appropriate). Be sure to allow access through Windows Firewall as needed.