mirror of https://github.com/open-webui/docs.git
synced 2026-04-03 14:38:45 +07:00

Commit: HTTPS moved

docs/reference/https/_category_.json (7 lines, new file)
@@ -0,0 +1,7 @@
{
  "position": 6,
  "link": {
    "type": "doc",
    "id": "reference/https/index"
  }
}
docs/reference/https/caddy.md (136 lines, new file)
@@ -0,0 +1,136 @@
---
sidebar_position: 202
title: "HTTPS using Caddy"
---

## HTTPS Using Caddy

Ensuring secure communication between your users and Open WebUI is paramount. HTTPS (HyperText Transfer Protocol Secure) encrypts the data transmitted, protecting it from eavesdropping and tampering. By configuring Caddy as a reverse proxy, you can seamlessly add HTTPS to your Open WebUI deployment, enhancing both security and trustworthiness.

This guide is a simple walkthrough for setting up an Ubuntu server with Caddy as a reverse proxy for Open WebUI, enabling HTTPS with automatic certificate management.

There are a few steps we'll follow to get everything set up:

- [HTTPS Using Caddy](#https-using-caddy)
  - [Docker](#docker)
    - [Installing Docker](#installing-docker)
  - [OpenWebUI](#openwebui)
    - [Installing OpenWebUI](#installing-openwebui)
  - [Caddy](#caddy)
    - [Installing Caddy](#installing-caddy)
    - [Configure Caddy](#configure-caddy)
  - [Testing HTTPS](#testing-https)
  - [Updating Open WebUI](#updating-open-webui)
    - [Stopping Open WebUI](#stopping-open-webui)
    - [Pulling the latest image](#pulling-the-latest-image)
    - [Starting Open WebUI](#starting-open-webui)

## Docker

Follow the [Docker guide](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) to set up Docker's apt repository.

I've also included the legacy `docker-compose` package, though it's the `docker-compose-plugin` that provides the `docker compose` command.

### Installing Docker

Here's the command I've used to install Docker on Ubuntu:

```bash
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
```

## OpenWebUI

Let's create a directory for the Open WebUI project:

```bash
mkdir -p ~/open-webui
cd ~/open-webui
```

### Installing OpenWebUI

Create a `docker-compose.yml` file in the `~/open-webui` directory. I've left in a commented section for setting some environment variables for Qdrant, but you can follow the same pattern for any other [environment variables](https://docs.openwebui.com/reference/env-configuration) you might need to set.

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "8080:8080"
    volumes:
      - ./data:/app/backend/data
    # environment:
    #   - "QDRANT_API_KEY=API_KEY_HERE"
    #   - "QDRANT_URI=https://example.com"
    restart: unless-stopped
```

## Caddy

Caddy is a powerful web server that automatically manages TLS certificates for you, making it an excellent choice for serving Open WebUI over HTTPS.

### Installing Caddy

Follow the [guide to install Caddy on Ubuntu](https://caddyserver.com/docs/install#debian-ubuntu-raspbian).

### Configure Caddy

You'll need to change the `Caddyfile` to use your domain.

To do that, edit the file `/etc/caddy/Caddyfile`:

```bash
sudo nano /etc/caddy/Caddyfile
```

The configuration should contain the following:

```caddyfile
your-domain.com {
    reverse_proxy localhost:8080
}
```

Make sure to replace `your-domain.com` with your actual domain name.
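Before applying the change, you can ask Caddy to check the file for syntax errors, then reload the service (this assumes Caddy was installed as a systemd service, which the apt package sets up by default):

```bash
caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
```

If `validate` reports no errors, the reload picks up the new domain without dropping existing connections.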

## Testing HTTPS

Assuming you've already set up your DNS records to point to your server's IP address, you can test whether Open WebUI is accessible via HTTPS by starting the stack in the `~/open-webui` directory:

```bash
cd ~/open-webui
docker compose up -d
```

You should now be able to access Open WebUI at `https://your-domain.com`.
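You can also check from the command line that Caddy has obtained a certificate and the proxy is responding (replace the domain as before):

```bash
curl -I https://your-domain.com
```

Once DNS has propagated and the certificate has been issued, the response headers should show a `200` status; a TLS error here usually means the certificate hasn't been issued yet.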

## Updating Open WebUI

A quick note on how to update Open WebUI without losing your data: since we're using a volume to store the data, you can simply pull the latest image and restart the container.

### Stopping Open WebUI

First, stop and remove the existing container:

```bash
docker rm -f open-webui
```

### Pulling the latest image

Then pull the latest image:

```bash
docker pull ghcr.io/open-webui/open-webui:main
```

### Starting Open WebUI

Now you can start the Open WebUI container again:

```bash
docker compose up -d
```
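Since this setup uses Compose, the same update can also be done in one pass with `docker compose pull`, which recreates the container for you on the next `up`:

```bash
cd ~/open-webui
docker compose pull
docker compose up -d
```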

docs/reference/https/haproxy.md (197 lines, new file)
@@ -0,0 +1,197 @@
---
sidebar_position: 201
title: "HTTPS using HAProxy"
---

# HAProxy Configuration for Open WebUI

HAProxy (High Availability Proxy) is a specialized load-balancing and reverse proxy solution that is highly configurable and designed to handle large numbers of connections with a relatively low resource footprint. For more information, please see https://www.haproxy.org/.

## Install HAProxy and Let's Encrypt

First, install HAProxy and Let's Encrypt's certbot:

### Red Hat derivatives

```shell
sudo dnf install haproxy certbot openssl -y
```

### Debian derivatives

```shell
sudo apt install haproxy certbot openssl -y
```

## HAProxy Configuration Basics

HAProxy's configuration is stored in `/etc/haproxy/haproxy.cfg` by default. This file contains all the configuration directives that determine how HAProxy will operate.

The base configuration for HAProxy to work with Open WebUI is pretty simple.

```shell
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # adjust the dh-param if too low
    tune.ssl.default-dh-param 2048

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode        http
    log         global
    option      httplog
    option      dontlognull
    option      http-server-close
    option      forwardfor  # except 127.0.0.0/8
    option      redispatch
    retries     3
    timeout http-request    300s
    timeout queue           2m
    timeout connect         120s
    timeout client          10m
    timeout server          10m
    timeout http-keep-alive 120s
    timeout check           10s
    maxconn                 3000

# http
frontend web
    # Non-SSL
    bind 0.0.0.0:80
    # SSL/TLS
    bind 0.0.0.0:443 ssl crt /path/to/ssl/folder/

    # Let's Encrypt SSL
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-backend if letsencrypt-acl

    # Subdomain method
    acl chat-acl hdr(host) -i subdomain.domain.tld
    # Path method
    acl chat-acl path_beg /owui/
    use_backend owui_chat if chat-acl

# Pass SSL requests to Let's Encrypt
backend letsencrypt-backend
    server letsencrypt 127.0.0.1:8688

# OWUI Chat
backend owui_chat
    # add X-FORWARDED-FOR
    option forwardfor
    # add X-CLIENT-IP
    http-request add-header X-CLIENT-IP %[src]
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    server chat <ip>:3000
```

## WebSocket and HTTP/2 Compatibility

Starting with recent versions (including HAProxy 3.x), HAProxy may enable HTTP/2 by default. While HTTP/2 supports WebSockets (RFC 8441), some clients or backend configurations may experience "freezes" or unresponsiveness when icons or data start loading via WebSockets over an H2 tunnel.

If you experience these issues:

1. **Force HTTP/1.1 for WebSockets**: Add `option h2-workaround-bogus-websocket-clients` to your `frontend` or `defaults` section. This prevents HAProxy from advertising RFC 8441 support to the client, forcing a fallback to the more stable HTTP/1.1 Upgrade mechanism.
2. **Backend version**: Ensure your backend connection is using HTTP/1.1 (the default for `mode http`).

Example addition to your `defaults` or `frontend`:

```shell
defaults
    # ... other settings
    option h2-workaround-bogus-websocket-clients
```

You will see that we have ACL records (routers) for both Open WebUI and Let's Encrypt. To use WebSockets with OWUI, you need to have SSL configured, and the easiest way to do that is to use Let's Encrypt.

You can use either the subdomain method or the path method for routing traffic to Open WebUI. The subdomain method requires a dedicated subdomain (e.g., chat.yourdomain.com), while the path method allows you to access Open WebUI through a specific path on your domain (e.g., yourdomain.com/owui/). Choose the method that best suits your needs and update the configuration accordingly.

:::info

You will need to expose ports 80 and 443 to your HAProxy server. These ports are required for Let's Encrypt to validate your domain and for HTTPS traffic. You will also need to ensure your DNS records are properly configured to point to your HAProxy server. If you are running HAProxy at home, you will need to use port forwarding in your router to forward ports 80 and 443 to your HAProxy server.

:::

## Issuing SSL Certificates with Let's Encrypt

Before starting HAProxy, you will want to generate a self-signed certificate to use as a placeholder until Let's Encrypt issues a proper one. Here's how to generate a self-signed certificate:

```shell
openssl req -x509 -newkey rsa:2048 -keyout /tmp/haproxy.key -out /tmp/haproxy.crt -days 3650 -nodes -subj "/CN=localhost"
```

Then combine the key and certificate into a PEM file that HAProxy can use:

```shell
cat /tmp/haproxy.crt /tmp/haproxy.key > /etc/haproxy/certs/haproxy.pem
```
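Note that `/etc/haproxy/certs` is not created by the haproxy package on every distribution. If the `cat` above fails, create the directory first and restrict its permissions, since it will hold private keys:

```shell
sudo mkdir -p /etc/haproxy/certs
sudo chmod 700 /etc/haproxy/certs
```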

:::info

Make sure you update the HAProxy configuration based on your needs and environment.

:::

Once you have your HAProxy configuration set up, you can use certbot to obtain and manage your SSL certificates. Certbot will handle the validation process with Let's Encrypt and automatically renew your certificates when they are close to expiring (assuming you use the certbot auto-renewal service).
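One caveat: HAProxy reads the combined PEM file, not the separate files certbot writes, so an automatic renewal will not take effect on its own. A common way to bridge this is a certbot deploy hook. The script below is a sketch following this guide's paths (the hook filename and cert directory are assumptions); certbot runs it after each successful renewal with `RENEWED_LINEAGE` set to the renewed certificate's live directory:

```shell
#!/bin/sh
# Save as /etc/letsencrypt/renewal-hooks/deploy/haproxy.sh and mark it executable.
# certbot sets RENEWED_LINEAGE to e.g. /etc/letsencrypt/live/yourdomain.com
domain=$(basename "$RENEWED_LINEAGE")

# Rebuild the combined PEM that HAProxy serves
cat "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" \
    > "/etc/haproxy/certs/$domain.pem"
chmod 600 "/etc/haproxy/certs/$domain.pem"

# Pick up the new certificate without dropping connections
systemctl reload haproxy
```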

You can validate the HAProxy configuration by running `haproxy -c -f /etc/haproxy/haproxy.cfg`. If there are no errors, you can start HAProxy with `systemctl start haproxy` and verify it's running with `systemctl status haproxy`.

To ensure HAProxy starts with the system, run `systemctl enable haproxy`.

Once HAProxy is configured, you can use Let's Encrypt to issue your valid SSL certificate.
First, you will need to register with Let's Encrypt. You should only need to do this one time:

`certbot register --agree-tos --email your@email.com --non-interactive`

Then you can request your certificate:

```shell
certbot certonly -n --standalone --preferred-challenges http --http-01-port 8688 -d yourdomain.com
```

Once the certificate is issued, you will need to merge the certificate and private key files into a single PEM file that HAProxy can use:

```shell
cat /etc/letsencrypt/live/{domain}/fullchain.pem /etc/letsencrypt/live/{domain}/privkey.pem > /etc/haproxy/certs/{domain}.pem
chmod 600 /etc/haproxy/certs/{domain}.pem
chown haproxy:haproxy /etc/haproxy/certs/{domain}.pem
```

You can then restart HAProxy to apply the new certificate:

`systemctl restart haproxy`

## HAProxy Manager (Easy Deployment Option)

If you would like something to manage your HAProxy configuration and Let's Encrypt SSLs automatically, I have written a simple Python script and created a Docker container you can use to create and manage your HAProxy config and handle the Let's Encrypt certificate lifecycle:

https://github.com/shadowdao/haproxy-manager

:::warning

Please do not expose port 8000 publicly if you use the script or container!

:::
docs/reference/https/index.md (40 lines, new file)
@@ -0,0 +1,40 @@
---
sidebar_position: 6
title: "HTTPS & Reverse Proxies"
---

# Secure Your Open WebUI with HTTPS 🔒

While **HTTPS is not strictly required** for basic local operation, it is **highly recommended** for all deployments and **mandatory** for enabling specific features like Voice Calls.

:::warning Critical Feature Dependency
Modern browsers require a **Secure Context** (HTTPS) to access the microphone.
**Voice Calls will NOT work** if you access Open WebUI via `http://` (unless using `localhost`).
:::

## Why HTTPS Matters 🛡️

Enabling HTTPS encryption provides essential benefits:

1. **🔒 Privacy & Security**: Encrypts all data between the user and the server, protecting chat history and credentials.
2. **🎤 Feature Unlocking**: Satisfies the browser's Secure Context requirement for Microphone (Voice Mode) and Camera access.
3. **💪 Integrity**: Ensures data is not tampered with in transit.
4. **✅ Trust**: Displays the padlock icon, reassuring users that the service is secure.

## Choosing Your Solution 🛠️

The best method depends on your infrastructure.

### 🏠 For Local/Docker Users
If you are running Open WebUI with Docker, the standard approach is to use a **Reverse Proxy**. This sits in front of Open WebUI and handles the SSL encryption.

* **[Nginx](./nginx)**: The industry standard. Highly configurable, great performance.
* **[Caddy](./caddy)**: **Easiest option**. Automatically obtains and renews Let's Encrypt certificates with minimal config.
* **[HAProxy](./haproxy)**: Robust choice for advanced load balancing needs.

### ☁️ For Cloud Deployments
* **Cloud Load Balancers**: (AWS ALB, Google Cloud Load Balancing) often handle SSL termination natively.
* **Cloudflare Tunnel**: Excellent for exposing localhost to the web securely without opening ports.

### 🧪 For Development
* **Ngrok**: Good for quickly testing Voice features locally. *Not for production.*
docs/reference/https/nginx.md (385 lines, new file)
@@ -0,0 +1,385 @@
---
sidebar_position: 200
title: "HTTPS using Nginx"
---

# HTTPS using Nginx

Ensuring secure communication between your users and Open WebUI is paramount. HTTPS (HyperText Transfer Protocol Secure) encrypts the data transmitted, protecting it from eavesdropping and tampering. By configuring Nginx as a reverse proxy, you can seamlessly add HTTPS to your Open WebUI deployment, enhancing both security and trustworthiness.

This guide provides three methods to set up HTTPS:

- **Self-Signed Certificates**: Ideal for development and internal use, using Docker.
- **Let's Encrypt**: Perfect for production environments requiring trusted SSL certificates, using Docker.
- **Windows + Self-Signed**: Simplified instructions for development and internal use on Windows, no Docker required.

:::danger Critical: Configure CORS for WebSocket Connections

A very common and difficult-to-debug issue with WebSocket connections is a misconfigured Cross-Origin Resource Sharing (CORS) policy. When running Open WebUI behind a reverse proxy like Nginx Proxy Manager, you **must** set the `CORS_ALLOW_ORIGIN` environment variable in your Open WebUI configuration.

Failure to do so will cause WebSocket connections to fail, even if you have enabled "Websockets support" in Nginx Proxy Manager.
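For example, if Open WebUI runs via Docker Compose behind the proxy, the variable might be set like this (the domain is a placeholder for wherever your UI is actually served from):

```yaml
services:
  open-webui:
    environment:
      - "CORS_ALLOW_ORIGIN=https://your-domain.com"
```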

### HTTP/2 and WebSockets

If you enable **HTTP/2** on your Nginx server, ensure that your proxy configuration still uses **HTTP/1.1** for the connection to the Open WebUI backend. This is crucial because most WebUI features (like streaming and real-time updates) rely on WebSockets, which are more stable when handled via the HTTP/1.1 `Upgrade` mechanism than over the newer RFC 8441 (WebSockets over H2) in many proxy environments.

In your Nginx location block, always include:

```nginx
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```

:::

:::danger Critical: Disable Proxy Buffering for SSE Streaming

**This is the most common cause of garbled markdown and broken streaming responses.**

When Nginx's `proxy_buffering` is enabled (the default!), it re-chunks SSE streams arbitrarily. This breaks markdown tokens across chunk boundaries (for example, `**bold**` arrives as `**` + `bold` + `**`), causing corrupted output with visible `##`, `**`, or missing words.

**You MUST include these directives in your Nginx location block:**

```nginx
# CRITICAL: Disable buffering for SSE streaming
proxy_buffering off;
proxy_cache off;
```

**Symptoms if you forget this:**

- Raw markdown tokens visible (`##`, `**`, `###`)
- Bold/heading markers appearing incorrectly
- Words or sections randomly missing from responses
- Streaming works when buffering is disabled, breaks when it is enabled

**Bonus:** Disabling buffering also makes streaming responses **significantly faster**, as content flows directly to the client without Nginx's buffering delay.

:::

Choose the method that best fits your deployment needs.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

import NginxProxyManager from '../tab-nginx/NginxProxyManager.md';
import SelfSigned from '../tab-nginx/SelfSigned.md';
import LetsEncrypt from '../tab-nginx/LetsEncrypt.md';
import Windows from '../tab-nginx/Windows.md';

<!-- markdownlint-disable-next-line MD033 -->
<Tabs>
  <TabItem value="NginxProxyManager" label="Nginx Proxy Manager">
    <NginxProxyManager />
  </TabItem>
  <TabItem value="letsencrypt" label="Let's Encrypt">
    <LetsEncrypt />
  </TabItem>
  <TabItem value="selfsigned" label="Self-Signed">
    <SelfSigned />
  </TabItem>
  <TabItem value="windows" label="Windows">
    <Windows />
  </TabItem>
</Tabs>

## Complete Optimized NGINX Configuration

This section provides a production-ready NGINX configuration optimized for Open WebUI streaming, WebSocket connections, and high-concurrency deployments.

### Upstream Configuration

Define an upstream with keepalive connections to reduce connection setup overhead:

```nginx
upstream openwebui {
    server 127.0.0.1:3000;
    keepalive 128;              # Persistent connections
    keepalive_timeout 1800s;    # 30 minutes
    keepalive_requests 10000;
}
```

### Timeout Configuration

Long-running LLM completions require extended timeouts:

```nginx
location /api/ {
    proxy_connect_timeout 1800;  # 30 minutes
    proxy_send_timeout 1800;
    proxy_read_timeout 1800;
}

# WebSocket connections need even longer timeouts
location ~ ^/(ws/|socket\.io/) {
    proxy_connect_timeout 86400;  # 24 hours
    proxy_send_timeout 86400;
    proxy_read_timeout 86400;
}
```

### Header and Body Size Limits

Prevent errors with large requests or OAuth tokens:

```nginx
# In http {} or server {} block
client_max_body_size 100M;          # Large file uploads
proxy_buffer_size 128k;             # Large headers (OAuth tokens)
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
large_client_header_buffers 4 32k;
```

### Common Streaming Mistakes

| Setting | Impact on Streaming |
|---------|---------------------|
| `gzip on` with `application/json` | 🔴 Buffers responses for compression |
| `proxy_buffering on` | 🔴 Buffers the entire response |
| `proxy_request_buffering on` | 🔴 Should be turned off |
| `tcp_nodelay off` | 🔴 **Most critical:** leaves Nagle's algorithm enabled, delaying small packets by up to ~200ms; set `tcp_nodelay on` so packets are sent immediately |
| `chunked_transfer_encoding on` | 🟡 Can break SSE |
| `proxy_cache` enabled on `/api/` | 🟡 Adds overhead |
| `X-Accel-Buffering "yes"` | 🟡 Set this header to `"no"` for extra safety |
| HTTP/2 | 🟡 If you see streaming issues, lag, or streams ending before the last chunk arrives on the frontend, switching to HTTP/1.1 may help |

### Full Example Configuration

```nginx
upstream openwebui {
    server 127.0.0.1:3000;
    keepalive 128;
    keepalive_timeout 1800s;
    keepalive_requests 10000;
}

server {
    listen 443 ssl http2;
    server_name your-domain.com;

    # SSL configuration...

    # Compression - EXCLUDE streaming content types
    gzip on;
    gzip_types text/plain text/css application/javascript image/svg+xml;
    # DO NOT include: application/json, text/event-stream

    # API endpoints - streaming optimized
    location /api/ {
        proxy_pass http://openwebui;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # CRITICAL: Disable all buffering for streaming
        gzip off;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_cache off;
        tcp_nodelay on;
        add_header X-Accel-Buffering "no" always;
        add_header Cache-Control "no-store" always;

        # Extended timeouts for LLM completions
        proxy_connect_timeout 1800;
        proxy_send_timeout 1800;
        proxy_read_timeout 1800;
    }

    # WebSocket endpoints
    location ~ ^/(ws/|socket\.io/) {
        proxy_pass http://openwebui;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        gzip off;
        proxy_buffering off;
        proxy_cache off;

        # 24-hour timeout for persistent connections
        proxy_connect_timeout 86400;
        proxy_send_timeout 86400;
        proxy_read_timeout 86400;
    }

    # Static assets - CAN buffer and cache
    location /static/ {
        proxy_pass http://openwebui;
        proxy_buffering on;
        proxy_cache_valid 200 7d;
        add_header Cache-Control "public, max-age=604800, immutable";
    }

    # Default location
    location / {
        proxy_pass http://openwebui;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

---

## Caching Configuration

Proper caching significantly improves Open WebUI performance by reducing backend load and speeding up page loads. This section provides guidance for advanced users who want to implement server-side and client-side caching.

### Cache Zones

Define cache zones in your nginx `http` block to store cached responses:

```nginx
# General cache for pages and assets
proxy_cache_path /var/cache/nginx/openwebui levels=1:2
    keys_zone=OPENWEBUI_CACHE:10m max_size=1g inactive=60m use_temp_path=off;

# Dedicated cache for images (profile pictures, model avatars)
proxy_cache_path /var/cache/nginx/openwebui_images levels=1:2
    keys_zone=OPENWEBUI_IMAGES:10m max_size=2g inactive=7d use_temp_path=off;
```

:::note Create Cache Directories

You must create these directories and set proper ownership before nginx can use them:

```bash
sudo mkdir -p /var/cache/nginx/openwebui /var/cache/nginx/openwebui_images
sudo chown -R www-data:www-data /var/cache/nginx
```

Replace `www-data` with your nginx user (check with `ps aux | grep nginx`). Common alternatives: `nginx`, `nobody`.

:::

### What to Cache

| Content Type | Cache Duration | Notes |
|--------------|----------------|-------|
| Static assets (CSS, JS, fonts) | 7-30 days | Use `immutable` for versioned assets |
| Profile/model images | 1 day | Balance freshness vs performance |
| Static files (`/static/`) | 7 days | Favicons, default avatars |
| HTML pages | 5 minutes | Short cache with revalidation |
| Uploaded file content | 1 day | User uploads, generated images |

### What to Never Cache

:::danger Critical: Never Cache Authentication

These paths must **never** be cached, to prevent security issues and broken logins:

- `/api/v1/auths/` - Authentication endpoints
- `/oauth/` - OAuth/SSO callbacks
- `/api/` (general) - Dynamic API responses
- `/ws/` - WebSocket connections

Always include these directives for auth endpoints:

```nginx
proxy_no_cache 1;
proxy_cache_bypass 1;
add_header Cache-Control "no-store, no-cache, must-revalidate";
```

:::

### Example: Image Caching

Profile images and model avatars benefit greatly from caching:

```nginx
# User and model profile images
location ~ ^/api/v1/(users/[^/]+/profile/image|models/model/profile/image)$ {
    proxy_pass http://your_backend;

    proxy_cache OPENWEBUI_IMAGES;
    proxy_cache_valid 200 302 1d;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;
    proxy_cache_key "$request_uri$is_args$args";

    # Force caching even without backend cache headers
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_hide_header Set-Cookie;

    # Client-side caching
    add_header Cache-Control "public, max-age=86400, stale-while-revalidate=604800" always;
    add_header X-Cache-Status $upstream_cache_status always;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

### Example: Static Asset Caching

```nginx
location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|otf|eot)$ {
    proxy_pass http://your_backend;

    proxy_cache OPENWEBUI_CACHE;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;

    add_header Cache-Control "public, max-age=2592000";  # 30 days
    add_header X-Cache-Status $upstream_cache_status;

    etag on;
    if_modified_since exact;
}
```

### Cache Debugging

Add the `X-Cache-Status` header to verify caching is working:

```nginx
add_header X-Cache-Status $upstream_cache_status always;
```

Check the header in your browser's DevTools:

- `HIT` - Served from cache
- `MISS` - Fetched from backend, now cached
- `EXPIRED` - Cache expired, refreshed
- `BYPASS` - Cache intentionally skipped
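You can also check from the command line; requesting the same cached asset twice with `curl` should show the status flip from `MISS` to `HIT` (the URL here is illustrative):

```bash
curl -sI https://your-domain.com/static/favicon.png | grep -i x-cache-status
curl -sI https://your-domain.com/static/favicon.png | grep -i x-cache-status
```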

### Trade-offs

:::warning Cache Invalidation

When images are cached aggressively, users may not see immediate updates after changing their profile picture. Consider:

- **Shorter cache times** (e.g., 1 hour) if users frequently update images
- **Longer cache times** (e.g., 1 day) for better performance in stable deployments
- The cache can be cleared manually with `rm -rf /var/cache/nginx/openwebui_images/*`

:::

---

## Next Steps

After setting up HTTPS, access Open WebUI securely at:

- [https://localhost](https://localhost)

Ensure that your DNS records are correctly configured if you're using a domain name. For production environments, it's recommended to use Let's Encrypt for trusted SSL certificates.

---