NGINX CRITICAL BUFFERING CACHE PROXY

This commit is contained in:
DrMelone
2026-01-11 18:04:38 +01:00
parent 201fdaf102
commit e005a46a51
7 changed files with 98 additions and 0 deletions


@@ -14,6 +14,7 @@ If you're experiencing connectivity problems with Open WebUI, especially when us
You might be experiencing these issues if you see:
- Empty responses like `"{}"` in the chat
- Errors like `"Unexpected token 'd', "data: {"id"... is not valid JSON"`
- Garbled markdown (visible `##`, `**`, broken formatting) during streaming—see [Streaming Response Corruption](#-garbled-markdown--streaming-response-corruption)
- WebSocket connection failures in browser console
- WebSocket connection failures in CLI logs
- Login problems or session issues
@@ -108,6 +109,57 @@ To verify your setup is working:
- ✓ Set proper `X-Forwarded-Proto` headers in your reverse proxy
- ✓ Ensure HTTP to HTTPS redirects are in place
- ✓ Configure Let's Encrypt for automatic certificate renewal
- ✓ Disable proxy buffering for SSE streaming (see below)
## 📝 Garbled Markdown / Streaming Response Corruption
If streaming responses show garbled markdown rendering (e.g., visible `##`, `**`, or broken formatting), but disabling streaming fixes the issue, this is typically caused by **nginx proxy buffering**.
### Common Symptoms
- Raw markdown tokens visible in responses (`##`, `**`, `###`)
- Bold markers appearing incorrectly (`** Control:**` instead of `**Control:**`)
- Words or sections randomly missing from responses
- Formatting works correctly when streaming is disabled
### Cause: Nginx Proxy Buffering
When nginx's proxy buffering is enabled, it re-chunks the SSE (Server-Sent Events) stream arbitrarily. This breaks markdown tokens across chunk boundaries—for example, `**bold**` becomes separate chunks `**` + `bold` + `**`, causing the markdown parser to fail.
### Solution: Disable Proxy Buffering
Add these directives to your nginx location block for Open WebUI:
```nginx
location / {
proxy_pass http://your-open-webui-upstream;
# CRITICAL: Disable buffering for SSE streaming
proxy_buffering off;
proxy_cache off;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Standard proxy headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
```
:::tip
Disabling proxy buffering also **significantly improves streaming speed**, as responses stream byte-by-byte directly to the client without nginx's buffering delay.
:::
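If you would rather keep buffering enabled for the rest of your site, one option is to scope the unbuffered configuration to just the streaming endpoints. The sketch below is an assumption-laden example, not a verified Open WebUI route map: the `/api/` prefix and the upstream name are placeholders you would adjust to your own setup.

```nginx
# Sketch: disable buffering only where SSE streams; the /api/ prefix
# and upstream name are assumptions — adjust to your deployment.
location /api/ {
    proxy_pass http://your-open-webui-upstream;
    proxy_buffering off;
    proxy_cache off;

    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location / {
    proxy_pass http://your-open-webui-upstream;
    # Buffering can remain enabled here for non-streaming responses.
}
```

Note that splitting locations this way only helps if the streaming endpoints really are isolated under one path prefix; when in doubt, disabling buffering for the whole server block (as shown above) is the safer choice.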
### For Other Reverse Proxies
- **HAProxy**: Ensure `option http-buffer-request` is not enabled for SSE endpoints
- **Traefik**: Check compression/buffering middleware settings
- **Caddy**: Generally handles SSE correctly by default, but check for any buffering plugins
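For HAProxy specifically, the point above can be sketched as a backend stanza. This is a hedged example: the backend name and server address are placeholders, and `no option ...` is HAProxy's standard syntax for explicitly disabling an option that might be inherited from `defaults`.

```haproxy
# Sketch: ensure request buffering is not forced on the SSE backend.
# Backend name and server address are placeholders.
backend open_webui
    no option http-buffer-request
    server webui 127.0.0.1:8080
```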
## 🌟 Connection to Ollama Server


@@ -152,6 +152,7 @@ If you are deploying for **enterprise scale** (hundreds of users), simple Docker
* **Redis (Mandatory)**: When running multiple workers (`UVICORN_WORKERS > 1`) or multiple replicas, **Redis is required** to handle WebSocket connections and session syncing. See **[Redis Integration](/tutorials/integrations/redis)**.
* **Load Balancing**: Ensure your Ingress controller supports **Session Affinity** (Sticky Sessions) for best performance.
* **Reverse Proxy Caching**: Configure your reverse proxy (e.g., Nginx, Caddy, Cloudflare) to **cache static assets** (JS, CSS, Images). This significantly reduces load on the application server. See **[Nginx Config](/tutorials/https/nginx)** or **[Caddy Config](/tutorials/https/caddy)**.
* **Disable Proxy Buffering (Critical for Streaming)**: If using Nginx, you **must** disable `proxy_buffering` for Open WebUI. Proxy buffering re-chunks SSE streams, causing garbled markdown and slow streaming. Add `proxy_buffering off;` and `proxy_cache off;` to your location block. See **[Streaming Troubleshooting](/troubleshooting/connection-error#-garbled-markdown--streaming-response-corruption)**.
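The static-asset caching point above might look like the following in Nginx. This is a sketch under stated assumptions: the cache path, zone name, asset extension list, and durations are all placeholders, and the upstream name is hypothetical.

```nginx
# Sketch: cache static assets while leaving app/SSE paths unbuffered.
# Cache path, zone name, regex, and durations are assumptions.
proxy_cache_path /var/cache/nginx/webui keys_zone=webui_static:10m max_size=1g;

location ~* \.(js|css|png|jpg|svg|woff2)$ {
    proxy_pass http://your-open-webui-upstream;
    proxy_cache webui_static;
    proxy_cache_valid 200 10m;
}
```

Because the cache applies only to the static-asset `location`, it does not conflict with the `proxy_buffering off; proxy_cache off;` directives required on the streaming paths.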
---


@@ -38,6 +38,30 @@ proxy_set_header Connection "upgrade";
:::
:::danger Critical: Disable Proxy Buffering for SSE Streaming
**This is the most common cause of garbled markdown and broken streaming responses.**
When Nginx's `proxy_buffering` is enabled (the default!), it re-chunks SSE streams arbitrarily. This breaks markdown tokens across chunk boundaries—for example, `**bold**` becomes `**` + `bold` + `**`—causing corrupted output with visible `##`, `**`, or missing words.
**You MUST include these directives in your Nginx location block:**
```nginx
# CRITICAL: Disable buffering for SSE streaming
proxy_buffering off;
proxy_cache off;
```
**Symptoms if you forget this:**
- Raw markdown tokens visible (`##`, `**`, `###`)
- Bold/heading markers appearing incorrectly
- Words or sections randomly missing from responses
- Streaming works perfectly when disabled, breaks when enabled
**Bonus:** Disabling buffering also makes streaming responses **significantly faster**, as content flows directly to the client without Nginx's buffering delay.
:::
Choose the method that best fits your deployment needs.
import Tabs from '@theme/Tabs';


@@ -224,6 +224,7 @@ With the certificate saved in your `ssl` directory, you can now update the Nginx
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 10m;
proxy_buffering off;
proxy_cache off;
client_max_body_size 20M;
proxy_no_cache 1;
@@ -257,6 +258,7 @@ With the certificate saved in your `ssl` directory, you can now update the Nginx
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 10m;
proxy_buffering off;
proxy_cache off;
client_max_body_size 20M;
add_header Cache-Control "public, max-age=300, must-revalidate";


@@ -79,6 +79,21 @@ Failure to do so will cause WebSocket connections to fail, even if you have enab
:::
:::danger Critical: Disable Proxy Buffering for Streaming
**This is the most common cause of garbled markdown and broken streaming responses.**
In Nginx Proxy Manager, go to your proxy host → **Advanced** tab → and add these directives to the **Custom Nginx Configuration** field:
```nginx
proxy_buffering off;
proxy_cache off;
```
Without this, Nginx re-chunks SSE streams, breaking markdown formatting (visible `##`, `**`, missing words). This also makes streaming responses significantly faster.
:::
:::tip Caching Best Practice
While Nginx Proxy Manager handles most configuration automatically, be aware that:


@@ -36,6 +36,7 @@ Using self-signed certificates is suitable for development or internal use where
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_cache off;
client_max_body_size 20M;
proxy_read_timeout 10m;
@@ -68,6 +69,7 @@ Using self-signed certificates is suitable for development or internal use where
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_cache off;
client_max_body_size 20M;
proxy_read_timeout 10m;


@@ -113,6 +113,7 @@ http {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_cache off;
client_max_body_size 20M;
proxy_read_timeout 10m;
@@ -142,6 +143,7 @@ http {
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_cache off;
client_max_body_size 20M;
proxy_read_timeout 10m;