mirror of
https://github.com/open-webui/docs.git
synced 2025-12-12 07:29:49 +07:00
Merge pull request #871 from open-webui/main
@@ -38,17 +38,18 @@ Explore how other organizations are driving real impact with Open WebUI.

## Let’s Talk

:::tip

To help us respond quickly and efficiently to your inquiry, **please use your official work email address.** Personal email accounts (e.g. gmail.com, hotmail.com, proton.me, icloud.com, yahoo.com, etc.) are often flagged by our system and will not be answered.

:::

**sales@openwebui.com** — Send us your deployment **end user count (seats)**, and let’s explore how we can work together!

:::info

Enterprise licenses and partnership opportunities are available **exclusively to registered entities and organizations**. At this time, we are unable to accommodate individual users. We appreciate your understanding and interest.

To help us respond quickly and efficiently to your inquiry, **please use your official work email address.** Personal email accounts (e.g. gmail.com, hotmail.com, icloud.com, yahoo.com, etc.) are often flagged by our system and will not be answered.

**We are not seeking, open to, or in need of any form of disguised collaboration or contribution pitches/offers.** Any attempt to characterize engineering assistance, co-development, or roadmap input **as an alternative to our enterprise licensing program will not receive a response**. All organizational engagement with Open WebUI occurs **solely and exclusively through enterprise licensing**.

:::

Take your AI strategy to the next level with our **premium enterprise solutions**, crafted for organizations that demand **expert consulting, tailored deployment, and dedicated support.**

:::warning
@@ -34,7 +34,7 @@ Chat permissions determine what actions users can perform within chat conversati

Features permissions control access to specialized capabilities within Open WebUI:

- **Web Search**: Toggle to allow users to perform web searches during chat sessions. (Environment variable: `ENABLE_WEB_SEARCH`)
- **Image Generation**: Toggle to allow users to generate images. (Environment variable: `ENABLE_IMAGE_GENERATION`)
- **Code Interpreter**: Toggle to allow users to use the code interpreter feature. (Environment variable: `USER_PERMISSIONS_FEATURES_CODE_INTERPRETER`)
- **Direct Tool Servers**: Toggle to allow users to connect directly to tool servers. (Environment variable: `USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS`)
@@ -60,7 +60,7 @@ By default, Open WebUI applies the following permission settings:

**Features Permissions**:

- Web Search: Enabled (`ENABLE_WEB_SEARCH=True`)
- Image Generation: Enabled (`ENABLE_IMAGE_GENERATION=True`)
- Code Interpreter: Enabled (`USER_PERMISSIONS_FEATURES_CODE_INTERPRETER=True`)
- Direct Tool Servers: Disabled (`USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS=False`)
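
These defaults can also be set at container start instead of through the Admin Panel. A minimal `docker run` sketch, not a prescriptive deployment command; the port mapping, volume name, and chosen values are illustrative, and precedence between environment variables and persisted settings should be checked against the environment variable reference:

```shell
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e ENABLE_WEB_SEARCH=True \
  -e ENABLE_IMAGE_GENERATION=True \
  -e USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS=False \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

This is a deployment config fragment rather than a runnable example.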
@@ -21,15 +21,15 @@ To configure SearXNG optimally for use with Open WebUI, follow these steps:

**Step 1: `git clone` SearXNG Docker and navigate to the folder:**

1. Clone the repository `searxng-docker`

Clone the searxng-docker repository. This will create a new directory called `searxng-docker`, which will contain your SearXNG configuration files. Refer to the [SearXNG documentation](https://docs.searxng.org/) for configuration instructions.

```bash
git clone https://github.com/searxng/searxng-docker.git
```

Navigate to the `searxng-docker` repository, and run all commands from there:

```bash
cd searxng-docker
```
@@ -49,7 +49,7 @@ cd searxng-docker

# * uncomment LETSENCRYPT_EMAIL, and replace <email> with your email (required to create a Let's Encrypt certificate)

SEARXNG_HOSTNAME=localhost

# LETSENCRYPT_EMAIL=<email>
@@ -68,8 +68,18 @@ SEARXNG_HOSTNAME=localhost:8080/

3. Remove the `localhost` restriction by modifying the `docker-compose.yaml` file:

If port 8080 is already in use, change `0.0.0.0:8080` to `0.0.0.0:[available port]` in the command before running it.

Run the appropriate command for your operating system:

- **Linux**

```bash
sed -i 's/127.0.0.1:8080/0.0.0.0:8080/' docker-compose.yaml
```

- **macOS**

```bash
sed -i '' 's/127.0.0.1:8080/0.0.0.0:8080/' docker-compose.yaml
```
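
If you want to see exactly what the substitution does before touching your real compose file, you can rehearse it on a scratch copy (the `/tmp` path and file contents below are illustrative stand-ins):

```shell
# scratch stand-in for the relevant line of docker-compose.yaml
printf 'ports:\n  - "127.0.0.1:8080:8080"\n' > /tmp/compose-sample.yaml

# same substitution as the Linux command above, applied to the scratch copy
sed -i 's/127.0.0.1:8080/0.0.0.0:8080/' /tmp/compose-sample.yaml

# the bind address is now 0.0.0.0
grep -n '0.0.0.0:8080' /tmp/compose-sample.yaml
```

Only the first `127.0.0.1:8080` on the line is rewritten, which is exactly the host-side half of the port mapping.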
**Step 4: Grant Necessary Permissions**
@@ -77,13 +87,15 @@ sed -i "s/127.0.0.1:8080/0.0.0.0:8080/"

4. Allow the container to create new config files by running the following command in the root directory:

```bash
sudo chmod a+rwx searxng
```

**Step 5: Create a Non-Restrictive `limiter.toml` File**

5. Create a non-restrictive `searxng-docker/searxng/limiter.toml` config file:

*If the file already exists, append the missing lines to it.*

<!-- markdownlint-disable-next-line MD033 -->
<details>
<!-- markdownlint-disable-next-line MD033 -->
@@ -112,172 +124,74 @@ pass_ip = []

6. Delete the default `searxng-docker/searxng/settings.yml` file if it exists, as it will be regenerated on the first launch of SearXNG:

```bash
rm searxng/settings.yml
```

**Step 7: Create a Fresh `settings.yml` File**

:::note

On the first run, you must remove `cap_drop: - ALL` from the `docker-compose.yaml` file for the `searxng` service to successfully create `/etc/searxng/uwsgi.ini`. This is necessary because the `cap_drop: - ALL` directive removes all capabilities, including those required to create the `uwsgi.ini` file. After the first run, you should re-add `cap_drop: - ALL` to the `docker-compose.yaml` file for security reasons.

:::

7. Bring up the container momentarily to generate a fresh settings.yml file:

If you have multiple containers running with the same name, such as caddy, redis, or searxng, you need to rename them in the `docker-compose.yaml` file to avoid conflicts.

```bash
docker compose up -d ; sleep 10 ; docker compose down
```

After the initial run, add `cap_drop: - ALL` back to the `docker-compose.yaml` file for security reasons.

If Open WebUI is running in the same Docker network as SearXNG, you may remove the `0.0.0.0` and only specify the port mapping. In this case, Open WebUI can access SearXNG directly using the container name.

<details>
<summary>docker-compose.yaml</summary>

```yaml
searxng:
  container_name: searxng
  image: docker.io/searxng/searxng:latest
  restart: unless-stopped
  networks:
    - searxng
  ports:
    - "0.0.0.0:8080:8080" # use 8080:8080 if containers are in the same Docker network
  volumes:
    - ./searxng:/etc/searxng:rw
    - searxng-data:/var/cache/searxng:rw
  environment:
    - SEARXNG_BASE_URL=https://${SEARXNG_HOSTNAME:-localhost}/
  logging:
    driver: "json-file"
    options:
      max-size: "1m"
      max-file: "1"
  cap_drop:
    - ALL
```

</details>

**Step 8: Add Formats**

8. Add HTML and JSON formats to the `searxng-docker/searxng/settings.yml` file:

- **Linux**

```bash
sed -i 's/- html/- html\n    - json/' searxng/settings.yml
```

- **macOS**

```bash
sed -i '' 's/- html/- html\n    - json/' searxng/settings.yml
```
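
As with the compose file edit, the substitution can be rehearsed on a scratch extract first (the `/tmp` path and minimal contents are illustrative; the indentation in the replacement keeps the inserted `- json` aligned with `- html`):

```shell
# minimal stand-in for the formats section of settings.yml
printf 'search:\n  formats:\n    - html\n' > /tmp/settings-sample.yml

# insert "- json" after "- html" (GNU sed, as in the Linux command above)
sed -i 's/- html/- html\n    - json/' /tmp/settings-sample.yml

cat /tmp/settings-sample.yml
```

The resulting file lists both formats under `formats:`, which is what Open WebUI's JSON-based queries require.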

**Step 9: Run the Server**

9. Start the container with the following command:

```bash
docker compose up -d
```
#### Configuration Files

#### searxng-docker/searxng/settings.yml (Extract)

The default `settings.yml` file contains many engine settings. Below is an extract of what the default `settings.yml` file might look like:

<!-- markdownlint-disable-next-line MD033 -->
<details>
<!-- markdownlint-disable-next-line MD033 -->
<summary>searxng-docker/searxng/settings.yml</summary>
```yaml
# see https://docs.searxng.org/admin/settings/settings.html#settings-use-default-settings
use_default_settings: true

server:
  # base_url is defined in the SEARXNG_BASE_URL environment variable, see .env and docker-compose.yml
  secret_key: "ultrasecretkey" # change this!
  limiter: true # can be disabled for a private instance
  image_proxy: true
  port: 8080
  bind_address: "0.0.0.0"

ui:
  static_use_hash: true

search:
  safe_search: 0
  autocomplete: ""
  default_lang: ""
  formats:
    - html
    - json # json is required
  # remove format to deny access, use lower case.
  # formats: [html, csv, json, rss]

redis:
  # URL to connect redis database. Is overwritten by ${SEARXNG_REDIS_URL}.
  # https://docs.searxng.org/admin/settings/settings_redis.html#settings-redis
  url: redis://redis:6379/0
```

The port in the SearXNG `settings.yml` file should match the port number in your `docker-compose.yaml` file.

</details>

**Step 10: Update `uwsgi.ini` File**

10. Ensure your `searxng-docker/searxng/uwsgi.ini` file matches the following:
<!-- markdownlint-disable-next-line MD033 -->
|
||||
<details>
|
||||
<!-- markdownlint-disable-next-line MD033 -->
|
||||
<summary>searxng-docker/searxng/uwsgi.ini</summary>
|
||||
|
||||
```ini
[uwsgi]

# Who will run the code
uid = searxng
gid = searxng

# Number of workers (usually CPU count)
# default value: %k (= number of CPU core, see Dockerfile)
workers = %k

# Number of threads per worker
# default value: 4 (see Dockerfile)
threads = 4

# The right granted on the created socket
chmod-socket = 666

# Plugin to use and interpreter config
single-interpreter = true
master = true
plugin = python3
lazy-apps = true
enable-threads = 4

# Module to import
module = searx.webapp

# Virtualenv and python path
pythonpath = /usr/local/searxng/
chdir = /usr/local/searxng/searx/

# automatically set processes name to something meaningful
auto-procname = true

# Disable request logging for privacy
disable-logging = true
log-5xx = true

# Set the max size of a request (request-body excluded)
buffer-size = 8192

# No keep alive
# See https://github.com/searx/searx-docker/issues/24
add-header = Connection: close

# uwsgi serves the static files
static-map = /static=/usr/local/searxng/searx/static

# expires set to one day
static-expires = /* 86400
static-gzip-all = True
offload-threads = 4
```

</details>

SearXNG will now be available at http://localhost:8080 (or the port number you set earlier).

## 2. Alternative Setup
@@ -412,6 +326,7 @@ docker exec -it open-webui curl http://host.docker.internal:8080/search?q=this+i

3. Set `Web Search Engine` from the dropdown menu to `searxng`
4. Set `Searxng Query URL` to one of the following examples:

- `http://localhost:8080/search?q=<query>` (using the host and host port, suitable for Docker-based setups)
- `http://searxng:8080/search?q=<query>` (using the container name and exposed port, suitable for Docker-based setups)
- `http://host.docker.internal:8080/search?q=<query>` (using the `host.docker.internal` DNS name and the host port, suitable for Docker-based setups)
- `http://<searxng.local>/search?q=<query>` (using a local domain name, suitable for local network access)
@@ -426,7 +341,7 @@ docker exec -it open-webui curl http://host.docker.internal:8080/search?q=this+i

## 5. Using Web Search in a Chat

To access Web Search, click the Integrations button next to the + icon.

Here you can toggle Web Search on or off.
@@ -94,7 +94,7 @@ From the main list view in the `Models` section, click the ellipsis (`...`) next

- **Clone**: Create a copy of a model configuration, which will be appended with `-clone`.

:::note
A raw Base Model can be cloned as a custom Workspace model, but it will not clone the raw Base Model itself.
:::

- **Copy Link**: Copies a direct URL to the model settings.
908 docs/troubleshooting/manual-database-migration.md (new file)
@@ -0,0 +1,908 @@
---
sidebar_position: 900
title: Manual Alembic Database Migration
sidebar_label: Manual Migration
description: Complete guide for manually running Alembic database migrations when Open WebUI's automatic migration fails or requires direct intervention.
keywords: [alembic, migration, database, troubleshooting, sqlite, postgresql, docker]
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Overview

Open WebUI automatically runs database migrations on startup. **Manual migration is rarely needed** and should only be performed in specific failure scenarios or maintenance situations.

:::info When Manual Migration is Required
You need manual migration only if:

- Open WebUI logs show specific migration errors during startup
- You're performing offline database maintenance
- Automatic migration fails after a version upgrade
- You're migrating between database types (SQLite ↔ PostgreSQL)
- A developer has instructed you to run migrations manually
:::

:::danger Critical Warning
Manual migration can corrupt your database if performed incorrectly. **Always create a verified backup before proceeding.**
:::

## Prerequisites Checklist

Before starting, ensure you have:

- [ ] **Root/admin access** to your Open WebUI installation
- [ ] **Database location confirmed** (default: `/app/backend/data/webui.db` in Docker)
- [ ] **Open WebUI completely stopped** (no running processes)
- [ ] **Backup created and verified** (see below)
- [ ] **Access to container or Python environment** where Open WebUI runs

:::warning Stop All Processes First
Database migrations cannot run while Open WebUI is active. You **must** stop all Open WebUI processes before attempting manual migration.
:::

## Step 1: Create and Verify Backup

### Backup Your Database

<Tabs groupId="database-type">
<TabItem value="sqlite" label="SQLite (Default)" default>

```bash title="Terminal"
# Find your database location first
docker inspect open-webui | grep -A 5 Mounts

# Create timestamped backup
cp /path/to/webui.db /path/to/webui.db.backup.$(date +%Y%m%d_%H%M%S)
```

</TabItem>
<TabItem value="postgresql" label="PostgreSQL">

```bash title="Terminal"
pg_dump -h localhost -U your_user -d open_webui_db > backup_$(date +%Y%m%d_%H%M%S).sql
```

</TabItem>
</Tabs>

### Verify Backup Integrity

**Critical:** Test that your backup is readable before proceeding.

<Tabs groupId="database-type">
<TabItem value="sqlite" label="SQLite" default>

```bash title="Terminal - Verify Backup"
# Test backup can be opened
sqlite3 /path/to/webui.db.backup "SELECT count(*) FROM user;"

# Verify schema matches
sqlite3 /path/to/webui.db ".schema" > current-schema.sql
sqlite3 /path/to/webui.db.backup ".schema" > backup-schema.sql
diff current-schema.sql backup-schema.sql
```

</TabItem>
<TabItem value="postgresql" label="PostgreSQL">

```bash title="Terminal - Verify Backup"
# Verify backup file is not empty and contains SQL
head -n 20 backup_*.sql
grep -c "CREATE TABLE" backup_*.sql
```

</TabItem>
</Tabs>

:::tip Backup Storage
Store backups on a **different disk or volume** than your database to protect against disk failure.
:::
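
A row count proves the file opens, but SQLite can also walk every page and index looking for corruption via `PRAGMA integrity_check`. A sketch on a throwaway database (the `/tmp` path and table are stand-ins for your real backup file):

```shell
# start clean, then build a tiny stand-in database
rm -f /tmp/webui-backup-demo.db
sqlite3 /tmp/webui-backup-demo.db \
  "CREATE TABLE user(id INTEGER PRIMARY KEY); INSERT INTO user VALUES (1);"

# "ok" means every page and index in the file is sound
sqlite3 /tmp/webui-backup-demo.db "PRAGMA integrity_check;"
```

Run the same pragma against your `.backup` file; any output other than `ok` means the backup itself is damaged and should not be relied on.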

## Step 2: Diagnose Current State

Before attempting any fixes, gather information about your database state.

### Access Your Environment

<Tabs groupId="install-type">
<TabItem value="docker" label="Docker" default>

```bash title="Terminal"
# Stop Open WebUI first
docker stop open-webui

# Enter container for diagnostics
docker run --rm -it \
  -v open-webui:/app/backend/data \
  --entrypoint /bin/bash \
  ghcr.io/open-webui/open-webui:main
```

:::note Verify Your Location
Check where you are after entering the container:

```bash
pwd
```

The Alembic configuration is at `/app/backend/open_webui/alembic.ini`. Navigate there regardless of your starting directory.
:::

</TabItem>
<TabItem value="local" label="Local Install">

```bash title="Terminal"
# Navigate to Open WebUI installation
cd /path/to/open-webui/backend/open_webui

# Activate virtual environment if used
source ../../venv/bin/activate  # Linux/Mac
# venv\Scripts\activate         # Windows
```

</TabItem>
</Tabs>

### Navigate to Alembic Directory and Set Environment

Navigate to the directory containing `alembic.ini` and configure the required environment variables:

```bash title="Terminal - Navigate and Configure Environment"
# First, verify where you are
pwd

# Navigate to the Alembic directory (adjust path if your pwd is different)
cd /app/backend/open_webui  # Docker
# OR
cd /path/to/open-webui/backend/open_webui  # Local

# Verify alembic.ini exists in current directory
ls -la alembic.ini
```

### Set Required Environment Variables

<Tabs groupId="install-type">
<TabItem value="docker" label="Docker" default>

```bash title="Terminal - Set Environment Variables (Docker)"
# Required: Database URL
# For SQLite (4 slashes for absolute path)
export DATABASE_URL="sqlite:////app/backend/data/webui.db"

# For PostgreSQL
export DATABASE_URL="postgresql://user:password@localhost:5432/open_webui_db"

# Required: WEBUI_SECRET_KEY
# Get from existing file
export WEBUI_SECRET_KEY=$(cat /app/backend/data/.webui_secret_key)

# If .webui_secret_key doesn't exist, generate one
# export WEBUI_SECRET_KEY=$(python3 -c "import secrets; print(secrets.token_hex(32))")
# echo $WEBUI_SECRET_KEY > /app/backend/data/.webui_secret_key

# Verify both are set
echo "DATABASE_URL: $DATABASE_URL"
echo "WEBUI_SECRET_KEY: ${WEBUI_SECRET_KEY:0:10}..."
```

</TabItem>
<TabItem value="local" label="Local Install">

```bash title="Terminal - Set Environment Variables (Local)"
# Required: Database URL
# For SQLite (relative path from backend/open_webui directory)
export DATABASE_URL="sqlite:///../data/webui.db"

# For absolute path
export DATABASE_URL="sqlite:////full/path/to/webui.db"

# For PostgreSQL
export DATABASE_URL="postgresql://user:password@localhost:5432/open_webui_db"

# Required: WEBUI_SECRET_KEY
# If using a .env file, Alembic may not pick it up automatically - export manually
export WEBUI_SECRET_KEY=$(cat ../data/.webui_secret_key)

# Or if you have it in your environment already
# export WEBUI_SECRET_KEY="your-existing-key"

# Verify both are set
echo "DATABASE_URL: $DATABASE_URL"
echo "WEBUI_SECRET_KEY: ${WEBUI_SECRET_KEY:0:10}..."
```

:::note Local Installation Environment
Local installations often have `DATABASE_URL` in a `.env` file, but Alembic's `env.py` may not automatically load `.env` files. You must explicitly export these variables in your shell before running Alembic commands.
:::

</TabItem>
</Tabs>

:::danger Both Variables Required
Alembic commands will fail with `Required environment variable not found` if `WEBUI_SECRET_KEY` is missing. Open WebUI's code imports `env.py`, which validates that this variable exists before Alembic can even connect to the database.
:::

:::warning Path Syntax for SQLite

- `sqlite:////app/...` = 4 slashes total (absolute path: `sqlite://` + `/` + `/app/...`)
- `sqlite:///../data/...` = 3 slashes total (relative path)
:::
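
The slash counting is easier to see if you strip the `sqlite:///` scheme prefix and look at what path is left over; a quick shell sketch:

```shell
# absolute: "sqlite:///" followed by "/app/..." makes 4 slashes in a row
url="sqlite:////app/backend/data/webui.db"
echo "${url#sqlite:///}"   # prints /app/backend/data/webui.db

# relative: "sqlite:///" followed by "../data/..." makes 3 slashes in a row
url="sqlite:///../data/webui.db"
echo "${url#sqlite:///}"   # prints ../data/webui.db
```

Whatever remains after the prefix is the filesystem path SQLAlchemy will try to open, so a wrong slash count silently points Alembic at a different (usually empty) database file.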

### Run Diagnostic Commands

Execute these read-only diagnostic commands:

```bash title="Terminal - Diagnostics (Safe - Read Only)"
# Check current migration version
alembic current -v

# Check target (latest) version
alembic heads

# List all migration history
alembic history

# Show pending migrations (what would be applied)
alembic upgrade head --sql | head -30

# Check for branching (indicates issues)
alembic branches
```

**Expected output:**

```
# alembic current should show something like:
ae1027a6acf (head)

# If you see multiple heads or branching, your migration history has issues
```

:::info Understanding Output

- `alembic current` = what version your database thinks it's at
- `alembic heads` = what version the code expects
- `alembic upgrade head --sql` = preview of the SQL that would be executed (doesn't apply changes)
- If `current` is older than `heads`, you have pending migrations
- If `current` equals `heads`, your database is up-to-date
:::

<details>
<summary>Check Actual Database Tables</summary>

Verify what's actually in your database:

<Tabs groupId="database-type">
<TabItem value="sqlite" label="SQLite" default>

```bash title="Terminal"
sqlite3 /app/backend/data/webui.db ".tables"
sqlite3 /app/backend/data/webui.db "SELECT * FROM alembic_version;"
```

</TabItem>
<TabItem value="postgresql" label="PostgreSQL">

```bash title="Terminal"
psql -h localhost -U user -d dbname -c "\dt"
psql -h localhost -U user -d dbname -c "SELECT * FROM alembic_version;"
```

</TabItem>
</Tabs>

</details>
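
The `alembic_version` table queried above is deliberately tiny: one column, one row, holding the current revision id. A stand-in demo (the database path and revision id are made up for illustration):

```shell
rm -f /tmp/alembic-demo.db

# same shape Alembic uses: a single version_num column with one row
sqlite3 /tmp/alembic-demo.db \
  "CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL);
   INSERT INTO alembic_version VALUES ('ae1027a6acf');"

sqlite3 /tmp/alembic-demo.db "SELECT * FROM alembic_version;"
```

The value stored here is the database-side half of the comparison that `alembic current` reports.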

## Step 3: Apply Migrations

### Standard Upgrade (Most Common)

If diagnostics show you have pending migrations (`current` < `heads`), upgrade to the latest:

```bash title="Terminal - Upgrade to Latest"
# Ensure you're in the correct directory
cd /app/backend/open_webui

# Run upgrade
alembic upgrade head
```

**Watch for these outputs:**

```bash
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
# highlight-next-line
INFO  [alembic.runtime.migration] Running upgrade abc123 -> def456, add_new_column
```

:::note "Will assume non-transactional DDL"
This is a **normal informational message** for SQLite, not an error. SQLite doesn't support rollback of schema changes, so migrations run without transaction protection.

If the process appears to hang after this message, wait 2-3 minutes - some migrations take time, especially:

- Migrations that add indexes to large tables (1M+ rows: 1-5 minutes)
- Migrations with data transformations (100K+ rows: 30 seconds to several minutes)
- Migrations that rebuild tables (SQLite doesn't support all ALTER operations)

For very large databases (10M+ rows), consider running migrations during a maintenance window and monitoring progress with `sqlite3 /path/to/webui.db ".tables"` in another terminal.
:::

### Upgrade to Specific Version

If you need to apply migrations up to a specific point:

```bash title="Terminal - Upgrade to Specific Version"
# List available versions first
alembic history

# Upgrade to specific revision
alembic upgrade ae1027a6acf
```

### Downgrade (Rollback)

:::danger Data Loss Risk
Downgrading can cause **permanent data loss** if the migration removed columns or tables. Only downgrade if you understand the consequences.
:::

```bash title="Terminal - Downgrade Migrations"
# Downgrade one version
alembic downgrade -1

# Downgrade to specific version
alembic downgrade <revision_id>

# Nuclear option: Remove all migrations (rarely needed)
alembic downgrade base
```

## Step 4: Verify Migration Success

After running migrations, confirm everything is correct:

```bash title="Terminal - Post-Migration Verification"
# Verify current version matches expected
alembic current
# Should show (head) indicating you're at latest
# Example: ae1027a6acf (head)

# Confirm no pending migrations
alembic upgrade head --sql | head -20
# If output contains only comments or is empty, you're up to date

# Verify key tables exist (SQLite)
sqlite3 /app/backend/data/webui.db ".tables" | grep -E "user|chat|model"
# Should show user, chat, model tables among others

# Test a simple query to ensure schema is intact
sqlite3 /app/backend/data/webui.db "SELECT COUNT(*) FROM user;"
# Should return a number, not an error
```

### Test Application Startup

<Tabs groupId="install-type">
<TabItem value="docker" label="Docker" default>

```bash title="Terminal"
# Exit the diagnostic container
exit

# Start Open WebUI normally
docker start open-webui

# Watch logs for migration confirmation
docker logs -f open-webui

# Look for successful startup, then test in browser
# Navigate to http://localhost:8080 and verify the login page loads
```

</TabItem>
<TabItem value="local" label="Local Install">

```bash title="Terminal"
# Start Open WebUI
python -m open_webui.main

# Watch for successful startup messages
# Test by navigating to http://localhost:8080
```

</TabItem>
</Tabs>

**Successful startup logs:**

```
INFO: [db] Database initialization complete
INFO: [main] Open WebUI starting on http://0.0.0.0:8080
```

**Smoke test after startup:**

- Can access login page
- Can log in with existing credentials
- Can view chat history
- No JavaScript console errors

## Troubleshooting

### "Required environment variable not found"

**Cause:** The `WEBUI_SECRET_KEY` environment variable is missing.

**Solution:**

<Tabs groupId="install-type">
<TabItem value="docker" label="Docker" default>

```bash title="Terminal - Fix Missing Secret Key (Docker)"
# Method 1: Use existing key from file
export WEBUI_SECRET_KEY=$(cat /app/backend/data/.webui_secret_key)

# Method 2: If the file doesn't exist, generate a new key
export WEBUI_SECRET_KEY=$(python3 -c "import secrets; print(secrets.token_hex(32))")
echo $WEBUI_SECRET_KEY > /app/backend/data/.webui_secret_key

# Verify it's set
echo "WEBUI_SECRET_KEY: ${WEBUI_SECRET_KEY:0:10}..."

# Try alembic again
alembic current -v
```

</TabItem>
<TabItem value="local" label="Local Install">

```bash title="Terminal - Fix Missing Secret Key (Local)"
# Method 1: Use existing key from file
export WEBUI_SECRET_KEY=$(cat ../data/.webui_secret_key)

# Method 2: Check if it's in your .env file
grep WEBUI_SECRET_KEY .env
# Then export it: export WEBUI_SECRET_KEY="value-from-env-file"

# Verify it's set
echo "WEBUI_SECRET_KEY: ${WEBUI_SECRET_KEY:0:10}..."

# Try alembic again
alembic current -v
```

</TabItem>
</Tabs>
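
`secrets.token_hex(32)` yields 32 random bytes rendered as 64 lowercase hex characters, which gives you a cheap sanity check on whatever ends up in `WEBUI_SECRET_KEY`:

```shell
key=$(python3 -c "import secrets; print(secrets.token_hex(32))")

# 32 bytes -> 64 hex characters
echo "${#key}"

# exit status 0 means the key is well-formed lowercase hex
echo "$key" | grep -Eq '^[0-9a-f]{64}$' && echo "key looks valid"
```

If the value you recover from a file fails this shape check, it was probably truncated or pasted with stray whitespace.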
:::warning Why This Happens
Open WebUI's `env.py` file imports models, which import `open_webui.env`, which validates that `WEBUI_SECRET_KEY` exists. Without it, Python crashes before Alembic can even connect to the database.
:::

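Because of this import-time check, it helps to verify the variable before invoking Alembic at all. A small sketch (the generated key is a throwaway example):

```shell
# Report whether WEBUI_SECRET_KEY is available before running alembic
check_secret_key() {
  [ -n "${WEBUI_SECRET_KEY:-}" ] && echo "present" || echo "missing"
}

unset WEBUI_SECRET_KEY
check_secret_key   # prints: missing

# Generate a throwaway key the same way the blocks above do
export WEBUI_SECRET_KEY=$(python3 -c "import secrets; print(secrets.token_hex(32))")
check_secret_key   # prints: present
```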
### "No config file 'alembic.ini' found"

**Cause:** You're in the wrong directory.

**Solution:**

```bash title="Terminal"
# Find your container name if not 'open-webui'
docker ps

# Find alembic.ini location
find /app -name "alembic.ini" 2>/dev/null # Docker
find . -name "alembic.ini" # Local

# Navigate to that directory
cd /app/backend/open_webui # Most common path

# Verify you're in the right place
ls -la alembic.ini
```

### "Target database is not up to date"

**Cause:** Your database version doesn't match the expected schema.

**Diagnosis:**

```bash title="Terminal - Diagnose Version Mismatch"
# Check what the database thinks its version is
alembic current

# Check what the code expects
alembic heads

# Compare
```

**Solution depends on diagnosis:**

<Tabs>
<TabItem value="pending" label="Pending Migrations" default>
**Scenario:** `alembic current` shows an older version than `alembic heads`

**Fix:** You simply need to apply the pending migrations.

```bash title="Terminal"
alembic upgrade head
```
</TabItem>
<TabItem value="mismatch" label="Schema Mismatch">
**Scenario:** `alembic current` shows the correct version, but you still see errors

**Cause:** Someone manually modified the database schema without migrations, or a previous migration partially failed.

**Fix:** Restore from backup; your database is corrupted.

```bash title="Terminal"
# Stop everything
docker stop open-webui

# Restore backup
cp /path/to/webui.db.backup /path/to/webui.db

# Try migration again
alembic upgrade head
```
</TabItem>
<TabItem value="fresh" label="Fresh Database">
**Scenario:** A new database that needs the initial schema

**Fix:** Run migrations from scratch.

```bash title="Terminal"
alembic upgrade head
```
</TabItem>
</Tabs>

:::danger Never Use "alembic stamp" as a Fix
You may see advice to run `alembic stamp head` to "fix" version mismatches. **This is dangerous.**

`alembic stamp` tells Alembic "pretend this migration was applied" without actually running it. This creates permanent database corruption where Alembic thinks your schema is up-to-date when it isn't.

**Only use `alembic stamp <revision>` if:**

- You manually created all tables using `create_all()` and need to mark them as migrated
- You're a developer initializing a fresh database that matches current schema
- You imported a database backup from another system and need to mark it at the correct revision
- You've manually applied migrations via raw SQL and need to update the version tracking

**Never use it to "fix" migration errors or skip failed migrations.**
:::

### Process Hangs After "Will assume non-transactional DDL"

**Understanding the message:** This is **not an error**. It's informational. SQLite doesn't support transactional DDL, so Alembic is warning that migrations can't be rolled back automatically.

**If genuinely stuck:**

<Tabs>
<TabItem value="wait" label="Wait First" default>
Some migrations (especially those adding indexes or modifying large tables) take several minutes.

**Action:** Wait 3-5 minutes before assuming it's stuck.
</TabItem>
<TabItem value="lock" label="Check Database Lock">
Another process might have locked the database.

```bash title="Terminal - Check for Locks"
# Find processes using database file
fuser /app/backend/data/webui.db

# Kill any orphaned processes
pkill -f "open-webui"

# Verify nothing running
ps aux | grep open-webui

# Try migration again
alembic upgrade head
```
</TabItem>
<TabItem value="corrupt" label="Database Corruption">
If the database is corrupted, migration will hang.

```bash title="Terminal - Check Integrity"
sqlite3 /app/backend/data/webui.db "PRAGMA integrity_check;"
```

If the integrity check fails, restore from backup.
</TabItem>
</Tabs>

### Autogenerate Detects Removed Tables

**Symptom:** You ran `alembic revision --autogenerate` and it wants to drop existing tables.

:::warning Don't Run Autogenerate
**Regular users should NEVER run `alembic revision --autogenerate`.** This command is for developers creating new migration files, not for applying existing migrations.

The command you want is `alembic upgrade head` (no `revision`, no `--autogenerate`).
:::

**If you accidentally created a bad migration file:**

```bash title="Terminal - Remove Bad Migration"
# List migration files
ls -la /app/backend/open_webui/migrations/versions/

# Delete the incorrect auto-generated file (newest file)
rm /app/backend/open_webui/migrations/versions/<newest_timestamp>_*.py

# Restore to known good state
git checkout /app/backend/open_webui/migrations/ # If using git
```

**Technical context:** The "autogenerate detects removed tables" issue occurs because Open WebUI's Alembic metadata configuration doesn't import all model definitions. This causes autogenerate to compare against incomplete metadata, thinking tables should be removed. This is a developer-level issue that doesn't affect users running `alembic upgrade`.

### Peewee to Alembic Transition Issues

**Background:** Older Open WebUI versions (pre-0.4.x) used Peewee migrations. Current versions use Alembic.

**Symptoms:**

- Both `migratehistory` and `alembic_version` tables exist
- Errors about "migration already applied"

**What happens automatically:**

1. Open WebUI's `internal/db.py` runs old Peewee migrations first via `handle_peewee_migration()`
2. Then `config.py` runs Alembic migrations via `run_migrations()`
3. Both systems should work transparently

**If automatic transition fails:**

```bash title="Terminal - Manual Transition"
# Check if old Peewee migrations exist
sqlite3 /app/backend/data/webui.db "SELECT * FROM migratehistory;" 2>/dev/null

# If Peewee migrations exist, ensure they completed
# Then run Alembic migrations
cd /app/backend/open_webui
alembic upgrade head
```

:::tip
If upgrading from very old Open WebUI versions (< 0.3.x), consider a fresh install with data export/import rather than attempting to migrate the database schema across multiple major version changes.
:::

### PostgreSQL Foreign Key Errors

:::info PostgreSQL Only
This troubleshooting applies only to PostgreSQL databases. SQLite handles foreign keys differently.
:::

**Symptom:** Errors like `psycopg2.errors.InvalidForeignKey: there is no unique constraint matching given keys for referenced table "user"`

**Cause:** PostgreSQL requires explicit primary key constraints that were missing in older schema versions.

**Solution for PostgreSQL:**

```sql title="PostgreSQL Fix"
-- Connect to your PostgreSQL database
psql -h localhost -U your_user -d open_webui_db

-- Add missing primary key constraint (PostgreSQL syntax)
ALTER TABLE public."user" ADD CONSTRAINT user_pk PRIMARY KEY (id);

-- Verify constraint was added
\d+ public."user"
```

**Note:** The `public.` schema prefix and quoted `"user"` identifier are PostgreSQL-specific. This SQL will not work on SQLite or MySQL.

## Advanced Operations

### Production and Multi-Server Deployments

:::warning Rolling Updates Can Cause Failures
In multi-server deployments, running different code versions simultaneously during rolling updates can cause errors if the new code expects schema changes that haven't been applied yet, or if old code is incompatible with the new schema.
:::

**Recommended deployment strategies:**

<Tabs>
<TabItem value="separate-job" label="Separate Migration Job" default>

Run migrations as a one-time job before deploying new application code:

```bash title="Kubernetes Job Example"
# 1. Run migration job
kubectl apply -f migration-job.yaml

# 2. Wait for completion
kubectl wait --for=condition=complete job/openwebui-migration

# 3. Deploy new application version
kubectl rollout restart deployment/openwebui
```

This ensures the schema is updated before any new code runs.

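The `migration-job.yaml` referenced above is not shown in this guide; a minimal sketch follows. The image tag and command mirror the Docker examples elsewhere on this page, but the job name and spec fields are assumptions to adapt to your cluster (volumes, env, namespace):

```shell
# Write a minimal migration Job manifest (sketch; adapt to your cluster)
cat <<'EOF' > migration-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: openwebui-migration
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: ghcr.io/open-webui/open-webui:main
          command: ["bash", "-c", "cd /app/backend/open_webui && alembic upgrade head"]
EOF

grep -c "alembic upgrade head" migration-job.yaml   # prints: 1
```

`backoffLimit: 0` prevents Kubernetes from retrying a half-applied migration automatically, which matches the backup-first recovery advice below.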
</TabItem>
<TabItem value="maintenance" label="Maintenance Window">

Take the application offline during migration:

```bash title="Maintenance Workflow"
# 1. Stop all application instances
docker-compose down

# 2. Run migrations
docker run --rm -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main \
  bash -c "cd /app/backend/open_webui && alembic upgrade head"

# 3. Start all instances with new code
docker-compose up -d
```

This is the simplest approach, but it requires downtime.

</TabItem>
<TabItem value="blue-green" label="Blue-Green Deployment">

Maintain two identical environments and switch traffic after migration:

```bash title="Blue-Green Workflow"
# 1. Green (new) environment gets the migrated database
# 2. Deploy new code to the green environment
# 3. Test the green environment thoroughly
# 4. Switch traffic from blue to green
# 5. Keep blue as an instant rollback option
```

Zero downtime, but requires double infrastructure.

</TabItem>
</Tabs>

### Generate SQL Without Applying

For review or audit purposes, generate the SQL that would be executed:

```bash title="Terminal - Generate Migration SQL"
# Generate SQL for pending migrations
alembic upgrade head --sql > /tmp/migration-plan.sql

# Review what would be applied
cat /tmp/migration-plan.sql
```

**Use cases:**

- DBA review in enterprise environments
- Understanding what changes will occur
- Debugging migration issues
- Applying migrations in restricted environments

:::info When to Use This
This is advanced functionality for DBAs or DevOps engineers. Regular users should just run `alembic upgrade head` directly.
:::

### Offline Migration (No Network)

If your database server is offline or isolated:

```bash title="Terminal - Offline Migration Workflow"
# 1. Generate SQL on development machine
alembic upgrade head --sql > upgrade-to-head.sql

# 2. Transfer SQL file to production
scp upgrade-to-head.sql production-server:/tmp/

# 3. On production, apply SQL manually
sqlite3 /app/backend/data/webui.db < /tmp/upgrade-to-head.sql

# 4. Update alembic_version table manually
sqlite3 /app/backend/data/webui.db \
  "UPDATE alembic_version SET version_num='<target_revision>';"
```

:::danger Manual alembic_version Updates
Only update `alembic_version` if you've **actually applied** the corresponding migrations. Lying to Alembic about migration state causes permanent corruption.
:::

## Recovery Procedures

### Recovery from Failed Migration

:::danger SQLite Has No Rollback
SQLite migrations are **non-transactional**. If a migration fails halfway through, your database is in a partially migrated state. The only safe recovery is restoring from backup.
:::

**Symptoms of partial migration:**

- Some tables exist, others don't match the expected schema
- Foreign key violations
- Missing columns that the migration should have added
- Application errors about missing database fields

**Recovery steps:**

```bash title="Terminal - Restore from Backup"
# 1. Stop Open WebUI immediately
docker stop open-webui

# 2. Verify backup integrity
sqlite3 /path/to/webui.db.backup "PRAGMA integrity_check;"

# 3. Restore backup
cp /path/to/webui.db.backup /path/to/webui.db

# 4. Investigate root cause before retrying
docker logs open-webui > migration-failure-logs.txt

# 5. Get help with logs before attempting migration again
```

:::warning Do Not Use "stamp" to Fix Failed Migrations
Never use `alembic stamp` to mark a partially failed migration as complete. This leaves your database in a corrupt state.
:::

### Validate Database Integrity

Before and after migrations, verify your database isn't corrupted:

<Tabs groupId="database-type">
<TabItem value="sqlite" label="SQLite" default>
```bash title="Terminal - SQLite Integrity Check"
sqlite3 /app/backend/data/webui.db "PRAGMA integrity_check;"

# Should output: ok
# If it outputs anything else, the database is corrupted
```
</TabItem>
<TabItem value="postgresql" label="PostgreSQL">
```bash title="Terminal - PostgreSQL Integrity Check"
# Check for table corruption
psql -h localhost -U user -d dbname -c "SELECT * FROM pg_stat_database WHERE datname='open_webui_db';"

# Vacuum and analyze
psql -h localhost -U user -d dbname -c "VACUUM ANALYZE;"
```
</TabItem>
</Tabs>

## Post-Migration Checklist

After a successful migration, verify:

- [ ] `alembic current` shows `(head)` indicating the latest version
- [ ] Open WebUI starts without errors
- [ ] Can log in successfully
- [ ] Core features work (chat, model selection, etc.)
- [ ] No error messages in logs
- [ ] Data appears intact (users, chats, models)
- [ ] Backup can be safely archived after 1 week of stability

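The first checklist item can be scripted. A sketch that inspects `alembic current` output (the sample revision strings are illustrative only):

```shell
# Reads `alembic current` output on stdin and reports migration status
check_head() {
  grep -q "(head)" && echo "schema up-to-date" || echo "migrations pending"
}

# In practice: alembic current | check_head
echo "922e7a387820 (head)" | check_head   # prints: schema up-to-date
echo "8a1b2c3d4e5f" | check_head          # prints: migrations pending
```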
:::tip Keep Recent Backups
Retain backups from before major migrations for at least 1-2 weeks. Issues sometimes appear days later during specific workflows.
:::

## Getting Help

If migrations continue to fail after following this guide:

**Gather diagnostic information:**

```bash title="Terminal - Collect Diagnostic Data"
# Version information
docker logs open-webui 2>&1 | head -20 > diagnostics.txt

# Migration state
cd /app/backend/open_webui
alembic current -v >> diagnostics.txt
alembic history >> diagnostics.txt

# Database info (SQLite)
sqlite3 /app/backend/data/webui.db ".tables" >> diagnostics.txt
sqlite3 /app/backend/data/webui.db "SELECT * FROM alembic_version;" >> diagnostics.txt

# Full migration log
alembic upgrade head 2>&1 >> diagnostics.txt
```

**Where to get help:**

1. **Open WebUI GitHub Issues:** https://github.com/open-webui/open-webui/issues
   - Search existing issues first
   - Include your `diagnostics.txt` file
   - Specify your Open WebUI version and installation method

2. **Open WebUI Discord Community**
   - Real-time support from community members
   - Share error messages and diagnostics

3. **Provide this information:**
   - Open WebUI version
   - Installation method (Docker/local)
   - Database type (SQLite/PostgreSQL)
   - Output of `alembic current` and `alembic history`
   - Complete error messages
   - What you were doing when it failed

:::note
Do not share your `webui.db` database file publicly; it contains user credentials and sensitive data. Only share the diagnostic text output.
:::

---

This tutorial is a community contribution and is not supported by the Open WebUI team. It serves as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/tutorials/tips/contributing-tutorial/).
:::

## Method 1: Streamable HTTP (Recommended)

This method connects directly to Notion's hosted MCP endpoint (`https://mcp.notion.com/mcp`). It utilizes standard OAuth and is **natively supported** by Open WebUI without extra containers.

:::info Preferred Method
**Streamable HTTP** is preferred for its simplicity and enhanced security. It handles authentication via Notion's official OAuth flow, meaning you do not need to manually manage secrets or integration tokens.
:::

### 1. Configure Tool
You can automatically prefill the connection settings by importing the JSON configuration below.

1. Navigate to **Admin Panel > Settings > External Tools**.
2. Click the **+** (Plus) button to add a new tool.
3. Click **Import** (top right of the modal).
4. Paste the following JSON snippet:

```json title="Notion Remote MCP Configuration"
[
  {
    "type": "mcp",
    "url": "https://mcp.notion.com/mcp",
    "spec_type": "url",
    "spec": "",
    "path": "openapi.json",
    "auth_type": "oauth_2.1",
    "key": "",
    "info": {
      "id": "ntn",
      "name": "Notion",
      "description": "A note-taking and collaboration platform that allows users to create, organize, and share notes, databases, and other content."
    }
  }
]
```

5. **Register:** Click the **Register Client** button (next to the Auth dropdown).
6. Click **Save**.

### 2. Authenticate & Grant Access
Once the tool is added, you must authenticate to link your specific workspace.

1. Open any chat window.
2. Click the **+** (Plus) button in the chat input bar.
3. Navigate to **Integrations > Tools**.
4. Toggle the **Notion** switch to **ON**.

5. **Authorize:** You will be redirected to a "Connect with Notion MCP" screen.
   * Ensure the correct **Workspace** is selected in the dropdown.
   * Click **Continue**.

:::note Security: Frequent Re-authentication
For security reasons, Notion's OAuth session may expire after a period of inactivity or if you restart your Open WebUI instance. If this happens, you will see a `Failed to connect to MCP server 'ntn'` error.

This is **intended behavior** by Notion to keep your workspace secure. To refresh your session, revisit the steps above to complete the "Connect with Notion MCP" authorization flow again.
:::

---

## Method 2: Self-Hosted via MCPO (Advanced)

This method is for advanced users who prefer to run the MCP server locally within their own infrastructure using **MCPO**. Unlike Streamable HTTP, this method requires you to manually manage your own credentials.

Direct local execution (stdio) of MCP servers is not natively supported in Open WebUI. To run the Notion MCP server locally (using Docker or Node.js) within your infrastructure, you must use **MCPO** to bridge the connection.

:::info Prerequisites
To use this method, you must first create an internal integration to obtain a **Secret Key**. Please complete the **[Creating an Internal Integration](#creating-an-internal-integration)** section below before proceeding with the configuration steps here.
:::

### 1. Configure MCPO
Follow the installation instructions in the [MCPO Repository](https://github.com/open-webui/mcpo) to get it running. Configure your MCPO instance to run the Notion server using one of the runtimes below by adding the JSON block to your `mcpo-config.json` file.

**Note:** Replace `secret_YOUR_KEY_HERE` with the secret obtained from the [Creating an Internal Integration](#creating-an-internal-integration) section.

<Tabs>
<TabItem value="npx" label="Node (npx)" default>
This configuration uses the official Node.js package.

```json title="mcpo-config.json"
{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": [
        "-y",
        "@notionhq/notion-mcp-server"
      ],
      "env": {
        "NOTION_TOKEN": "secret_YOUR_KEY_HERE"
      }
    }
  }
}
```
</TabItem>
<TabItem value="docker" label="Docker">
This configuration uses the official Docker image.

```json title="mcpo-config.json"
{
  "mcpServers": {
    "notion": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "-e",
        "NOTION_TOKEN",
        "mcp/notion"
      ],
      "env": {
        "NOTION_TOKEN": "secret_YOUR_KEY_HERE"
      }
    }
  }
}
```
</TabItem>
</Tabs>

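Before wiring MCPO into Open WebUI, you can confirm it is actually serving the Notion tool. A sketch assuming MCPO's default port `8000` (adjust `MCPO_URL` to your deployment):

```shell
# Probe MCPO's OpenAPI endpoint for the notion route (URL/port are assumptions)
MCPO_URL="${MCPO_URL:-http://localhost:8000}"
if command -v curl >/dev/null 2>&1 && curl -fsS "$MCPO_URL/notion/openapi.json" >/dev/null 2>&1; then
  mcpo_status="up"
else
  mcpo_status="not reachable"
fi
echo "MCPO notion endpoint: $mcpo_status"
```

If the endpoint is not reachable, fix the MCPO deployment before continuing; Open WebUI fetches the same `openapi.json` path when you register the tool below.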
### 2. Connect Open WebUI
Once MCPO is running and configured with Notion:

1. Navigate to **Admin Panel > Settings > External Tools**.
2. Click the **+** (Plus) button.
3. Click **Import** (top right of the modal).
4. Paste the following JSON snippet (update the URL with your MCPO address):

```json title="MCPO Connection JSON"
[
  {
    "type": "openapi",
    "url": "http://<YOUR_MCPO_IP>:<PORT>/notion",
    "spec_type": "url",
    "spec": "",
    "path": "openapi.json",
    "auth_type": "bearer",
    "key": "",
    "info": {
      "id": "notion-local",
      "name": "Notion (Local)",
      "description": "Local Notion integration via MCPO"
    }
  }
]
```
5. Click **Save**.

---

## Creating an Internal Integration

This section is required for **Method 2**. Creating an internal integration within Notion ensures you have the necessary credentials and permission scopes readily available.

### 1. Create Integration
1. Navigate to **[Notion My Integrations](https://www.notion.so/my-integrations)**.
2. Click the **+ New integration** button.
3. Fill in the required fields:
   * **Integration Name:** Give it a recognizable name (e.g., "Open WebUI MCP").

### 2. Configure Capabilities & Copy Secret
Once saved, you will be directed to the configuration page.
1. **Copy Secret:** Locate the **Internal Integration Secret** field. Click **Show** and copy this key. You will need it for MCPO configuration.
2. **Review Capabilities:** Ensure the following checkboxes are selected under the "Capabilities" section:
   * ✅ **Read content**
   * ✅ **Update content**

3. Click **Save changes** if you modified any capabilities.

:::warning Security: Risk to Workspace Data
While the Notion MCP server limits the scope of the API (e.g., databases cannot be deleted), exposing your workspace to LLMs carries a **non-zero risk**.

**Security-conscious users** can create a safer, **Read-Only** integration by unchecking **Update content** and **Insert content**. The AI will be able to search and answer questions based on your notes but will be physically unable to modify or create pages.
:::

:::danger Secret Safety
Your **Internal Integration Secret** allows access to your Notion data.
:::

### 3. Grant Page Access (Manual)

:::danger Critical Step: Permissions
By default, your new internal integration has **zero access** to your workspace. It cannot see *any* pages until you explicitly invite it. If you skip this step, the AI will return "Object not found" errors.
:::

You can grant access centrally or on a per-page basis.

@@ -87,172 +243,11 @@ Still in the Notion Integration dashboard:
|
||||
|
||||

|
||||
|
||||
## Configuration
|
||||
---
|
||||
|
||||
There are two ways to connect Notion. We recommend **Streamable HTTP** for the easiest setup experience (OAuth), or **Local CLI** for advanced control using your integration token.
|
||||
## Configuration: Always On (Optional)
|
||||
|
||||
The **Streamable HTTP** method is natively supported and recommended for most users for the easiest setup experience (OAuth).
|
||||
|
||||
To run the server locally (using Docker or Node.js), you must use the **MCPO Bridge**.
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="http" label="Method 1: Streamable HTTP (Recommended)" default>
|
||||
This method connects directly to Notion's hosted MCP endpoint (`https://mcp.notion.com/mcp`). It utilizes standard OAuth and is **natively supported** by Open WebUI without extra containers.
|
||||
|
||||
### Quick Setup via Import
|
||||
You can automatically prefill the connection settings by importing the JSON configuration below.
|
||||
|
||||
1. Navigate to **Admin Panel > Settings > External Tools**.
|
||||
2. Click the **+** (Plus) button to add a new tool.
|
||||
3. Click **Import** (top right of the modal).
|
||||
4. Paste the following JSON snippet:
|
||||
|
||||
```json title="Notion Remote MCP Configuration"
|
||||
[
|
||||
{
|
||||
"type": "mcp",
|
||||
"url": "https://mcp.notion.com/mcp",
|
||||
"spec_type": "url",
|
||||
"spec": "",
|
||||
"path": "openapi.json",
|
||||
"auth_type": "oauth_2.1",
|
||||
"key": "",
|
||||
"info": {
|
||||
"id": "ntn",
|
||||
"name": "Notion",
|
||||
"description": "A note-taking and collaboration platform that allows users to create, organize, and share notes, databases, and other content."
|
||||
}
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
5. **Enter Key:** Paste your **Internal Integration Secret** (starts with `secret_`) into the "Key" field.
|
||||
6. **Register:** Click the **Register Client** button (next to the Auth dropdown).
|
||||
7. Click **Save**.
|
||||
|
||||

|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="mcpo" label="Method 2: Self-Hosted via MCPO (Advanced)">
|
||||
Direct local execution (stdio) of MCP servers is not natively supported in Open WebUI. To run the Notion MCP server using `docker` or `npx` within your infrastructure, you must use **MCPO**.
|
||||
|
||||
MCPO acts as a bridge, running your local commands and exposing them to Open WebUI via a local HTTP endpoint.
|
||||
|
||||
### Step 1: Deploy MCPO
|
||||
Follow the installation instructions in the [MCPO Repository](https://github.com/open-webui/mcpo) to get it running (usually done via Docker).
|
||||
|
||||
### Step 2: Configure MCPO
|
||||
Configure your MCPO instance to run the Notion server using one of the runtimes below. Add the appropriate JSON block to your `mcpo-config.json` file.
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="npx" label="Node (npx)" default>
|
||||
This configuration uses the official Node.js package.
|
||||
|
||||
```json title="mcpo-config.json"
|
||||
{
|
||||
"mcpServers": {
|
||||
"notion": {
|
||||
"command": "npx",
|
||||
"args": [
|
||||
"-y",
|
||||
"@notionhq/notion-mcp-server"
|
||||
],
|
||||
"env": {
|
||||
"NOTION_TOKEN": "secret_YOUR_KEY_HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="docker" label="Docker">
|
||||
This configuration runs the server as an isolated container.
|
||||
|
||||
```json title="mcpo-config.json"
|
||||
{
|
||||
"mcpServers": {
|
||||
"notion": {
|
||||
"command": "docker",
|
||||
"args": [
|
||||
"run",
|
||||
"--rm",
|
||||
"-i",
|
||||
"-e",
|
||||
"NOTION_TOKEN",
|
||||
"mcp/notion"
|
||||
],
|
||||
"env": {
|
||||
"NOTION_TOKEN": "secret_YOUR_KEY_HERE"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Step 3: Connect Open WebUI
|
||||
Once MCPO is running and configured with Notion:
|
||||
|
||||
1. Navigate to **Admin Panel > Settings > External Tools**.
|
||||
2. Click the **+** (Plus) button.
|
||||
3. Click **Import** (top right of the modal).
|
||||
4. Paste the following JSON snippet (update the URL with your MCPO address):
|
||||
|
||||
```json title="MCPO Connection JSON"
|
||||
[
|
||||
{
|
||||
"type": "openapi",
|
||||
"url": "http://<YOUR_MCPO_IP>:<PORT>/notion",
|
||||
"spec_type": "url",
|
||||
"spec": "",
|
||||
"path": "openapi.json",
|
||||
"auth_type": "bearer",
|
||||
"key": "",
|
||||
"info": {
|
||||
"id": "notion-local",
|
||||
"name": "Notion (Local)",
|
||||
"description": "Local Notion integration via MCPO"
|
||||
}
|
||||
}
|
||||
]
|
||||
```
|
||||
5. Click **Save**.
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Enabling the Tool
|
||||
|
||||
After configuring the connection in the Admin Panel, you must enable the tool for the AI to use it.
|
||||
|
||||
:::tip Initial Authentication
|
||||
If you are using **Method 1 (Streamable HTTP)**, you must perform the On-Demand step below at least once to trigger the OAuth flow. If using **Method 2 (MCPO)**, authentication is handled by the API key in your configuration.
|
||||
:::
|
||||
|
||||
### Option 1: On-Demand (Per Chat)

1. Open a new chat.
2. Click the **+** (Plus) button in the chat input bar.
3. Navigate to **Integrations > Tools**.
4. Toggle the **Notion** switch to **ON**.

5. **Authorize:** (Method 1 Only) You will be redirected to a "Connect with Notion MCP" screen.
   * Ensure the correct **Workspace** (the one you configured in Step 1) is selected in the dropdown.
   * Click **Continue**.

:::note Security: Frequent Re-authentication
For security reasons, Notion's OAuth session may expire after a period of inactivity or if you restart your Open WebUI instance. If this happens, you will see a `Failed to connect to MCP server 'ntn'` error.

This is **intended behavior** by Notion to keep your workspace secure. If it occurs, revisit steps 1-4 of this option and complete the "Connect with Notion MCP" authorization flow again to refresh your session.
:::

### Option 2: Always On (Model Default)

By default, users must toggle the tool **ON** in the chat menu. You can configure a specific model to have Notion access enabled by default for every conversation.

1. Go to **Workspace > Models**.
2. Click the **pencil icon** to edit a model.
4. Check the box for **Notion**.
5. Click **Save & Update**.

## Building a Specialized Notion Agent (Optional)

For the most reliable experience, we recommend creating a dedicated "Notion Assistant" model. This allows you to provide a specialized **System Prompt**, a helpful **Knowledge Base**, and quick-start **Prompt Suggestions** that teach the model how to navigate Notion's structure.

```

</details>

7. **Attach Knowledge Base:**
   * In the **Knowledge** section, click **Select Knowledge**.
   * In the modal that appears, find and select the **Notion MCP Docs** knowledge base you created in Step 1.

:::warning Performance Tuning
While the knowledge base helps the model understand Notion's capabilities, injecting large amounts of documentation can sometimes interfere with tool calling on smaller models (overloading the context).

If you notice the model failing to call tools correctly or hallucinating parameters, **detach the knowledge base** and rely solely on the System Prompt provided above.
:::

8. **Add Prompt Suggestions:**

Under the **Prompts** section, click the **+** button to add a few helpful starters.

## Supported Tools & Usage

Once enabled, the model will have access to a powerful suite of tools to manage your Notion workspace. The server automatically converts Notion's block-based structure into Markdown for the AI, and converts the AI's Markdown back into Notion blocks.

:::tip Workflow Best Practice
LLMs cannot "browse" Notion like a human. For most actions, the model first needs to know the **Page ID or URL**. Always ask the model to **search** for a page first before asking it to read or modify it.
:::

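When a user pastes a Notion link, the page ID is simply the 32-hex-character tail of the URL, usually written as a dashed UUID in API calls. A small illustrative sketch of that mapping (the helper name and example URL are ours, not part of the MCP server):

```python
import re

def extract_notion_page_id(url: str):
    """Pull the 32-hex-digit page ID out of a Notion URL and
    return it as the dashed UUID form most API calls expect."""
    match = re.search(r"([0-9a-f]{32})(?:[?#]|$)", url)
    if not match:
        return None
    raw = match.group(1)
    return f"{raw[0:8]}-{raw[8:12]}-{raw[12:16]}-{raw[16:20]}-{raw[20:32]}"

url = "https://www.notion.so/My-Page-0123456789abcdef0123456789abcdef"
print(extract_notion_page_id(url))
# 01234567-89ab-cdef-0123-456789abcdef
```

In practice you rarely need to do this by hand: `notion-search` and `notion-fetch` accept URLs and return IDs for you.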
### 🔎 Search & Retrieval

- **`notion-search`** – Full‑text / metadata search across Notion (and linked tools)
  - **Input:** query string (e.g., `ready for dev`)
  - **Returns:** list of object IDs + brief metadata
  - **Prompt example:** “Find all project pages that mention **‘ready for dev’**.”
  - **Note:** IDs returned here are required for almost every other command.

- **`notion-fetch`** *(aka `read-page`)* – Pull page or database content by URL or ID
  - **Input:** page/database URL **or** ID
  - **Returns:** Markdown‑formatted content of the page/database
  - **Prompt example:** “What are the product requirements from this ticket `https://notion.so/page-url`?”
  - **Note:** Gives you a clean Markdown view, ready for further processing.

### 🛠️ Content Management

- **`notion-create-pages`** – Create a brand‑new page
  - **Input:** parent page ID, title, property map, body (Markdown format)
  - **Returns:** new page ID & URL
  - **Prompt example:** “Create a meeting‑notes page for today’s stand‑up with action items.”
  - **Key:** **Parent page ID is mandatory**.

- **`notion-update-page`** – Patch a page’s **properties** (status, tags, dates, etc.)
  - **Input:** page ID + property map
  - **Returns:** updated page object
  - **Prompt example:** “Change the status of this task from **‘In Progress’** to **‘Complete’**.”
  - **Key:** Does **not** edit the page’s body blocks.

- **`notion-append-block`** – Add a block (text, checklist, heading, etc.) to the end of a page
  - **Input:** page ID + block payload (JSON format)
  - **Returns:** updated page version
  - **Prompt example:** “Add a checklist item to the bottom of the shopping‑list page.”

- **`notion-move-pages`** – Move one or many pages/databases under a new parent
  - **Input:** source page/database ID(s) + destination parent ID
  - **Returns:** new parent relationship (the page now lives under the target)
  - **Prompt example:** “Move my weekly meeting‑notes page to the **‘Team Meetings’** page.”

- **`notion-duplicate-page`** – Clone a page (asynchronous – returns a job ID)
  - **Input:** source page ID (optional target parent)
  - **Returns:** job ID → duplicated page ID once the job finishes
  - **Prompt example:** “Duplicate my project‑template page for the new Q3 initiative.”

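For reference, block payloads follow Notion's public block schema. A minimal sketch of a checklist (`to_do`) block, the kind `notion-append-block` would add for the shopping-list example above (the model and server normally construct this JSON for you, and the exact wrapper your server expects may differ):

```json
{
  "type": "to_do",
  "to_do": {
    "rich_text": [
      { "type": "text", "text": { "content": "Buy oat milk" } }
    ],
    "checked": false
  }
}
```
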
### 📊 Database Management

- **`notion-create-database`** – Spin up a new database with a custom schema
  - **Input:** parent page ID, title, property definitions (type, name, options)
  - **Returns:** new database ID & URL
  - **Prompt example:** “Create a database to track customer feedback with fields for **name**, **priority**, and **status**.”

- **`notion-update-database`** – Alter a database’s schema (add/rename fields) or rename the database itself
  - **Input:** database ID + schema changes (add property, rename, etc.)
  - **Returns:** updated database object
  - **Prompt example:** “Add a **‘Status’** field to our project database to track completion.”

### 💬 Collaboration & Workspace

- **`notion-create-comment`** – Post a comment on a page
  - **Input:** page ID + comment text
  - **Returns:** comment ID & timestamp
  - **Prompt example:** “Leave a note on the quarterly review page about budget concerns.”

- **`notion-get-comments`** – List every comment on a page (supports pagination)
  - **Input:** page ID
  - **Returns:** array of comment objects
  - **Prompt example:** “List all comments on the project‑requirements section.”

- **`notion-get-users`** – Fetch **all** workspace members
  - **Input:** *(none)*
  - **Returns:** array of user objects
  - **Prompt example:** “Who are the members of this workspace?”

- **`notion-get-user`** – Get a single user’s profile (by ID or email)
  - **Input:** user ID or email address
  - **Returns:** user object (name, avatar, email, etc.)
  - **Prompt example:** “Look up the profile of the person assigned to this task.”

- **`notion-get-teams`** – Retrieve all **teamspaces** (formerly “teams”) in the workspace
  - **Input:** *(none)*
  - **Returns:** array of team objects

- **`notion-get-self`** – Information about the bot itself and the workspace it’s linked to
  - **Input:** *(none)*
  - **Returns:** bot metadata + workspace metadata (ID, name, domain, etc.)
  - **Prompt example:** “Which Notion workspace am I currently connected to?”

For a complete list of available tools, their descriptions, and specific usage examples, please refer to the **[official Notion MCP documentation](https://developers.notion.com/docs/mcp-supported-tools)**.

## Rate Limits

Standard [API request limits](https://developers.notion.com/reference/request-limits) apply to your use of Notion MCP, totaled across all tool calls.

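If you call the tool endpoints from your own scripts rather than through the chat UI, a simple retry with exponential backoff keeps you under these limits. A generic sketch, with a stand-in `RateLimitError` playing the role of an HTTP 429 response (neither name comes from Notion's SDK):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from the Notion API."""

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry `call` when it raises RateLimitError, doubling the wait
    (plus a little jitter) after each failed attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the 429 to the caller
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Demo: a call that is rate-limited twice, then succeeds.
attempts = 0
def flaky_call():
    global attempts
    attempts += 1
    if attempts < 3:
        raise RateLimitError()
    return "ok"

print(with_backoff(flaky_call, base_delay=0.01))  # ok
```

The doubling delay spreads retries out so bursts settle back under the roughly 3-requests-per-second budget.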
If you encounter rate limit errors, prompt your model to reduce the number of parallel requests it makes.

## Troubleshooting

### Connection Errors

#### `Failed to connect to MCP server 'ntn'`


4. This will trigger the redirect to Notion's authorization page to complete the "Connect with Notion MCP" authorization flow again.
5. Once authorized successfully, the connection will work across all chats again, including for models with the tool enabled by default.

#### `OAuth callback failed: mismatching_state`

If you receive this red error toast when registering the client or connecting via the tool toggle, it is likely due to a URL mismatch.

* **Cause:** You are likely accessing Open WebUI via `localhost` (e.g., `http://localhost:3000`), but your instance is configured with a public domain via the `WEBUI_URL` environment variable (e.g., `https://chat.mydomain.com`). The OAuth session state created on `localhost` is lost when the callback redirects to your public domain.
* **Fix:** Access your Open WebUI instance using the **exact URL** defined in `WEBUI_URL` (your public domain) and perform the setup again. **Do not use `localhost` for OAuth setups if a domain is configured.**

### Usage Errors

#### `Object not found`

* **Cause:** The Integration Token is valid, but the specific page has not been shared with the integration.
* **Fix:** In Notion, go to your Integration settings > **Access** tab and ensure the page is checked, or visit the page directly and check the **Connections** menu to ensure your integration is listed and selected.

#### `Tool execution failed` (Local Method)

* **Cause:** Open WebUI cannot execute the local command (`npx`/`docker`) because it is missing from the container, or the configuration is incorrect.
* **Fix:** Native local execution is not supported. Use **MCPO** (Method 2) to bridge these commands rather than entering them directly into Open WebUI's config, or switch to **Method 1 (Streamable HTTP)** in the Configuration section above, which runs on Notion's servers and requires no local dependencies.

#### `missing_property` when creating a page

* **Cause:** The model is trying to create a page without specifying a **Parent ID**. Notion requires every page to exist inside another page or database.
* **Fix:** Instruct the model in your prompt: *"Search for my 'Notes' page first, get its ID, and create the new page inside there."*

#### `RateLimitedError` (429)

* **Cause:** You have exceeded Notion's API limits (approximately 3 requests/second).
* **Fix:** Ask the model to perform actions sequentially rather than all at once (e.g., "Search for X, then wait, then search for Y").
