mirror of
https://github.com/open-webui/docs.git
synced 2026-03-27 13:28:37 +07:00
update various docs
@@ -15,16 +15,16 @@ This guide explains how to optimize your setup by configuring a dedicated, light

## Why Does Open-WebUI Feel Slow?

By default, Open-WebUI has several background tasks that can make it feel like magic but can also place a heavy load on local resources:

- **Title Generation**
- **Tag Generation**
- **Autocomplete Generation** (this function triggers on every keystroke)
- **Search Query Generation**

Each of these features makes asynchronous requests to your model. For example, continuous calls from the autocomplete feature can significantly delay responses on devices with limited memory or processing power, such as a Mac with 32GB of RAM running a 32B quantized model.

Optimizing the task model can help isolate these background tasks from your main chat application, improving overall responsiveness.

:::
@@ -78,6 +78,109 @@ Implementing these recommendations can greatly improve the responsiveness of Ope

---

## ⚙️ Environment Variables for Performance

You can also configure performance-related settings via environment variables. Add these to your Docker Compose file or `.env` file.

:::tip

Many of these settings can also be configured directly in the **Admin Panel > Settings** interface. Environment variables are useful for initial deployment configuration or when managing settings across multiple instances.

:::
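As a quick illustration (the file contents and image tag here are hypothetical examples, not recommended values), settings like these can be collected in a `.env` file, which Docker Compose reads automatically from the project directory:

```shell
# Hypothetical .env file with a few of the performance settings from this page
cat > .env <<'EOF'
TASK_MODEL=llama3.2:3b
ENABLE_AUTOCOMPLETE_GENERATION=False
MODELS_CACHE_TTL=300
EOF

# With plain docker (no Compose), the same file can be passed explicitly:
#   docker run --env-file .env ghcr.io/open-webui/open-webui:main
grep -c '=' .env   # prints 3 (one line per KEY=value pair)
```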
### Task Model Configuration

Set a dedicated lightweight model for background tasks:

```bash
TASK_MODEL=llama3.2:3b

# For OpenAI-compatible endpoints
TASK_MODEL_EXTERNAL=gpt-4o-mini
```

### Disable Unnecessary Features

```bash
# Disable automatic title generation
ENABLE_TITLE_GENERATION=False

# Disable follow-up question suggestions
ENABLE_FOLLOW_UP_GENERATION=False

# Disable autocomplete suggestions (triggers on every keystroke - high impact!)
ENABLE_AUTOCOMPLETE_GENERATION=False

# Disable automatic tag generation
ENABLE_TAGS_GENERATION=False

# Disable search query generation for RAG (if not using web search)
ENABLE_SEARCH_QUERY_GENERATION=False

# Disable retrieval query generation
ENABLE_RETRIEVAL_QUERY_GENERATION=False
```

### Enable Caching and Optimization

```bash
# Cache model list responses (seconds) - reduces API calls
MODELS_CACHE_TTL=300

# Cache LLM-generated search queries - eliminates duplicate LLM calls when both web search and RAG are active
ENABLE_QUERIES_CACHE=True

# Convert base64 images to file URLs - reduces response size and database strain
ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION=True

# Batch streaming tokens to reduce CPU load (recommended: 5-10 for high concurrency)
CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE=5

# Enable gzip compression for HTTP responses (enabled by default)
ENABLE_COMPRESSION_MIDDLEWARE=True
```

### Database and Persistence

```bash
# Disable real-time chat saving for better performance (trades off data persistence)
ENABLE_REALTIME_CHAT_SAVE=False
```

### Network Timeouts

```bash
# Increase timeout for slow models (default: 300 seconds)
AIOHTTP_CLIENT_TIMEOUT=300

# Faster timeout for model list fetching (default: 10 seconds)
AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=5
```

### RAG Performance

```bash
# Enable parallel embedding for faster document processing (requires sufficient resources)
RAG_EMBEDDING_BATCH_SIZE=100
```

### High Concurrency Settings

For larger instances with many concurrent users:

```bash
# Increase thread pool size (default is 40)
THREAD_POOL_SIZE=500
```
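There is no official sizing formula for the thread pool; as a rough, hypothetical heuristic you might scale it with the number of CPU cores and tune from there:

```shell
# Hypothetical sizing heuristic (not an official recommendation):
# start at roughly 8 threads per core for high-concurrency deployments,
# then adjust based on observed load.
cores=$(getconf _NPROCESSORS_ONLN)
echo "THREAD_POOL_SIZE=$(( cores * 8 ))"
```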
:::info

For a complete list of environment variables, see the [Environment Variable Configuration](/getting-started/env-configuration) documentation.

:::

---

## 💡 Additional Tips

- **Monitor System Resources:** Use your operating system’s tools (such as Activity Monitor on macOS or Task Manager on Windows) to keep an eye on resource usage.
@@ -87,4 +190,12 @@ Implementing these recommendations can greatly improve the responsiveness of Ope

---

## Related Guides

- [Reduce RAM Usage](/tutorials/tips/reduce-ram-usage) - For memory-constrained environments like Raspberry Pi
- [SQLite Database Overview](/tutorials/tips/sqlite-database) - Database schema, encryption, and advanced configuration
- [Environment Variable Configuration](/getting-started/env-configuration) - Complete list of all configuration options

---

By applying these configuration changes, you'll support a more responsive and efficient Open-WebUI experience, allowing your local LLM to focus on delivering high-quality chat interactions without unnecessary delays.
@@ -5,23 +5,164 @@ title: "Reduce RAM Usage"

## Reduce RAM Usage

If you are deploying Open WebUI in a RAM-constrained environment (such as a Raspberry Pi, small VPS, or shared hosting), there are several strategies to significantly reduce memory consumption.

On a Raspberry Pi 4 (arm64) with version v0.3.10, these optimizations reduced idle memory consumption from >1GB to ~200MB (as observed with `docker container stats`).

---

## Quick Start

Set the following environment variables for immediate RAM savings:

```bash
# Use external embedding instead of local SentenceTransformers
RAG_EMBEDDING_ENGINE=ollama

# Use external Speech-to-Text instead of local Whisper
AUDIO_STT_ENGINE=openai
```

:::tip

These settings can also be configured in the **Admin Panel > Settings** interface - set RAG embedding to Ollama or OpenAI, and Speech-to-Text to OpenAI or WebAPI.

:::

---

## Why Does Open WebUI Use So Much RAM?

Much of the memory consumption comes from locally loaded ML models. Even when using an external LLM (OpenAI or a separate Ollama instance), Open WebUI may load additional models for:

| Feature | Default | RAM Impact | Solution |
|---------|---------|------------|----------|
| **RAG Embedding** | Local SentenceTransformers | ~500-800MB | Use Ollama or OpenAI embeddings |
| **Speech-to-Text** | Local Whisper | ~300-500MB | Use OpenAI or WebAPI |
| **Reranking** | Disabled | ~200-400MB when enabled | Keep disabled or use external |
| **Image Generation** | Disabled | Variable | Keep disabled if not needed |

---
## ⚙️ Environment Variables for RAM Reduction

### Offload Embedding to External Service

The biggest RAM saver is using an external embedding engine:

```bash
# Option 1: Use Ollama for embeddings (if you have Ollama running separately)
RAG_EMBEDDING_ENGINE=ollama

# Option 2: Use OpenAI for embeddings
RAG_EMBEDDING_ENGINE=openai
OPENAI_API_KEY=your-api-key
```

### Offload Speech-to-Text

Local Whisper models consume significant RAM:

```bash
# Use OpenAI's Whisper API
AUDIO_STT_ENGINE=openai

# Or use browser-based WebAPI (no external service needed)
AUDIO_STT_ENGINE=webapi
```

### Disable Unused Features

Disable features you don't need to prevent model loading:

```bash
# Disable image generation (prevents loading image models)
ENABLE_IMAGE_GENERATION=False

# Disable code execution (reduces overhead)
ENABLE_CODE_EXECUTION=False

# Disable code interpreter
ENABLE_CODE_INTERPRETER=False
```

### Reduce Background Task Overhead

These settings reduce memory usage from background operations:

```bash
# Disable autocomplete (high resource usage)
ENABLE_AUTOCOMPLETE_GENERATION=False

# Disable automatic title generation
ENABLE_TITLE_GENERATION=False

# Disable tag generation
ENABLE_TAGS_GENERATION=False

# Disable follow-up suggestions
ENABLE_FOLLOW_UP_GENERATION=False
```

### Database and Cache Optimization

```bash
# Disable real-time chat saving (reduces database overhead)
ENABLE_REALTIME_CHAT_SAVE=False

# Reduce thread pool size for low-resource systems
THREAD_POOL_SIZE=10
```

### Vector Database Multitenancy

If using Milvus or Qdrant, enable multitenancy mode to reduce RAM:

```bash
# For Milvus
ENABLE_MILVUS_MULTITENANCY_MODE=True

# For Qdrant
ENABLE_QDRANT_MULTITENANCY_MODE=True
```

---

## 🚀 Recommended Minimal Configuration

For extremely RAM-constrained environments, use this combined configuration:

```bash
# Offload ML models to external services
RAG_EMBEDDING_ENGINE=ollama
AUDIO_STT_ENGINE=openai

# Disable all non-essential features
ENABLE_IMAGE_GENERATION=False
ENABLE_CODE_EXECUTION=False
ENABLE_CODE_INTERPRETER=False
ENABLE_AUTOCOMPLETE_GENERATION=False
ENABLE_TITLE_GENERATION=False
ENABLE_TAGS_GENERATION=False
ENABLE_FOLLOW_UP_GENERATION=False

# Reduce worker overhead
THREAD_POOL_SIZE=10
```
---

## 💡 Additional Tips

- **Monitor Memory Usage**: Use `docker container stats` or `htop` to monitor RAM consumption
- **Restart After Changes**: Environment variable changes require a container restart
- **Fresh Deployments**: Some environment variables only take effect on fresh deployments without an existing `config.json`
- **Consider Alternatives**: For very constrained systems, consider running Open WebUI on a more capable machine and accessing it remotely
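For ongoing monitoring, a small sketch like the following can complement `docker container stats` (the `awk` line reads `/proc/meminfo`, so it is Linux-only; values shown will vary per system):

```shell
# Report available memory in MiB from /proc/meminfo (Linux-only sketch)
awk '/MemAvailable/ {printf "%d MiB available\n", $2/1024}' /proc/meminfo

# For per-container figures, a one-shot docker snapshot works too
# (skipped silently if docker is not installed):
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}" 2>/dev/null || true
```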
---

## Related Guides

- [Improve Local LLM Performance](/tutorials/tips/improve-performance-local) - For optimizing performance without reducing features
- [Environment Variable Configuration](/getting-started/env-configuration) - Complete list of all configuration options
@@ -10,7 +10,7 @@ This tutorial is a community contribution and is not supported by the Open WebUI

:::

> [!WARNING]
> This documentation was created/updated based on version 0.6.41 and updated for recent migrations.

## Open-WebUI Internal SQLite Database
@@ -56,28 +56,31 @@ Here is a complete list of tables in Open-WebUI's SQLite database. The tables ar

| ------- | ---------------- | ------------------------------------------------------------ |
| 01 | auth | Stores user authentication credentials and login information |
| 02 | channel | Manages chat channels and their configurations |
| 03 | channel_file | Links files to channels and messages |
| 04 | channel_member | Tracks user membership and permissions within channels |
| 05 | chat | Stores chat sessions and their metadata |
| 06 | chatidtag | Maps relationships between chats and their associated tags |
| 07 | config | Maintains system-wide configuration settings |
| 08 | document | Stores documents and their metadata for knowledge management |
| 09 | feedback | Captures user feedback and ratings |
| 10 | file | Manages uploaded files and their metadata |
| 11 | folder | Organizes files and content into hierarchical structures |
| 12 | function | Stores custom functions and their configurations |
| 13 | group | Manages user groups and their permissions |
| 14 | group_member | Tracks user membership within groups |
| 15 | knowledge | Stores knowledge base entries and related information |
| 16 | knowledge_file | Links files to knowledge bases |
| 17 | memory | Maintains chat history and context memory |
| 18 | message | Stores individual chat messages and their content |
| 19 | message_reaction | Records user reactions (emojis/responses) to messages |
| 20 | migrate_history | Tracks database schema version and migration records |
| 21 | model | Manages AI model configurations and settings |
| 22 | note | Stores user-created notes and annotations |
| 23 | oauth_session | Manages active OAuth sessions for users |
| 24 | prompt | Stores templates and configurations for AI prompts |
| 25 | tag | Manages tags/labels for content categorization |
| 26 | tool | Stores configurations for system tools and integrations |
| 27 | user | Maintains user profiles and account information |

Note: there are two additional tables in Open-WebUI's SQLite database that are not related to Open-WebUI's core functionality and have therefore been excluded:
@@ -129,6 +132,24 @@ Things to know about the auth table:

| user_id | TEXT | NOT NULL | Reference to the user |
| created_at | BIGINT | - | Timestamp when membership was created |

## Channel File Table

| **Column Name** | **Data Type** | **Constraints** | **Description** |
| --------------- | ------------- | ---------------------------------- | --------------------------------- |
| id | Text | PRIMARY KEY | Unique identifier (UUID) |
| user_id | Text | NOT NULL | Owner of the relationship |
| channel_id | Text | FOREIGN KEY(channel.id), NOT NULL | Reference to the channel |
| file_id | Text | FOREIGN KEY(file.id), NOT NULL | Reference to the file |
| message_id | Text | FOREIGN KEY(message.id), nullable | Reference to associated message |
| created_at | BigInteger | NOT NULL | Creation timestamp |
| updated_at | BigInteger | NOT NULL | Last update timestamp |

Things to know about the channel_file table:

- Unique constraint on (`channel_id`, `file_id`) to prevent duplicate entries
- Foreign key relationships with CASCADE delete
- Indexed on `channel_id`, `file_id`, and `user_id` for performance

## Chat Table

| **Column Name** | **Data Type** | **Constraints** | **Description** |
@@ -258,10 +279,26 @@ Things to know about the function table:

| data | JSON | nullable | Additional group data |
| meta | JSON | nullable | Group metadata |
| permissions | JSON | nullable | Permission configuration |
| user_ids | JSON | nullable | List of member user IDs |
| created_at | BigInteger | - | Creation timestamp |
| updated_at | BigInteger | - | Last update timestamp |

Note: The `user_ids` column has been migrated to the `group_member` table.

## Group Member Table

| **Column Name** | **Data Type** | **Constraints** | **Description** |
| --------------- | ------------- | -------------------------------- | --------------------------------- |
| id | Text | PRIMARY KEY, UNIQUE | Unique identifier (UUID) |
| group_id | Text | FOREIGN KEY(group.id), NOT NULL | Reference to the group |
| user_id | Text | FOREIGN KEY(user.id), NOT NULL | Reference to the user |
| created_at | BigInteger | nullable | Creation timestamp |
| updated_at | BigInteger | nullable | Last update timestamp |

Things to know about the group_member table:

- Unique constraint on (`group_id`, `user_id`) to prevent duplicate memberships
- Foreign key relationships with CASCADE delete to group and user tables

## Knowledge Table

| **Column Name** | **Data Type** | **Constraints** | **Description** |

@@ -276,6 +313,23 @@ Things to know about the function table:

| created_at | BigInteger | - | Creation timestamp |
| updated_at | BigInteger | - | Last update timestamp |

## Knowledge File Table

| **Column Name** | **Data Type** | **Constraints** | **Description** |
| --------------- | ------------- | ------------------------------------ | --------------------------------- |
| id | Text | PRIMARY KEY | Unique identifier (UUID) |
| user_id | Text | NOT NULL | Owner of the relationship |
| knowledge_id | Text | FOREIGN KEY(knowledge.id), NOT NULL | Reference to the knowledge base |
| file_id | Text | FOREIGN KEY(file.id), NOT NULL | Reference to the file |
| created_at | BigInteger | NOT NULL | Creation timestamp |
| updated_at | BigInteger | NOT NULL | Last update timestamp |

Things to know about the knowledge_file table:

- Unique constraint on (`knowledge_id`, `file_id`) to prevent duplicate entries
- Foreign key relationships with CASCADE delete
- Indexed on `knowledge_id`, `file_id`, and `user_id` for performance

The `access_control` field's expected structure:

```python
@@ -644,3 +698,57 @@ erDiagram

json access_control
}
```

---

## Database Encryption with SQLCipher

For enhanced security, Open WebUI supports at-rest encryption for its primary SQLite database using SQLCipher. This is recommended for deployments handling sensitive data where using a larger database like PostgreSQL is not needed.

### Prerequisites

SQLCipher encryption requires additional dependencies that are **not included by default**. Before using this feature, you must install:

- The **SQLCipher system library** (e.g., `libsqlcipher-dev` on Debian/Ubuntu, `sqlcipher` on macOS via Homebrew)
- The **`sqlcipher3-wheels`** Python package (`pip install sqlcipher3-wheels`)

For Docker users, this means building a custom image with these dependencies included.
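A minimal sketch of such a custom image might look like the following. This is an untested illustration: it assumes the official `ghcr.io/open-webui/open-webui:main` base image, a Debian-based layer where `apt-get` and `pip` are available, and the package names listed above.

```dockerfile
# Hypothetical Dockerfile sketch: layer SQLCipher support onto the official image
FROM ghcr.io/open-webui/open-webui:main

# System library plus the Python bindings named in the prerequisites above
RUN apt-get update \
    && apt-get install -y --no-install-recommends libsqlcipher-dev \
    && rm -rf /var/lib/apt/lists/* \
    && pip install sqlcipher3-wheels
```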
### Configuration

To enable encryption, set the following environment variables:

```bash
# Required: Set the database type to use SQLCipher
DATABASE_TYPE=sqlite+sqlcipher

# Required: Set a secure password for database encryption
DATABASE_PASSWORD=your-secure-password
```

When these are set and a full `DATABASE_URL` is **not** explicitly defined, Open WebUI will automatically create and use an encrypted database file at `./data/webui.db`.
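To sanity-check that the file is actually encrypted, one approach (path and password below are illustrative) relies on the fact that a plaintext SQLite file begins with the ASCII header `SQLite format 3`, while an SQLCipher-encrypted file looks like random bytes:

```shell
# Sketch: check whether the database file still has the plaintext SQLite header
DB=./data/webui.db
if [ -f "$DB" ] && head -c 15 "$DB" | grep -q "SQLite format 3"; then
  echo "plaintext (not encrypted)"
else
  echo "no plaintext header found"
fi

# With the sqlcipher CLI installed, the key can also be verified directly:
#   sqlcipher "$DB" "PRAGMA key='your-secure-password'; SELECT count(*) FROM user;"
```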
### Important Notes

:::danger

- The **`DATABASE_PASSWORD`** environment variable is **required** when using `sqlite+sqlcipher`.
- The **`DATABASE_TYPE`** variable tells Open WebUI which connection logic to use. Setting it to `sqlite+sqlcipher` activates the encryption feature.
- **Keep the password secure**, as it is needed to decrypt and access all application data.
- **Losing the password means losing access to all data** in the encrypted database.

:::

### Related Database Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `DATABASE_TYPE` | `None` | Set to `sqlite+sqlcipher` for encrypted SQLite |
| `DATABASE_PASSWORD` | - | Encryption password (required for SQLCipher) |
| `DATABASE_ENABLE_SQLITE_WAL` | `False` | Enable Write-Ahead Logging for better performance |
| `DATABASE_POOL_SIZE` | `None` | Database connection pool size |
| `DATABASE_POOL_TIMEOUT` | `30` | Pool connection timeout in seconds |
| `DATABASE_POOL_RECYCLE` | `3600` | Pool connection recycle time in seconds |

For more details, see the [Environment Variable Configuration](/getting-started/env-configuration) documentation.