Mirror of https://github.com/open-webui/docs.git (synced 2026-03-26 13:18:42 +07:00)

Commit: complete docs overhaul
@@ -1,6 +1,6 @@
 ---
-sidebar_position: 1000
-title: "Deployment & Community Guides"
+sidebar_position: 5
+title: "Community Guides"
 ---
 
 import { TopBanners } from "@site/src/components/TopBanners";
@@ -1,7 +1,7 @@
 {
   "label": "HTTPS",
-  "position": 200,
+  "position": 10,
   "link": {
     "type": "generated-index"
   }
 }
@@ -1,7 +1,7 @@
 {
   "label": "Integrations",
-  "position": 2,
+  "position": 20,
   "link": {
     "type": "generated-index"
   }
 }
@@ -0,0 +1,6 @@
+{
+  "label": "Auth & Identity",
+  "position": 2,
+  "collapsible": true,
+  "collapsed": true
+}
@@ -1,5 +1,5 @@
 ---
-title: "Azure AD Domain Services (LDAPS) Integration"
+title: "Azure AD LDAP"
 ---
 
 :::warning
@@ -1,5 +1,6 @@
 ---
-title: Dual OAuth Configuration (Microsoft & Google)
 slug: /tutorials/tips/dual-oauth-configuration
+title: "Dual OAuth Setup"
+sidebar_label: Dual OAuth Configuration
 sidebar_position: 100
 description: Learn how to configure both Microsoft and Google OAuth providers simultaneously in Open WebUI using an unofficial community workaround.
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 15
-title: "Microsoft Entra ID Group Name Sync"
+title: "Entra ID Group Sync"
 ---
 
 :::warning
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 2
-title: "Backend-Controlled, UI-Compatible API Flow"
+title: "Backend-Controlled API Flow"
 ---
 
 :::warning
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 14
-title: "Setting up with Custom CA Store"
+title: "Custom CA Store"
 ---
 
 :::warning
docs/tutorials/integrations/dev-tools/_category_.json | 6 (Normal file)
@@ -0,0 +1,6 @@
+{
+  "label": "Dev Tools",
+  "position": 4,
+  "collapsible": true,
+  "collapsed": true
+}
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 13
-title: "Continue.dev VS Code Extension with Open WebUI"
+title: "Continue.dev"
 ---
 
 :::warning
@@ -1,115 +1,115 @@
---
sidebar_position: 4100
title: "Firefox AI Chatbot Sidebar"
---

:::warning

This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.

:::

## 🦊 Firefox AI Chatbot Sidebar

# Integrating Open WebUI as a Local AI Chatbot Browser Assistant in Mozilla Firefox

## Prerequisites

Before integrating Open WebUI as an AI chatbot browser assistant in Mozilla Firefox, ensure you have:

- Open WebUI instance URL (local or domain)
- Firefox browser installed
## Enabling AI Chatbot in Firefox

1. Click the hamburger button (three horizontal lines at the top-right corner, just below the `X` button)
2. Open Firefox settings
3. Click on the `Firefox Labs` section
4. Toggle on `AI Chatbot`

Alternatively, you can enable AI Chatbot through the `about:config` page (described in the next section).
## Configuring about:config Settings

1. Type `about:config` in the Firefox address bar
2. Click `Accept the Risk and Continue`
3. Search for `browser.ml.chat.enabled` and toggle it to `true` if it's not already enabled through Firefox Labs
4. Search for `browser.ml.chat.hideLocalhost` and toggle it to `false`
### browser.ml.chat.prompts.{#}

To add custom prompts, follow these steps:

1. Search for `browser.ml.chat.prompts.{#}` (replace `{#}` with a number, e.g., `0`, `1`, `2`, etc.)
2. Click the `+` button to add a new prompt
3. Enter the prompt ID, value, and label (e.g., `{"id":"My Prompt", "value": "This is my custom prompt.", "label": "My Prompt"}`)
4. Repeat the process to add more prompts as desired
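The preference in step 3 must be a single-line JSON string. As a minimal sketch, a valid value can be generated programmatically (the prompt text and ID below are made-up placeholders; the `id`/`value`/`label` field names follow the example above):

```python
import json

# Hypothetical custom prompt for a browser.ml.chat.prompts.{#} preference;
# the field names ("id", "value", "label") follow the example in step 3.
prompt = {
    "id": "Summarize",
    "value": "Summarize the selected text in three bullet points.",
    "label": "Summarize",
}

# about:config expects the value as a single-line JSON string.
print(json.dumps(prompt))
```

Paste the printed string as the preference value in `about:config`.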
### browser.ml.chat.provider

1. Search for `browser.ml.chat.provider`
2. Enter your Open WebUI instance URL, including any optional parameters (e.g., `https://my-open-webui-instance.com/?model=browser-productivity-assistant&temporary-chat=true&tools=jina_web_scrape`)
## URL Parameters for Open WebUI

The following URL parameters can be used to customize your Open WebUI instance:

### Models and Model Selection

- `models`: Specify multiple models (comma-separated list) for the chat session (e.g., `/?models=model1,model2`)
- `model`: Specify a single model for the chat session (e.g., `/?model=model1`)

### YouTube Transcription

- `youtube`: Provide a YouTube video ID to transcribe the video in the chat (e.g., `/?youtube=VIDEO_ID`)

### Web Search

- `web-search`: Enable web search functionality by setting this parameter to `true` (e.g., `/?web-search=true`)

### Tool Selection

- `tools` or `tool-ids`: Specify a comma-separated list of tool IDs to activate in the chat (e.g., `/?tools=tool1,tool2` or `/?tool-ids=tool1,tool2`)

### Call Overlay

- `call`: Enable a video or call overlay in the chat interface by setting this parameter to `true` (e.g., `/?call=true`)

### Initial Query Prompt

- `q`: Set an initial query or prompt for the chat (e.g., `/?q=Hello%20there`)

### Temporary Chat Sessions

- `temporary-chat`: Mark the chat as a temporary session by setting this parameter to `true` (e.g., `/?temporary-chat=true`)
  - *Note: Document processing is frontend-only in temporary chats. Complex files requiring backend parsing may not work.*

See https://docs.openwebui.com/features/chat-features/url-params for more info on URL parameters and how to use them.
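The parameters above can be combined into a single provider URL with standard URL encoding; a minimal sketch (the instance hostname and all parameter values are placeholders, not defaults):

```python
from urllib.parse import urlencode

# Placeholder instance URL and parameter values; substitute your own.
base_url = "https://my-open-webui-instance.com/"
params = {
    "model": "browser-productivity-assistant",  # single model for the session
    "web-search": "true",                       # enable web search
    "temporary-chat": "true",                   # do not persist the chat
    "q": "Hello there",                         # initial prompt; urlencode escapes the space
}

provider_url = base_url + "?" + urlencode(params)
print(provider_url)
```

The printed URL can then be pasted into the `browser.ml.chat.provider` preference described earlier.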
## Additional about:config Settings

The following `about:config` settings can be adjusted for further customization:

- `browser.ml.chat.shortcuts`: Enable custom shortcuts for the AI chatbot sidebar
- `browser.ml.chat.shortcuts.custom`: Enable custom shortcut keys for the AI chatbot sidebar
- `browser.ml.chat.shortcuts.longPress`: Set the long press delay for shortcut keys
- `browser.ml.chat.sidebar`: Enable the AI chatbot sidebar
- `browser.ml.checkForMemory`: Check for available memory before loading models
- `browser.ml.defaultModelMemoryUsage`: Set the default memory usage for models
- `browser.ml.enable`: Enable the machine learning features in Firefox
- `browser.ml.logLevel`: Set the log level for machine learning features
- `browser.ml.maximumMemoryPressure`: Set the maximum memory pressure threshold
- `browser.ml.minimumPhysicalMemory`: Set the minimum physical memory required
- `browser.ml.modelCacheMaxSize`: Set the maximum size of the model cache
- `browser.ml.modelCacheTimeout`: Set the timeout for the model cache
- `browser.ml.modelHubRootUrl`: Set the root URL for the model hub
- `browser.ml.modelHubUrlTemplate`: Set the URL template for the model hub
- `browser.ml.queueWaitInterval`: Set the interval for queue wait
- `browser.ml.queueWaitTimeout`: Set the timeout for queue wait

## Accessing the AI Chatbot Sidebar

To access the AI chatbot sidebar, use one of the following methods:

- Press `CTRL+B` to open the bookmarks sidebar and switch to AI Chatbot
- Press `CTRL+Alt+X` to open the AI chatbot sidebar directly
@@ -1,4 +1,5 @@
 ---
+slug: /tutorials/integrations/jupyter
 sidebar_position: 321
 title: "Jupyter Notebook Integration"
 ---
@@ -0,0 +1,6 @@
+{
+  "label": "LLM Providers",
+  "position": 1,
+  "collapsible": true,
+  "collapsed": true
+}
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 1
-title: "Run DeepSeek R1 Dynamic 1.58-bit with Llama.cpp"
+title: "DeepSeek R1 Dynamic"
 ---
 
 A huge shoutout to **UnslothAI** for their incredible efforts! Thanks to their hard work, we can now run the **full DeepSeek-R1** 671B parameter model in its dynamic 1.58-bit quantized form (compressed to just 131GB) on **Llama.cpp**! And the best part? You no longer have to despair about needing massive enterprise-class GPUs or servers — it’s possible to run this model on your personal machine (albeit slowly for most consumer hardware).
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 11
-title: "Local LLM Setup with IPEX-LLM on Intel GPU"
+title: "IPEX-LLM (Intel GPU)"
 ---
 
 :::warning
@@ -11,7 +11,7 @@ This tutorial is a community contribution and is not supported by the Open WebUI
 
 :::note
 
-This guide is verified with Open WebUI setup through [Manual Installation](../../getting-started/index.md).
+This guide is verified with Open WebUI setup through [Manual Installation](/getting-started/).
 
 :::
 
docs/tutorials/integrations/monitoring/_category_.json | 6 (Normal file)
@@ -0,0 +1,6 @@
+{
+  "label": "Monitoring",
+  "position": 3,
+  "collapsible": true,
+  "collapsed": true
+}
@@ -1,5 +1,5 @@
 ---
-title: "Monitor your LLM requests with Helicone"
+title: "Helicone"
 sidebar_position: 19
 ---
 
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 20
-title: "Monitoring and Debugging with Langfuse"
+title: "Langfuse"
 ---
 
 ## Langfuse Integration with Open WebUI
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 32
-title: "Integrate with OneDrive & SharePoint"
+title: "OneDrive & SharePoint"
 ---
 
 :::info
@@ -1,7 +1,7 @@
 {
   "label": "Maintenance",
-  "position": 5,
+  "position": 30,
   "link": {
     "type": "generated-index"
   }
 }
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 310
-title: "Exporting and Importing Database"
+title: "Exporting & Importing DB"
 ---
 
 :::warning
@@ -1,4 +1,5 @@
 ---
+slug: /tutorials/offline-mode
 sidebar_position: 300
 title: "Offline Mode"
 ---
@@ -9,7 +10,7 @@ import { TopBanners } from "@site/src/components/TopBanners";
 
 :::warning
 
-This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](../contributing.mdx).
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/contributing).
 
 :::
 
@@ -77,7 +78,7 @@ Consider if you need to start the application offline from the beginning of your
 
 ### I: Speech-To-Text
 
-The local `whisper` installation does not include the model by default. In this regard, you can follow the [guide](/features/audio/speech-to-text/stt-config.md) only partially if you want to use an external model/provider. To use the local `whisper` application, you must first download the model of your choice (e.g. [Huggingface - Systran](https://huggingface.co/Systran)).
+The local `whisper` installation does not include the model by default. In this regard, you can follow the [guide](/features/audio/speech-to-text/stt-config) only partially if you want to use an external model/provider. To use the local `whisper` application, you must first download the model of your choice (e.g. [Huggingface - Systran](https://huggingface.co/Systran)).
 
 ```python
 from faster_whisper import WhisperModel
@@ -1,7 +1,7 @@
 {
   "label": "Tips & Tricks",
-  "position": 0,
+  "position": 50,
   "link": {
     "type": "generated-index"
   }
 }
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 21
-title: "One-Click Ollama + Open WebUI Launcher"
+title: "Ollama Launcher"
 ---
 
 :::warning
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 3
-title: "Open WebUI RAG Tutorial"
+title: "RAG Tutorial"
 ---
 
 :::warning
@@ -1,6 +1,6 @@
 ---
 sidebar_position: 11
-title: "SQLite Database Overview"
+title: "Database Schema"
 ---
 
 :::warning