complete docs overhaul

This commit is contained in:
DrMelone
2026-02-14 01:02:31 +01:00
parent 6e35ee70ff
commit c10c8d15ec
186 changed files with 1427 additions and 1348 deletions

View File

@@ -0,0 +1,6 @@
{
"label": "Dev Tools",
"position": 4,
"collapsible": true,
"collapsed": true
}

View File

@@ -0,0 +1,110 @@
---
sidebar_position: 16
title: "Browser Search Engine"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/contributing).
:::
# Browser Search Engine Integration
Open WebUI can integrate directly with your web browser. This tutorial guides you through setting up Open WebUI as a custom search engine, so you can run queries straight from your browser's address bar.
## Setting Up Open WebUI as a Search Engine
### Prerequisites
Before you begin, ensure that:
- You have Chrome or another supported browser installed.
- The `WEBUI_URL` environment variable is set correctly, either using Docker environment variables or in the `.env` file as specified in the [Getting Started](https://docs.openwebui.com/getting-started/env-configuration) guide.
### Step 1: Set the WEBUI_URL Environment Variable
Setting the `WEBUI_URL` environment variable ensures your browser knows where to direct queries.
#### Using Docker Environment Variables
If you are running Open WebUI using Docker, you can set the environment variable in your `docker run` command:
```bash
docker run -d \
-p 3000:8080 \
--add-host=host.docker.internal:host-gateway \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
-e WEBUI_URL="https://<your-open-webui-url>" \
ghcr.io/open-webui/open-webui:main
```
Alternatively, you can add the variable to your `.env` file:
```plaintext
WEBUI_URL=https://<your-open-webui-url>
```
### Step 2: Add Open WebUI as a Custom Search Engine
#### For Chrome
1. Open Chrome and navigate to **Settings**.
2. Select **Search engine** from the sidebar, then click on **Manage search engines**.
3. Click **Add** to create a new search engine.
4. Fill in the details as follows:
- **Search engine**: Open WebUI Search
- **Keyword**: webui (or any keyword you prefer)
- **URL with %s in place of query**:
```txt
https://<your-open-webui-url>/?q=%s
```
5. Click **Add** to save the configuration.
#### For Firefox
1. Go to Open WebUI in Firefox.
2. Expand the address bar by clicking on it.
3. Click the plus icon enclosed in a green circle at the bottom of the expanded address bar. This adds Open WebUI to the search engines in your preferences.
Alternatively:
1. Go to Open WebUI in Firefox.
2. Right-click on the address bar.
3. Select "Add Open WebUI" (or similar) from the context menu.
### Optional: Using Specific Models
If you wish to utilize a specific model for your search, modify the URL format to include the model ID:
```txt
https://<your-open-webui-url>/?models=<model_id>&q=%s
```
:::note
The model ID must be URL-encoded: special characters such as spaces or slashes have to be escaped (e.g., `my model` becomes `my%20model`).
:::
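If you're unsure how an ID will be encoded, `urllib.parse.quote` does exactly this. A minimal sketch (the base URL and model ID below are placeholders, not values from your instance):

```python
# Sketch: URL-encode a model ID and build the full search-engine URL.
# Base URL and model ID are illustrative placeholders.
from urllib.parse import quote

def build_search_url(base_url: str, model_id: str, query: str) -> str:
    """Build the custom search-engine URL with an encoded model ID and query."""
    return f"{base_url}/?models={quote(model_id)}&q={quote(query)}"

print(build_search_url("http://localhost:3000", "my model", "hello world"))
# → http://localhost:3000/?models=my%20model&q=hello%20world
```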
## Example Usage
Once the search engine is set up, you can perform searches directly from the address bar. Simply type your chosen keyword followed by your query:
```txt
webui your search query
```
This command will redirect you to the Open WebUI interface with your search results.
## Troubleshooting
If you encounter any issues, check the following:
- Ensure the `WEBUI_URL` is correctly configured and points to a valid Open WebUI instance.
- Double-check that the search engine URL format is correctly entered in your browser settings.
- Confirm your internet connection is active and that the Open WebUI service is running smoothly.

View File

@@ -0,0 +1,186 @@
---
sidebar_position: 13
title: "Continue.dev"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/contributing).
:::
# Integrating Continue.dev VS Code Extension with Open WebUI
## Download Extension
You can download the VS Code extension from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue) or install it directly from the Extensions view within VS Code by searching for `continue`.
Once installed, you can access the extension via the `continue` tab in the sidebar of VS Code.
**VS Code side bar icon:**
![continue.dev vscode icon](/images/tutorials/continue-dev/continue_dev_vscode_icon.png)
---
## Setup
Click on the assistant selector to the right of the main chat input. Then hover over `Local Assistant` and click on the settings icon (⚙️).
This will open the `config.yaml` file in your editor. Here you can change the settings of your `Local Assistant`.
![continue.dev chat input](/images/tutorials/continue-dev/continue_dev_extension_input_field.png)
:::info
Currently the `ollama` provider does not support authentication, so we cannot use it with Open WebUI.
However, both Ollama and Open WebUI are compatible with the OpenAI API spec. Read more about the specification in the [Ollama blog post on OpenAI compatibility](https://ollama.com/blog/openai-compatibility).
We can instead set up continue.dev to use the `openai` provider, which allows us to use Open WebUI's authentication token.
:::
### Example config
Below is an example config using Llama3 as the model with a local Open WebUI setup.
```yaml
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: Llama3
    provider: openai
    model: Meta-Llama-3-8B-Instruct-Q4_K_M.gguf
    env:
      useLegacyCompletionsEndpoint: false
    apiBase: http://localhost:3000/api
    apiKey: YOUR_OPEN_WEBUI_API_KEY
    roles:
      - chat
      - edit
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase
```
---
### Miscellaneous Configuration Settings
These values are needed by the extension to work properly. Find more information in the [official config guide](https://docs.continue.dev/reference).
```yaml
name: Local Assistant
version: 1.0.0
schema: v1
```
The context section provides additional information to the models. Find more information in the [official config guide](https://docs.continue.dev/reference#context) and in the [context provider guide](https://docs.continue.dev/customize/custom-providers).
```yaml
context:
- provider: code
- provider: docs
- provider: diff
- provider: terminal
- provider: problems
- provider: folder
- provider: codebase
```
---
### Models
The models section is where you specify all models you want to add. Find more information in the [official models guide](https://docs.continue.dev/reference#models).
```yaml
models:
- ...
```
---
### Name
Sets the name for the model you want to use. This will be displayed within the chat input of the extension.
```yaml
name: Llama3
```
![continue.dev chat input](/images/tutorials/continue-dev/continue_dev_extension_input_field.png)
---
### Provider
Specifies the method used to communicate with the API, which in our case is the OpenAI API endpoint provided by Open WebUI.
```yaml
provider: openai
```
---
### Model
This is the actual name of your model in Open WebUI. Navigate to `Admin Panel` > `Settings` > `Models`, and then click on your preferred LLM.
Below the user-given name, you'll find the actual model name.
```yaml
model: Meta-Llama-3-8B-Instruct-Q4_K_M.gguf
```
---
### Legacy completions endpoint
This setting is not needed for Open WebUI, though more information is available in the [original guide](https://platform.openai.com/docs/guides/completions/completions-api-legacy).
```yaml
env:
  useLegacyCompletionsEndpoint: false
```
---
### APIBase
This is a crucial step: you need to direct the continue.dev extension requests to your Open WebUI instance.
Either use an actual domain name if the instance is hosted somewhere (e.g., `https://example.com/api`) or your localhost setup (e.g., `http://localhost:3000/api`).
You can find more information about the URLs in the [API Endpoints guide](/getting-started/api-endpoints).
```yaml
apiBase: http://localhost:3000/api
```
---
### API Key
To authenticate with your Open WebUI instance, you'll need to generate an API key.
Follow the instructions in [this guide](https://docs.openwebui.com/getting-started/advanced-topics/monitoring#authentication-setup-for-api-key-) to create it.
```yaml
apiKey: YOUR_OPEN_WEBUI_API_KEY
```
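Before wiring the key into `config.yaml`, you can check it against the OpenAI-compatible models endpoint that the `openai` provider will call. A sketch, assuming a local instance on port 3000; the base URL and key are placeholders:

```python
# Sketch: verify an Open WebUI API key against the models endpoint.
# Base URL and key are placeholders -- substitute your own values.
import json
import urllib.request

def build_models_request(api_base: str, api_key: str) -> urllib.request.Request:
    """Build a GET <api_base>/models request with Bearer authentication."""
    return urllib.request.Request(
        f"{api_base}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request("http://localhost:3000/api", "YOUR_OPEN_WEBUI_API_KEY")
# Uncomment to send the request against a running instance:
# with urllib.request.urlopen(req, timeout=10) as resp:
#     print([m["id"] for m in json.load(resp)["data"]])
```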
---
### Roles
Roles determine which tasks the extension may use your model for. To start, you can choose `chat` and `edit`.
You can find more information about roles in the [official roles guide](https://docs.continue.dev/customize/model-roles/intro).
```yaml
roles:
- chat
- edit
```
The setup is now complete, and you can interact with your model(s) via the chat input. Find more information about the features and usage of the continue.dev plugin in the [official documentation](https://docs.continue.dev/getting-started/overview).

View File

@@ -0,0 +1,115 @@
---
sidebar_position: 4100
title: "Firefox AI Chatbot Sidebar"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/contributing).
:::
# 🦊 Integrating Open WebUI as a Local AI Chatbot Browser Assistant in Mozilla Firefox
## Prerequisites
Before integrating Open WebUI as an AI chatbot browser assistant in Mozilla Firefox, ensure you have:
- Open WebUI instance URL (local or domain)
- Firefox browser installed
## Enabling AI Chatbot in Firefox
1. Click the hamburger menu button (three horizontal lines in the top-right corner, just below the `X` button)
2. Open the Firefox settings
3. Click on the `Firefox Labs` section
4. Toggle on `AI Chatbot`
Alternatively, you can enable AI Chatbot through the `about:config` page (described in the next section).
## Configuring about:config Settings
1. Type `about:config` in the Firefox address bar
2. Click `Accept the Risk and Continue`
3. Search for `browser.ml.chat.enabled` and toggle it to `true` if it's not already enabled through Firefox Labs
4. Search for `browser.ml.chat.hideLocalhost` and toggle it to `false`
### browser.ml.chat.prompts.{#}
To add custom prompts, follow these steps:
1. Search for `browser.ml.chat.prompts.{#}` (replace `{#}` with a number, e.g., `0`, `1`, `2`, etc.)
2. Click the `+` button to add a new prompt
3. Enter the prompt label, value, and ID (e.g., `{"id":"My Prompt", "value": "This is my custom prompt.", "label": "My Prompt"}`)
4. Repeat the process to add more prompts as desired
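The prompt value is a small JSON object, so quoting mistakes are easy to make when typing it by hand. A sketch that generates a well-formed value with `json.dumps` (the prompt content here is just the example from above):

```python
# Sketch: generate a valid JSON value for a browser.ml.chat.prompts.{#} entry.
import json

prompt = {
    "id": "My Prompt",                     # unique identifier for the prompt
    "value": "This is my custom prompt.",  # text sent to the chatbot
    "label": "My Prompt",                  # label shown in Firefox's menu
}
print(json.dumps(prompt))
# → {"id": "My Prompt", "value": "This is my custom prompt.", "label": "My Prompt"}
```

Paste the printed string directly into the `about:config` preference value.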
### browser.ml.chat.provider
1. Search for `browser.ml.chat.provider`
2. Enter your Open WebUI instance URL, including any optional parameters (e.g., `https://my-open-webui-instance.com/?model=browser-productivity-assistant&temporary-chat=true&tools=jina_web_scrape`)
## URL Parameters for Open WebUI
The following URL parameters can be used to customize your Open WebUI instance:
### Models and Model Selection
- `models`: Specify multiple models (comma-separated list) for the chat session (e.g., `/?models=model1,model2`)
- `model`: Specify a single model for the chat session (e.g., `/?model=model1`)
### YouTube Transcription
- `youtube`: Provide a YouTube video ID to transcribe the video in the chat (e.g., `/?youtube=VIDEO_ID`)
### Web Search
- `web-search`: Enable web search functionality by setting this parameter to `true` (e.g., `/?web-search=true`)
### Tool Selection
- `tools` or `tool-ids`: Specify a comma-separated list of tool IDs to activate in the chat (e.g., `/?tools=tool1,tool2` or `/?tool-ids=tool1,tool2`)
### Call Overlay
- `call`: Enable a video or call overlay in the chat interface by setting this parameter to `true` (e.g., `/?call=true`)
### Initial Query Prompt
- `q`: Set an initial query or prompt for the chat (e.g., `/?q=Hello%20there`)
### Temporary Chat Sessions
- `temporary-chat`: Mark the chat as a temporary session by setting this parameter to `true` (e.g., `/?temporary-chat=true`)
- *Note: Document processing is frontend-only in temporary chats. Complex files requiring backend parsing may not work.*
See the [URL parameters guide](https://docs.openwebui.com/features/chat-features/url-params) for more info on URL parameters and how to use them.
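A provider URL combining several of these parameters can be assembled with `urllib.parse.urlencode`, which handles any escaping for you. A sketch; the instance URL, model, and tool ID below are illustrative placeholders:

```python
# Sketch: assemble a browser.ml.chat.provider URL from URL parameters.
# Instance URL, model, and tool ID are illustrative placeholders.
from urllib.parse import urlencode

base = "https://my-open-webui-instance.com/"
params = {
    "model": "browser-productivity-assistant",
    "temporary-chat": "true",
    "tools": "jina_web_scrape",
}
print(f"{base}?{urlencode(params)}")
# → https://my-open-webui-instance.com/?model=browser-productivity-assistant&temporary-chat=true&tools=jina_web_scrape
```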
## Additional about:config Settings
The following `about:config` settings can be adjusted for further customization:
- `browser.ml.chat.shortcuts`: Enable custom shortcuts for the AI chatbot sidebar
- `browser.ml.chat.shortcuts.custom`: Enable custom shortcut keys for the AI chatbot sidebar
- `browser.ml.chat.shortcuts.longPress`: Set the long press delay for shortcut keys
- `browser.ml.chat.sidebar`: Enable the AI chatbot sidebar
- `browser.ml.checkForMemory`: Check for available memory before loading models
- `browser.ml.defaultModelMemoryUsage`: Set the default memory usage for models
- `browser.ml.enable`: Enable the machine learning features in Firefox
- `browser.ml.logLevel`: Set the log level for machine learning features
- `browser.ml.maximumMemoryPressure`: Set the maximum memory pressure threshold
- `browser.ml.minimumPhysicalMemory`: Set the minimum physical memory required
- `browser.ml.modelCacheMaxSize`: Set the maximum size of the model cache
- `browser.ml.modelCacheTimeout`: Set the timeout for model cache
- `browser.ml.modelHubRootUrl`: Set the root URL for the model hub
- `browser.ml.modelHubUrlTemplate`: Set the URL template for the model hub
- `browser.ml.queueWaitInterval`: Set the interval for queue wait
- `browser.ml.queueWaitTimeout`: Set the timeout for queue wait
## Accessing the AI Chatbot Sidebar
To access the AI chatbot sidebar, use one of the following methods:
- Press `CTRL+B` to open the bookmarks sidebar and switch to AI Chatbot
- Press `CTRL+Alt+X` to open the AI chatbot sidebar directly

View File

@@ -0,0 +1,125 @@
---
title: "iTerm2 AI Integration"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/contributing).
:::
# Use your Open WebUI models with iTerm2
You can use your Open WebUI models within the iTerm2 AI plugin. This guide shows you how to set up the necessary configuration.
## Why use the iTerm2 AI plugin?
Whenever you forget a command or need a quick bash script for a repetitive task, you probably already turn to an AI for help. The iTerm2 AI plugin streamlines this workflow by letting you send such requests to your chosen AI provider, or to your Open WebUI instance, directly from the terminal.
## Why connect to your Open WebUI instance?
Open WebUI provides a simple and straightforward way to interact with your LLMs via its [API Endpoints](/getting-started/api-endpoints). This is particularly beneficial if you are running your own LLMs locally. Furthermore, you can leverage all your implemented features, monitoring, and other capabilities.
## Prerequisites
### 1. Download the iTerm2 AI plugin
If you haven't already installed the iTerm2 AI plugin, you'll need to download it first from [their page](https://iterm2.com/ai-plugin.html).
Unzip the file and move the application into your **Applications** folder.
### 2. Generate your Open WebUI API key
To authenticate with your Open WebUI instance, you'll need to generate an API key.
Follow the instructions in [this guide](https://docs.openwebui.com/getting-started/advanced-topics/monitoring#authentication-setup-for-api-key-) to create it.
## Configuration
Open your iTerm2 terminal and navigate to **Settings** (⌘,) from the **iTerm2** menu, then select the **AI** tab.
![iterm2 menu before setup](/images/tutorials/iterm2/iterm2_ai_plugin_before.png)
### Verify the installed plugin
Once the iTerm2 AI plugin is installed, verify that the **Plugin** section shows `Plugin installed and working ✅`.
---
### Give consent for generative AI features
Under the **Consent** section, check the box for `Enable generative AI features` to agree.
---
### Set API key
Enter your previously created Open WebUI API token into the **API Key** field.
---
### Optional: customize your prompt
If you want a specialized prompt sent to your LLM, feel free to edit the `Prompt template`.
**Original prompt example:**
```text
Return commands suitable for copy/pasting into \(shell) on \(uname). Do
NOT include commentary NOR Markdown triple-backtick code blocks as your
whole response will be copied into my terminal automatically.
The script should do this: \(ai.prompt)
```
You can read more about the iTerm2 prompt in the [iTerm2 documentation](https://gitlab.com/gnachman/iterm2/-/wikis/AI-Prompt).
---
### Select Your LLM
Since the iTerm2 AI plugin does not automatically list your custom models, you'll need to add your preferred one manually.
In your Open WebUI instance, navigate to `Admin Panel` > `Settings` > `Models`, and then click on your preferred LLM.
Below the user-given name, you'll find the actual model name that you need to enter into iTerm2 (e.g., name: Gemma3 - model name: `/models/gemma3-27b-it-Q4_K_M.gguf`).
---
### Adjust the Tokens
Set your preferred token limit here. Typically, your inference tool will already enforce a limit of its own.
---
### Adjust the URL
This is a crucial step: you need to direct the iTerm2 AI plugin requests to your Open WebUI instance.
Either use an actual domain name if the instance is hosted somewhere (e.g., `https://example.com/api/chat/completions`) or your localhost setup (e.g., `http://localhost:8080/api/chat/completions`).
You can find more information about the URLs in the [API Endpoints guide](/getting-started/api-endpoints).
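The plugin ultimately issues a POST to this chat completions URL. To sanity-check your URL and key outside iTerm2, a sketch of an equivalent request is shown below; the URL, key, and model name are placeholders:

```python
# Sketch: a chat completions request like the one the iTerm2 AI plugin sends.
# URL, API key, and model name are illustrative placeholders.
import json
import urllib.request

def build_completion_request(url: str, api_key: str, model: str, prompt: str):
    """Build a POST request to the chat completions endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_completion_request(
    "http://localhost:8080/api/chat/completions",
    "YOUR_OPEN_WEBUI_API_KEY",
    "gemma3-27b-it-Q4_K_M.gguf",
    "list files by size",
)
# Uncomment to send against a running instance:
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```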
---
### Legacy Completions API
This setting is not needed for Open WebUI, though more information is available in the [original guide](https://platform.openai.com/docs/guides/completions/completions-api-legacy).
---
After setup, the **AI** section will look like this:
![iterm2 menu after setup](/images/tutorials/iterm2/iterm2_ai_plugin_after.png)
## Usage
Within your terminal session, open the prompt input field by pressing **command + y** (⌘y). Write your prompt and send it by clicking the **OK** button or by using **shift + enter** (⇧⌤).
![iterm2 prompt window](/images/tutorials/iterm2/iterm2_ai_plugin_prompt_window.png)
---
This will lead you back to the terminal with an additional window bound to the session frame. The result of your query will be displayed within this overlay. To send the command to your terminal, move your cursor to the target line and use **shift + enter** (⇧⌤).
:::info
There can be more than one line of response. If so, you can navigate with the arrow keys to edit the commands as needed.
:::
![iterm2 prompt window](/images/tutorials/iterm2/iterm2_ai_plugin_result_window.png)

View File

@@ -0,0 +1,160 @@
---
slug: /tutorials/integrations/jupyter
sidebar_position: 321
title: "Jupyter Notebook Integration"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](/contributing).
:::
:::warning
This documentation was created based on the current version (0.5.16) and is constantly being updated.
:::
# Jupyter Notebook Integration
Starting in v0.5.11, Open-WebUI released a new feature called `Jupyter Notebook Support in Code Interpreter`. This feature allows you to integrate Open-WebUI with Jupyter. There have already been several improvements to this feature over the subsequent releases, so review the release notes carefully.
This tutorial walks you through the basics of setting up the connection between the two services.
- [See v0.5.11 Release Notes](https://github.com/open-webui/open-webui/releases/tag/v0.5.11)
- [See v0.5.14 Release Notes](https://github.com/open-webui/open-webui/releases/tag/v0.5.14)
## What are Jupyter Notebooks
Jupyter Notebook is an open-source web application that allows users to create and share documents containing live code, equations, visualizations, and narrative text. It's particularly popular in data science, scientific computing, and education because it enables users to combine executable code (in languages like Python, R, or Julia) with explanatory text, images, and interactive visualizations all in one document. Jupyter Notebooks are especially useful for data analysis and exploration because they allow users to execute code in small, manageable chunks while documenting their thought process and findings along the way. This format makes it easy to experiment, debug code, and create comprehensive, shareable reports that demonstrate both the analysis process and results.
See Jupyter's website for more info: [Project Jupyter](https://jupyter.org/)
## Step 0: Configuration Summary
Here is the target configuration we're going to set up through this tutorial.
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-code-execution.png)
## Step 1: Launch Open-WebUI and Jupyter
To accomplish this, I used `docker-compose` to launch a stack that includes both services, along with my LLMs, but this should also work if you run each Docker container separately.
```yaml title="docker-compose.yml"
version: "3.8"
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
  jupyter:
    image: jupyter/minimal-notebook:latest
    container_name: jupyter-notebook
    ports:
      - "8888:8888"
    volumes:
      - jupyter_data:/home/jovyan/work
    environment:
      - JUPYTER_ENABLE_LAB=yes
      - JUPYTER_TOKEN=123456
volumes:
  open-webui:
  jupyter_data:
```
You can launch the above stack by running the below command in the directory where the `docker-compose` file is saved:
```bash title="Run docker-compose"
docker-compose up -d
```
You should now be able to access both services at the following URLs:
| Service | URL |
| ---------- | ----------------------- |
| Open-WebUI | `http://localhost:3000` |
| Jupyter | `http://localhost:8888` |
When accessing the Jupyter service, you will need the `JUPYTER_TOKEN` defined above. For this tutorial, I've picked a dummy token value of `123456`.
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-token.png)
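You can also verify the token from the command line against Jupyter's REST API, which accepts the token in an `Authorization: token ...` header. A sketch, assuming the stack above is running locally:

```python
# Sketch: build an authenticated request for Jupyter's /api/status endpoint.
# Port and token match the docker-compose example above.
import json
import urllib.request

def jupyter_status_request(base_url: str, token: str) -> urllib.request.Request:
    """Build a GET /api/status request using Jupyter token authentication."""
    return urllib.request.Request(
        f"{base_url}/api/status",
        headers={"Authorization": f"token {token}"},
    )

req = jupyter_status_request("http://localhost:8888", "123456")
# Uncomment with the stack running to see server status:
# with urllib.request.urlopen(req, timeout=5) as resp:
#     print(json.load(resp))
```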
## Step 2: Configure Code Execution for Jupyter
Now that we have Open-WebUI and Jupyter running, we need to configure Open-WebUI's Code Execution to use Jupyter under Admin Panel -> Settings -> Code Execution. Since Open-WebUI is constantly releasing and improving this feature, I recommend always reviewing the possible configurations in the [`configs.py` file](https://github.com/open-webui/open-webui/blob/6fedd72e3973e1d13c9daf540350cd822826bf27/backend/open_webui/routers/configs.py#L72) for the latest and greatest. As of v0.5.16, this includes the following:
| Open-WebUI Env Var | Value |
| ------------------------------------- | -------------------------------- |
| `ENABLE_CODE_INTERPRETER` | True |
| `CODE_EXECUTION_ENGINE` | jupyter |
| `CODE_EXECUTION_JUPYTER_URL` | http://host.docker.internal:8888 |
| `CODE_EXECUTION_JUPYTER_AUTH` | token |
| `CODE_EXECUTION_JUPYTER_AUTH_TOKEN` | 123456 |
| `CODE_EXECUTION_JUPYTER_TIMEOUT` | 60 |
| `CODE_INTERPRETER_ENGINE` | jupyter |
| `CODE_INTERPRETER_JUPYTER_URL` | http://host.docker.internal:8888 |
| `CODE_INTERPRETER_JUPYTER_AUTH` | token |
| `CODE_INTERPRETER_JUPYTER_AUTH_TOKEN` | 123456 |
| `CODE_INTERPRETER_JUPYTER_TIMEOUT` | 60 |
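Instead of setting these in the Admin Panel, the same values can be supplied as environment variables on the `open-webui` service from the compose file in Step 1. A sketch of that fragment (the token must match `JUPYTER_TOKEN`; the rest of the service definition is unchanged):

```yaml
  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    environment:
      - ENABLE_CODE_INTERPRETER=True
      - CODE_EXECUTION_ENGINE=jupyter
      - CODE_EXECUTION_JUPYTER_URL=http://host.docker.internal:8888
      - CODE_EXECUTION_JUPYTER_AUTH=token
      - CODE_EXECUTION_JUPYTER_AUTH_TOKEN=123456
      - CODE_EXECUTION_JUPYTER_TIMEOUT=60
      - CODE_INTERPRETER_ENGINE=jupyter
      - CODE_INTERPRETER_JUPYTER_URL=http://host.docker.internal:8888
      - CODE_INTERPRETER_JUPYTER_AUTH=token
      - CODE_INTERPRETER_JUPYTER_AUTH_TOKEN=123456
      - CODE_INTERPRETER_JUPYTER_TIMEOUT=60
```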
## Step 3: Test the Connection
To start, let's confirm what's in our Jupyter directory. As you can see from the image below, we only have an empty `work` folder.
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-empty.png)
### Create a CSV
Let's run our first prompt. Make sure you've selected the `Code Execution` button.
```txt
Prompt: Create two CSV files using fake data. The first CSV should be created using vanilla python and the second CSV should be created using the pandas library. Name the CSVs data1.csv and data2.csv
```
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-create-csv.png)
We can see the CSVs were created and are now accessible within Jupyter.
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-view-csv.png)
### Create a Visualization
Let's run our second prompt. Again, make sure you've selected the `Code Execution` button.
```txt
Prompt: Create several visualizations in python using matplotlib and seaborn and save them to jupyter
```
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-create-viz.png)
We can see the visualizations were created and are now accessible within Jupyter.
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-view-viz.png)
### Create a Notebook
Let's run our last prompt together. In this prompt, we'll create an entirely new notebook using just a prompt.
```txt
Prompt: Write python code to read and write json files and save it to my notebook called notebook.ipynb
```
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-create-notebook.png)
We can see the notebook was created and is now accessible within Jupyter.
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-view-notebook.png)
## Note about workflow
While testing this feature, I noticed several times that Open-WebUI would not automatically save the code or output generated within Open-WebUI to my instance of Jupyter. To force it to output the file/item I created, I often followed this two-step workflow, which first creates the code artifact I want and then asks it to save it to my instance of Jupyter.
![Code Execution Configuration](/images/tutorials/jupyter/jupyter-workflow.png)
## How are you using this feature?
Are you using the Code Execution feature and/or Jupyter? If so, please reach out. I'd love to hear how you're using it so I can continue adding examples to this tutorial of other awesome ways you can use this feature!