mirror of
https://github.com/open-webui/docs.git
synced 2026-03-27 13:28:37 +07:00
@@ -8,12 +8,20 @@ Skills are reusable, markdown-based instruction sets that you can attach to mode

## How Skills Work

Skills behave differently depending on how they are activated:

### User-Selected Skills ($ Mention)

When you mention a skill in chat with `$`, its **full content is injected directly** into the system prompt. The model has immediate access to the complete instructions without needing any extra tool calls.

### Model-Attached Skills

Skills bound to a model use a **lazy-loading** architecture to keep the context window efficient:

1. **Manifest injection** — Only a lightweight manifest containing the skill's **name** and **description** is injected into the system prompt.
2. **On-demand loading** — The model receives a `view_skill` builtin tool. When it determines it needs a skill's full instructions, it calls `view_skill` with the skill name to load the complete content.

This design means that even if many skills are attached to a model, only the ones the model actually needs are loaded into context.
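For model-attached skills, what initially lands in the prompt is just the manifest. A hypothetical manifest entry (illustrative only; the real injection format is internal to Open WebUI) might look like:

```json
{
  "name": "code-review-guidelines",
  "description": "House style and checklist for reviewing pull requests"
}
```

When the model decides it needs more, it calls the `view_skill` builtin tool with the skill's name; the full markdown content only enters the context at that point.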
## Creating a Skill
@@ -23,8 +31,8 @@ Navigate to **Workspace → Skills** and click **+ New Skill**.

| :--- | :--- |
| **Name** | A human-readable display name (e.g., "Code Review Guidelines"). |
| **Skill ID** | A unique slug identifier, auto-generated from the name (e.g., `code-review-guidelines`). Editable during creation, read-only afterwards. |
| **Description** | A short summary shown in the manifest. For model-attached skills, the model uses this to decide whether to load the full instructions. |
| **Content** | The full skill instructions in **Markdown**. For user-selected skills this is injected directly; for model-attached skills it is loaded on-demand via `view_skill`. |

Click **Save & Create** to finalize.
@@ -32,7 +40,7 @@ Click **Save & Create** to finalize.

### In Chat ($ Mention)

Type `$` in the chat input to open the skill picker. Select a skill, and it will be attached to the message as a **skill mention** (similar to `@` for models or `#` for knowledge). The skill's **full content** is injected directly into the conversation, giving the model immediate access to the complete instructions.

### Bound to a Model

@@ -43,7 +51,7 @@ You can permanently attach skills to a model so they are always available:

3. Check the skills you want this model to always have access to.
4. Click **Save**.

When a user chats with that model, the selected skills' manifests (name and description) are automatically injected, and the model can load the full content on-demand via `view_skill`.

## Import and Export
@@ -7,7 +7,7 @@ title: "Open Terminal"

:::info

This page is up-to-date with Open Terminal release version [v0.2.3](https://github.com/open-webui/open-terminal).

:::
@@ -72,6 +72,32 @@ open-terminal run --host 0.0.0.0 --port 8000 --api-key your-secret-key

Running bare metal gives the model shell access to your actual machine. Only use this for local development or testing.

:::

### MCP Server Mode

Open Terminal can also run as an [MCP (Model Context Protocol)](/features/extensibility/plugin/tools/openapi-servers/mcp) server, exposing all its endpoints as MCP tools. This requires an additional dependency:

```bash
pip install open-terminal[mcp]
```

Then start the MCP server:

```bash
# stdio transport (default — for local MCP clients)
open-terminal mcp

# streamable-http transport (for remote/networked MCP clients)
open-terminal mcp --transport streamable-http --host 0.0.0.0 --port 8000
```

| Option | Default | Description |
| :--- | :--- | :--- |
| `--transport` | `stdio` | Transport mode: `stdio` or `streamable-http` |
| `--host` | `0.0.0.0` | Bind address (streamable-http only) |
| `--port` | `8000` | Bind port (streamable-http only) |

Under the hood, this uses [FastMCP](https://github.com/jlowin/fastmcp) to automatically convert every FastAPI endpoint into an MCP tool — no manual tool definitions needed.
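For example, a stdio MCP client could be pointed at the server with a configuration along these lines (the `mcpServers` layout shown is the convention used by clients such as Claude Desktop; check your client's documentation for the exact format):

```json
{
  "mcpServers": {
    "open-terminal": {
      "command": "open-terminal",
      "args": ["mcp"]
    }
  }
}
```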
### Docker Compose (with Open WebUI)
```yaml title="docker-compose.yml"
@@ -154,8 +180,10 @@ The `/execute` endpoint description in the OpenAPI spec automatically includes l

**Query parameters:**

| Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `stream` | boolean | `false` | If `true`, stream output as JSONL instead of waiting for completion |
| `wait` | number | `null` | Seconds to wait for the command to finish before returning (0–300). If the command completes in time, output is included inline. `null` to return immediately. |
| `tail` | integer | `null` | Return only the last N output entries. Useful to keep responses bounded. |
@@ -205,6 +233,31 @@ curl -X POST "http://localhost:8000/execute?wait=5" \

}
```

:::info File-Backed Process Output
All background process output (stdout/stderr) is persisted to JSONL log files under `~/.open-terminal/logs/processes/`. This means output is never lost, even if the server restarts. The response includes `next_offset` for stateless incremental polling — pass it as the `offset` query parameter on subsequent status requests to get only new output. The `log_path` field shows the path to the raw JSONL log file.
:::
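The `next_offset` contract can be sketched as a pure function (a simplification for illustration; the field names `output` and `next_offset` come from the API, everything else here is hypothetical):

```python
# Sketch of the stateless incremental-polling contract, not the server's code.
def poll_once(log, offset, tail=None):
    """Return entries after `offset`, optionally limited to the last `tail`."""
    new = log[offset:]
    if tail is not None:
        new = new[-tail:]
    return {"output": new, "next_offset": len(log)}

# First poll returns everything so far; a later poll with the previous
# next_offset returns only the entries appended in between.
log = [{"type": "stdout", "data": "Building...\n"}]
first = poll_once(log, 0)
log.append({"type": "stdout", "data": "Done.\n"})
second = poll_once(log, first["next_offset"])
```

Because the offset is carried by the client, the server needs no per-client state: restarting the agent (or the server, thanks to the JSONL log files) does not lose track of which output has been seen.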
### Search File Contents

**`GET /files/search`**

Search for a text pattern across files in a directory. Returns structured matches with file paths, line numbers, and matching lines. Skips binary files automatically.

**Query parameters:**
| Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| `query` | string | (required) | Text or regex pattern to search for |
| `path` | string | `.` | Directory or file to search in |
| `regex` | boolean | `false` | Treat query as a regex pattern |
| `case_insensitive` | boolean | `false` | Perform case-insensitive matching |
| `include` | string[] | (all files) | Glob patterns to filter files (e.g. `*.py`). Files must match at least one pattern. |
| `match_per_line` | boolean | `true` | If true, return each matching line with line numbers. If false, return only matching filenames. |
| `max_results` | integer | `50` | Maximum number of matches to return (1–500) |
```bash
curl "http://localhost:8000/files/search?query=TODO&include=*.py&case_insensitive=true" \
  -H "Authorization: Bearer <api-key>"
```
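How the documented parameters interact can be sketched in Python against an in-memory set of files (this mirrors the documented semantics; it is not the server's actual implementation):

```python
import fnmatch
import re

def search_files(files, query, include=None, match_per_line=True,
                 case_insensitive=False, use_regex=False):
    """Illustrative model of /files/search over a {path: text} dict."""
    flags = re.IGNORECASE if case_insensitive else 0
    pattern = re.compile(query if use_regex else re.escape(query), flags)
    matches = []
    for path, text in files.items():
        # `include`: a file must match at least one glob pattern.
        if include and not any(fnmatch.fnmatch(path, g) for g in include):
            continue
        if match_per_line:
            # Default mode: one structured match per matching line.
            for lineno, line in enumerate(text.splitlines(), start=1):
                if pattern.search(line):
                    matches.append({"file": path, "line": lineno, "content": line})
        elif pattern.search(text):
            # match_per_line=false: only report matching filenames.
            matches.append({"file": path})
    return matches

files = {"main.py": "x = 1\n# TODO: refactor\n", "notes.txt": "TODO later\n"}
out = search_files(files, "todo", include=["*.py"], case_insensitive=True)
```

Here `notes.txt` is skipped by the `*.py` include filter, and the case-insensitive query still finds `TODO` in `main.py`.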
#### Get Command Status

**`GET /execute/{process_id}/status`**
@@ -427,9 +480,10 @@ curl "http://localhost:8000/files/search?query=TODO&path=/home/user/project&incl

```json
{
  "query": "TODO",
  "path": "/root",
  "matches": [
    {"file": "/root/app.py", "line": 42, "content": "# TODO: refactor this"},
    {"file": "/root/utils.py", "line": 7, "content": "# TODO: add tests"}
  ],
  "truncated": false
}
```
@@ -492,6 +546,38 @@ curl "http://localhost:8000/files/download/link?path=/home/user/output.csv" \

{"url": "http://localhost:8000/files/download/a1b2c3d4..."}
```
### Process Status (Background)

**`GET /processes/{process_id}/status`**

Poll the output of a running or finished background process. Uses offset-based pagination so agents can retrieve only new output since the last poll.

**Query parameters:**

| Parameter | Default | Description |
| :--- | :--- | :--- |
| `wait` | `0` | Seconds to wait for the process to finish before returning. |
| `offset` | `0` | Number of output entries to skip. Use `next_offset` from the previous response. |
| `tail` | (all) | Return only the last N output entries. Useful to limit response size. |
```bash
curl "http://localhost:8000/processes/a1b2c3d4/status?offset=0&tail=20" \
  -H "Authorization: Bearer <api-key>"
```
```json
{
  "id": "a1b2c3d4",
  "command": "make build",
  "status": "running",
  "exit_code": null,
  "output": [{"type": "stdout", "data": "Building...\n"}],
  "truncated": false,
  "next_offset": 1,
  "log_path": "/root/.open-terminal/logs/processes/a1b2c3d4.jsonl"
}
```
### Health Check

**`GET /health`**
@@ -176,7 +176,7 @@ These models excel at multi-step reasoning, proper JSON formatting, and autonomo

| Tool | Purpose |
|------|---------|
| **Search & Web** | *Requires `ENABLE_WEB_SEARCH` enabled AND per-chat "Web Search" toggle enabled.* |
| `search_web` | Search the public web for information. Best for current events, external references, or topics not covered in internal documents. |
| `fetch_url` | Visits a URL and extracts text content via the Web Loader. |
| **Knowledge Base** | *Requires per-model "Knowledge Base" category enabled (default: on).* |
@@ -186,9 +186,11 @@ These models excel at multi-step reasoning, proper JSON formatting, and autonomo

| `query_knowledge_files` | Search *file contents* inside KBs using vector search. **This is your main tool for finding information.** When a KB is attached to the model, searches are automatically scoped to that KB. |
| `search_knowledge_files` | Search files across accessible knowledge bases by filename (not content). |
| `view_knowledge_file` | Get the full content of a file from a knowledge base. |
| **Image Gen** | *Requires image generation enabled (per-tool) AND per-chat "Image Generation" toggle enabled.* |
| `generate_image` | Generates a new image based on a prompt. Requires `ENABLE_IMAGE_GENERATION`. |
| `edit_image` | Edits existing images based on a prompt and image URLs. Requires `ENABLE_IMAGE_EDIT`. |
| **Code Interpreter** | *Requires `ENABLE_CODE_INTERPRETER` enabled (default: on) AND per-chat "Code Interpreter" toggle enabled.* |
| `execute_code` | Executes code in a sandboxed environment and returns the output. |
| **Memory** | *Requires Memory feature enabled AND per-model "Memory" category enabled (default: on).* |
| `search_memories` | Searches the user's personal memory/personalization bank. |
| `add_memory` | Stores a new fact in the user's personalization memory. |
@@ -229,6 +231,8 @@ These models excel at multi-step reasoning, proper JSON formatting, and autonomo

| **Image Gen** | | |
| `generate_image` | `prompt` (required) | `{status, message, images}` — auto-displayed |
| `edit_image` | `prompt` (required), `image_urls` (required) | `{status, message, images}` — auto-displayed |
| **Code Interpreter** | | |
| `execute_code` | `language` (required), `code` (required) | `{output, status}` |
| **Memory** | | |
| `search_memories` | `query` (required), `count` (default: 5) | Array of `{id, date, content}` |
| `add_memory` | `content` (required) | `{status: "success", id}` |
@@ -317,6 +321,9 @@ When the **Builtin Tools** capability is enabled, you can further control which

| **Chat History** | `search_chats`, `view_chat` | Search and view user chat history |
| **Notes** | `search_notes`, `view_note`, `write_note`, `replace_note_content` | Search, view, and manage user notes |
| **Knowledge Base** | `list_knowledge_bases`, `search_knowledge_bases`, `query_knowledge_bases`, `search_knowledge_files`, `query_knowledge_files`, `view_knowledge_file` | Browse and query knowledge bases |
| **Web Search** | `search_web`, `fetch_url` | Search the web and fetch URL content |
| **Image Generation** | `generate_image`, `edit_image` | Generate and edit images |
| **Code Interpreter** | `execute_code` | Execute code in a sandboxed environment |
| **Channels** | `search_channels`, `search_channel_messages`, `view_channel_message`, `view_channel_thread` | Search channels and channel messages |
| **Skills** | `view_skill` | Load skill instructions on-demand from the manifest |
@@ -339,6 +346,20 @@ These per-category toggles only appear when the main **Builtin Tools** capabilit

Enabling a per-model category toggle does **not** override global feature flags. For example, if `ENABLE_NOTES` is disabled globally (Admin Panel), Notes tools will not be available even if the "Notes" category is enabled for the model. The per-model toggles only allow you to *further restrict* what's already available—they cannot enable features that are disabled at the global level.
:::

:::tip Per-Chat Feature Toggles (Web Search, Image Generation, Code Interpreter)
**Web Search**, **Image Generation**, and **Code Interpreter** built-in tools have an additional layer of control: the **per-chat feature toggle** in the chat input bar. For these tools to be injected in Native Mode, **all three conditions** must be met:

1. **Global config enabled** — the feature is turned on in Admin Panel (e.g., `ENABLE_WEB_SEARCH`)
2. **Model capability enabled** — the model has the capability checked in Workspace > Models (e.g., "Web Search")
3. **Per-chat toggle enabled** — the user has activated the feature for this specific chat via the chat input bar toggles

This means users can disable web search (or image generation, or code interpreter) on a per-conversation basis, even if it's enabled globally and on the model. This is useful for chats where information must stay offline or where you want to prevent unintended tool usage.
:::

:::tip Full Agentic Experience
For the best out-of-the-box agentic experience, administrators can enable **Web Search**, **Image Generation**, and **Code Interpreter** as default features for a model. In the **Admin Panel > Settings > Models**, find the **Model Specific Settings** for your target model and toggle these three on under **Default Features**. This ensures they are active in every new chat by default, so users get the full tool-calling experience without manually enabling each toggle. Users can still turn them off per-chat if needed.
:::

:::tip Builtin Tools vs File Context
**Builtin Tools** controls whether the model gets *tools* for autonomous retrieval. It does **not** control whether file content is injected via RAG—that's controlled by the separate **File Context** capability.
@@ -7,13 +7,13 @@ title: "Open WebUI Integration"

Open WebUI v0.6+ supports seamless integration with external tools via OpenAPI tool servers — meaning you can easily extend your LLM workflows using custom or community-powered tool servers 🧰.

In this guide, you'll learn how to launch an OpenAPI-compatible tool server and connect it to Open WebUI through the intuitive user interface. Let's get started! 🚀

---

## Step 1: Launch an OpenAPI Tool Server

To begin, you'll need to start one of the reference tool servers available in the [openapi-servers repo](https://github.com/open-webui/openapi-servers). For quick testing, we'll use the time tool server as an example.

🛠️ Example: Starting the `time` server locally
@@ -42,7 +42,7 @@ Once running, this will host a local OpenAPI server at http://localhost:8000, wh

Next, connect your running tool server to Open WebUI:

1. Open WebUI in your browser.
2. Open ⚙️ **Settings**.
3. Click on ➕ **Tools** to add a new tool server.
4. Enter the URL where your OpenAPI tool server is running (e.g., http://localhost:8000).
5. Click "Save".
@@ -65,7 +65,7 @@ Admins can manage shared tool servers available to all or selected users across

- Go to 🛠️ **Admin Settings > Tools**.
- Add the tool server URL just as you would in user settings.
- These tools are treated similarly to Open WebUI's built-in tools.

#### Main Difference: Where Are Requests Made From?
@@ -101,7 +101,7 @@ If you're running multiple tools through mcpo using a config file, take note:

🧩 Each tool is mounted under its own unique path!

For example, if you're using memory and time tools simultaneously through mcpo, they'll each be available at a distinct route:

- http://localhost:8000/time
- http://localhost:8000/memory
@@ -140,20 +140,20 @@ Clicking this icon opens a popup where you can:

- See which tools are available and which server they're provided by
- Debug or disconnect any tool if needed

🔍 Here's what the tool information modal looks like:



### 🛠️ Global Tool Servers Look Different — And Are Hidden by Default!

If you've connected a Global Tool Server (i.e., one that's admin-configured), it will not appear automatically in the input area like user tool servers do.

Instead:

- Global tools are hidden by default and must be explicitly activated per user.
- To enable them, you'll need to click on the ➕ button in the message input area (bottom left of the chat box), and manually toggle on the specific global tool(s) you want to use.

Here's what that looks like:


||||
@@ -181,7 +181,7 @@ Want to enable ReACT-style (Reasoning + Acting) native function calls directly i
|
||||
✳️ How to enable native function calling:
|
||||
|
||||
1. Open the chat window.
|
||||
2. Go to ⚙️ **Chat Controls > Advanced Params**.
|
||||
2. Go to ⚙️ **Chat Controls > Advanced Params**.
|
||||
3. Change the **Function Calling** parameter from `Default` to `Native`.
|
||||
|
||||

|
||||
@@ -204,8 +204,37 @@ You can run any of these in the same way and connect them to Open WebUI by repea

## Troubleshooting & Tips 🧩

### Connection fails immediately after adding the URL

**Check the protocol (HTTP vs HTTPS).** The "Add Tool Server" modal may pre-fill `https://` in the URL field. Most local tool servers don't use TLS, so you need plain `http://`. Change the URL to `http://localhost:8000` (or whichever port your server uses).

**Check the port number.** The default port varies by server. For example, uvicorn defaults to `8000`, while Open Terminal defaults to `8888`. Make sure the port in your URL matches the port your tool server is actually listening on.

### Connection fails even though the server is running

**Understand which machine "localhost" refers to.** This is the most common connectivity issue and depends on which type of tool server you are registering:

- **User Tool Servers** — Requests come from **your browser**. `localhost` means *the machine running your browser*. This usually works for local development setups, but there is a catch (see below).
- **Global Tool Servers** — Requests come from the **Open WebUI backend**. `localhost` means the *backend server*, not your personal machine. If the backend runs in Docker, `localhost` inside the container does not reach your host machine — use `host.docker.internal` or the machine's actual IP instead.
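For a Dockerized backend, a sketch of the Compose override looks like this (assumes Docker's `host-gateway` support; the service name is illustrative and must match your own compose file):

```yaml
services:
  open-webui:
    extra_hosts:
      # Makes the host machine reachable from inside the container.
      - "host.docker.internal:host-gateway"
```

With that in place, register the Global Tool Server as `http://host.docker.internal:8000` instead of `http://localhost:8000`.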
:::warning User Tool Servers: Browser IP matters

Even for User Tool Servers, `localhost` only works if you're accessing Open WebUI at `http://localhost:...` in your browser.

If you're accessing Open WebUI at a different IP address — for example, connecting from another device on your LAN via `http://10.0.0.5:3000`, or using the network URL that `npm run dev` outputs — then your browser is *not* on `localhost` relative to the tool server. The browser will try to reach `localhost` on the device it's running on, not the machine hosting the tool server.

**Fix:** Replace `localhost` with the actual IP of the machine where your tool server is running (e.g., `http://10.0.0.5:8000`), and ensure the tool server is binding to `0.0.0.0` (not just `127.0.0.1`).

:::

### CORS errors in the browser console

If you see CORS errors, your tool server needs to allow requests from the Open WebUI origin. See the [FAQ entry on CORS](/features/extensibility/plugin/tools/openapi-servers/faq) for a FastAPI example.
### General tips

- 🔒 If you're using remote servers, check firewalls and HTTPS configs.
- 📝 To make servers persist across reboots, consider deploying them in Docker or with system services.
- 🔍 When in doubt, try opening the tool server URL (e.g., `http://localhost:8000/docs`) directly in the same browser you use for Open WebUI — if it loads, the browser can reach it.

Need help? Visit the 👉 [Discussions page](https://github.com/open-webui/openapi-servers/discussions) or [open an issue](https://github.com/open-webui/openapi-servers/issues).
@@ -23,7 +23,7 @@ docker pull ghcr.io/open-webui/open-webui:main-slim

You can also pull a specific Open WebUI release version directly by using a versioned image tag. This is recommended for production environments to ensure stable and reproducible deployments.

```bash
docker pull ghcr.io/open-webui/open-webui:v0.8.2
```

## Step 2: Run the Container
@@ -99,9 +99,9 @@ ghcr.io/open-webui/open-webui:<RELEASE_VERSION>-<TYPE>

Examples (pinned versions for illustration purposes only):
```
ghcr.io/open-webui/open-webui:v0.8.2
ghcr.io/open-webui/open-webui:v0.8.2-ollama
ghcr.io/open-webui/open-webui:v0.8.2-cuda
```

### Using the Dev Branch 🌙
@@ -12,7 +12,7 @@ As new variables are introduced, this page will be updated to reflect the growin

:::info

This page is up-to-date with Open WebUI release version [v0.8.2](https://github.com/open-webui/open-webui/releases/tag/v0.8.2), but it remains a work in progress: more accurate descriptions, the available options for each variable, and default values are still being added.

:::